By Sharon Gai

Why AI now feels like the cloud back in the day





“It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way--in short, the period was so far like the present period that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.”

 

Sounds like 2024, doesn’t it?

 

But it also sounds like 2010.

 

The year was 2010. IT departments were still buying servers and switches. Data centers were in-house. Companies big and small were hunting for data center space or colocation. Then this big announcement comes. It’s… drum roll… the cloud.

 

The cloud?

 

That thing up there? I live in California, we don’t even have any.

 

Okay, I’ll stop with the bad jokes.

 

Yes, the cloud changed the IT industry and business as we know it. People no longer wanted to own their own hardware. Business models changed too. Whereas before, software was sold to you at a couple hundred dollars a license, the SaaS era brought with it a “rent” model.

 

Similarities Between Early Cloud Computing and Modern-Day AI

 

What’s happening in AI reminds me of the years of the cloud migration. It was this big, inevitable thing, and there were the naysayers. Today there’s a familiar chorus of skepticism: concerns over migration costs, the redundancy of existing systems, and the implications for staffing.

 

Just look at all the similarities.

| Concerns in Early Cloud Computing | Comparable Concerns in Current AI |
| --- | --- |
| Security Concerns | Data Privacy and Security Risks |
| Loss of Control | Ethical and Decision-Making Control |
| Reliability and Downtime Worries | Reliability and Unpredictability of AI Systems |
| Data Sovereignty and Compliance Issues | Compliance with AI Ethics and Regulations |
| Performance Concerns | AI Performance and Accuracy Issues |
| Cost Uncertainty | AI Implementation and Maintenance Costs |
| Limited Understanding and Expertise | Lack of AI Literacy and Skilled Workforce |
| Cultural and Organizational Resistance | Resistance to AI Adoption due to Job Displacement Fears |

 

But the crux here is that eventually we did it. It took about two years for us to familiarize ourselves with the cloud, and another five for it to truly take off. And now, few new companies have “a physical server” as one of the costs on their balance sheet.

 

The cloud did its magic by slowly creeping in. It started in IT and then spread everywhere else. Sales and marketing departments leveraged CRM systems hosted in the cloud, while Human Resources adopted online talent management platforms. Finance teams shifted to cloud-based accounting software, and even the most traditionally 'physical' departments like manufacturing and logistics began managing supply chains and inventories through cloud services.

 

I predict that the same will happen in AI, except this time, it doesn’t start in IT. Instead, it’s coming at IT from all sorts of different angles.

 

Here are some ChatGPT fun facts:

·      ChatGPT currently has 100+ million global users, and the website sees nearly 1.5 billion visitors per month.

·      ChatGPT was free until February 2023, when OpenAI released ChatGPT Plus at $20 per month.

·      The free tier of ChatGPT runs on GPT-3.5, while Plus subscribers have had access to GPT-4 since March 2023.

·      The USA has the highest share of ChatGPT users (14.82%), followed by India (8.18%).

·      A quarter of companies report having saved roughly $50,000 to $70,000 using ChatGPT.

·      55.99% of ChatGPT users are male, while 44.01% are female.

·      ChatGPT has a bounce rate of 36.36%. Each user spends around 7 minutes and 36 seconds on the website and views 4.17 pages per visit.

 

The Challenges of AI Tool Sprawl

 

Today, IT is the department playing catch-up more than ever, because people are now experimenting on their own. An even bigger shift is that they’re experimenting at home and bringing it into work. Many of the AI companies I’ve spoken to recently started in B2C but are now trying to figure out their go-to-market strategy in B2B.

 

One of the biggest challenges for IT departments is simply keeping track of all the different AI tools that are being used within an organization. With employees often adopting new tools without IT's knowledge or approval ("shadow IT"), it can be hard to get a handle on the security risks and compliance implications. This means we’ve transitioned from Shadow IT to Shadow AI.

 

From Shadow IT to Shadow AI

 

Shadow AI, however, is worse than Shadow IT. In the shadow IT world, it was mostly the testing of certain SaaS tools: did this PM software have a better UX than that one? Did this CRM give me better results than the other? With Shadow AI, employees are feeding internal company data into external models that train on top of it and then surface it to other users, possibly competitors of the company that supplied the information.

 

Samsung banned employees’ use of popular generative AI tools like ChatGPT after discovering that staff had uploaded sensitive code to the platform. The list goes on from there.

Thus far, 14 large companies have restricted the use of ChatGPT on company computers in some form. Here are a few:

 

·      Accenture: Restricts usage for client work due to data privacy concerns.

·      Amazon: Limits employee access to specific areas requiring high security.

·      Goldman Sachs: Implemented usage guidelines and requires approval for specific applications.

·      Verizon Communications: Has internal restrictions due to potential data risks.

·      Apple: Reportedly discourages employee use, though not a complete ban.

 

Most companies are scrambling to draft policies regarding the use of tools like ChatGPT. Half of the human resources leaders polled by the consulting firm Gartner said they’re in the process of formulating guidance on employees’ use of generative AI products.

 

Even if, as an employee, you get the go-ahead to purchase a set of seats for an AI tool, it can take months before the request clears every department. So it’s much easier for employees to just use the tool in secret.

 

How IT Departments Are Responding

 

So, how are IT departments coping with the AI tool explosion? Here are a few of the strategies they are using:

 

·      Developing AI governance policies: Establishing clear guidelines for how AI tools can be used within the organization is essential for mitigating risks and ensuring compliance. These policies should address issues such as data privacy, security, and bias.

·      Creating a central repository for AI tools: This will help IT track which tools are being used, who is using them, and for what purposes. It can also make it easier to manage and update the tools (a minimal sketch of such a registry follows this list).

·      Investing in training and education: IT staff need to be trained on how to manage and secure AI tools. Employees also need to be educated on the responsible use of AI, so they understand the potential risks and limitations.
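To make the central-repository idea concrete, here is a minimal sketch in Python of what such a registry could look like. Everything in it is hypothetical: the AIToolRecord fields, the AIToolRegistry class, and the example entries (tools, vendors, data classes) are illustrative assumptions, not a real product, vendor list, or company policy.

```python
# Hypothetical sketch of a central AI tool registry (Python 3.10+).
# All names, fields, and entries are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolRecord:
    name: str                                   # e.g. "ChatGPT Plus"
    vendor: str                                 # who provides the tool
    owner_department: str                       # which team requested it
    data_classes_allowed: list[str] = field(default_factory=list)  # e.g. ["public", "internal"]
    approved: bool = False                      # has it cleared security/legal review?
    review_date: date | None = None             # when it was last reviewed

class AIToolRegistry:
    """Keeps one record per tool so IT can answer: who uses what, with which data?"""

    def __init__(self) -> None:
        self._tools: dict[str, AIToolRecord] = {}

    def register(self, record: AIToolRecord) -> None:
        # Add or update the record for this tool.
        self._tools[record.name] = record

    def approved_for(self, data_class: str) -> list[str]:
        # List tools that have been approved for a given data classification.
        return [
            t.name for t in self._tools.values()
            if t.approved and data_class in t.data_classes_allowed
        ]

# Example usage with made-up entries:
registry = AIToolRegistry()
registry.register(AIToolRecord(
    name="ChatGPT Plus", vendor="OpenAI", owner_department="Marketing",
    data_classes_allowed=["public"], approved=True, review_date=date(2024, 1, 15),
))
registry.register(AIToolRecord(
    name="InternalCopilot", vendor="Acme AI", owner_department="Engineering",
    data_classes_allowed=["public", "internal"], approved=False,
))
print(registry.approved_for("public"))    # -> ['ChatGPT Plus']
print(registry.approved_for("internal"))  # -> []
```

In practice an inventory like this would more likely live in an existing asset-management or SaaS-management platform than in standalone code, but the fields are the point: owner, vendor, allowed data classes, and review status are what let IT answer the Shadow AI questions above.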

 

When IT Becomes HR

 

In the future, instead of provisioning a software seat to a user, the IT department might be provisioning a whole digital intern. IT is going to act more and more like HR, and business leaders will decide between increasing headcount and buying an external “AI bot”. Companies like Artisan are already replacing a whole role with a bot.




 

Here are some implications and considerations of this transition:


  1. IT and HR Collaboration: IT departments would likely collaborate more closely with HR to understand the specific needs of each department, ensuring that the AI bots are tailored to meet those needs. This collaboration could lead to a more holistic approach to workforce management, blending technical and human resource skills.

  2. Decision Making: Business leaders would need to weigh the benefits of hiring additional human employees against deploying AI bots. Factors like the complexity of tasks, the need for human judgment, and the cost-effectiveness of AI solutions would play a significant role in these decisions.

  3. Training and Development: Just as humans require onboarding and training, AI bots would also need to be 'trained' or programmed to perform specific tasks. IT departments would likely take on a role similar to that of a trainer or mentor, continually updating and refining the bots' capabilities.

  4. Ethical and Legal Considerations: The use of AI bots in place of human workers raises ethical questions, particularly regarding job displacement. Additionally, there would be legal considerations related to liability, data privacy, and the extent of AI autonomy in decision-making processes.

  5. Cultural Shifts: Integrating AI bots into the workforce would necessitate a cultural shift within organizations. Employees would need to adapt to working alongside AI, which could involve changes in workflow, communication, and team dynamics.

  6. Technological Advancement and Maintenance: Keeping up with the rapid pace of technological advancement would be crucial. IT departments would be responsible for ensuring that AI bots are up-to-date, secure, and functioning optimally.

  7. Performance Measurement: Developing metrics to evaluate the performance of AI bots, in comparison to human workers, would be essential. These metrics could help in making informed decisions about the roles and tasks best suited for AI versus human employees.


This shift represents a fascinating intersection of technology, human resources, and business strategy, and it will be interesting to see how it unfolds in the coming years.


