Enterprise CIOs have become fascinated by Big Tech's claims about autonomous AI systems that can solve any problem.
Why it matters: The complexity of large models often leads to errors, hallucinations, and rising costs.
- All the major model developers—OpenAI, Microsoft, Google, Amazon, Anthropic, Perplexity, and others—promote the idea that larger models are inherently more effective.
- In contrast, smaller models may offer greater controllability and reliability.
Between the lines: Enterprises often opt for the path of least resistance.
- If a large-model maker promises to solve all their problems, they want to believe it.
- However, it is often the smaller and more focused strategies that deliver better results.
Zoom in: The main issue is that companies often consult vendors before examining their internal processes for solutions.
- Even Microsoft has acknowledged that smaller models can outperform larger ones.
- However, one of its AI executives stated that smaller models are effective for enterprises only if the CIO's team has dedicated time and effort to developing a clear AI strategy.
- For IT leaders who have not yet defined their AI goals, there are still valid reasons to consider the largest models.
Context: We have found that the first step toward a successful AI initiative is clear communication and practical training.
- Before searching for potential use cases, it's essential to ensure that the entire company understands AI.
- Additionally, open lines of communication should be established, allowing staff to identify the problems that AI can address.
A better way: Start with small projects where the consequences of failure can be educational rather than catastrophic.
- It's crucial to use feedback from these "fail-fast" experiences to learn how to achieve larger goals.
- This approach will help the team avoid unthinkingly following a vendor's advice and instead focus on what is best for the company.
By the numbers: A recent survey revealed that three-quarters of employers expect their staff to use AI in some capacity.
- Roughly half of those employees are expected to use AI officially, while another quarter engage with it informally.
- However, many employees lack proper training in AI literacy or access to high-quality enterprise systems.
- As a result, more than 22% of employees report using AI in situations where they are unsure it is appropriate.
- A report from KPMG indicates that two-thirds of these workers accept AI-generated outputs without validating them.
In contrast: The anxiety surrounding AI use produces some paradoxical outcomes.
- For instance, some individuals who feel anxious about AI pretend to use it, while others use it but pretend they do not.
- According to Slack's Workforce Index, released last October, a survey of over 17,000 global desk workers revealed that 48% felt uncomfortable admitting to their managers that they use AI at work.
- Many people expressed that using AI felt like cheating.
Go deeper: Most problems with enterprise AI trace back to people and data. A comprehensive training program, like those offered by Todd Moses & Co., can help eliminate most of them.