For the past couple of years, the AI race has been framed as a contest between large language model builders, with OpenAI commanding a huge early lead over rivals like Google and Anthropic. But that framing is starting to look outdated.
That shift became harder to ignore with the latest round of model releases. OpenAI’s long-awaited GPT-5 was supposed to mark another leap forward. Instead, it landed as a modest upgrade, reinforcing the sense that progress in LLMs is becoming incremental.
In boardrooms and executive teams, the conversation has already moved on. The question is no longer which model is best, but what is any of this actually doing for the business?
The benchmarks most often used to rank these models focus heavily on puzzle-solving and abstract reasoning rather than on whether the systems are reliable, cheap, secure or easy to roll out inside large organisations – the qualities companies care most about.
As the technical differences between leading models narrow, the battle is shifting elsewhere. The AI race is now about who can drive adoption and get these tools into everyday use.
Google, OpenAI and Anthropic now appear far closer in capability than many expected two years ago, with no company clearly ahead in a way that settles the AI race. What increasingly separates them is not model performance or consumer buzz, but their ability to push AI into large organisations at scale.
That matters because benchmarks do not generate revenue; adoption does. Big tech groups are borrowing billions to build AI infrastructure, but those investments only make sense if companies actually use these tools. If adoption stalls at the pilot stage, the economics quickly unravel, which is partly why investors are starting to ask harder questions.
AI front-runners
By that measure, Microsoft starts from a position of strength, with its tools already embedded across large companies through products such as Office, Teams and GitHub. But Microsoft’s advantage is distribution, not ownership. It does not control the underlying models powering its AI tools, leaving it dependent on OpenAI – a relationship that looks riskier now that the ChatGPT maker is supplying its models to Apple for use in its iPhones.
Little wonder, then, that Microsoft has reportedly begun paying to access AI models from Anthropic.
Google, meanwhile, looks well placed thanks to its control over the “full stack” – from its own models and productivity software to cloud infrastructure and custom chips. The recent release of Gemini 3, widely seen as leapfrogging OpenAI’s GPT-5, underlines that advantage. Unlike Microsoft, Google controls the model, the platform and the infrastructure it runs on. That is a powerful position in a phase of the race where execution matters.
For other players like Anthropic, the challenge is not model quality but scale. Claude has won praise from corporate customers, particularly for coding, but without the consumer reach or distribution of its larger rivals, it will be harder to turn that technical strength into widespread adoption of its tools.
That problem is not unique to Anthropic. Expectations for AI were enormous, far ahead of reality. MIT research suggests that around 95 per cent of companies have yet to see returns large enough to show up in the numbers, despite a wave of pilot projects. That gap between hype and pay-off means AI spending is now under much closer scrutiny.
Within many organisations, individuals do report productivity gains, with AI helping them work faster and reduce time spent on routine tasks. But most firms remain stuck there, using AI to do the same work faster rather than to do better work.
Little attention has been paid to improving quality – or to stemming the spread of generic “AI slop” – and even less to using these tools for higher-value work, such as developing fresh products or finding new ways to create value for customers. That is where AI would begin to influence decisions, not just speed up tasks.
FOMO vs FOMU
Even in sectors often cited as early adopters – such as consulting and banking – AI still sits on top of existing workflows. AI does not pay off unless companies change how work is done.
Many organisations are caught between two opposing forces: fear of missing out on AI’s potential (FOMO), and fear of messing up (FOMU). With large sums already committed, both pressures are intensifying. The result is “pilot paralysis”: lots of experiments, but little that scales. The priority now should be to focus on a small number of use cases that can genuinely be used across the business.
If the next phase of the AI race is decided in boardrooms rather than research labs, leaders should focus less on rolling out tools and more on getting people to use them. That means embedding AI into day-to-day workflows and training employees to apply the tools to their own work, rather than assuming access alone will change behaviour.
For now, companies are taking very different approaches. Some are explicit about how employees should use AI; others have said little. In the absence of clear rules, staff are left guessing. Some avoid AI altogether, while others turn to personal ChatGPT or Claude accounts to handle company emails, documents and data, increasing the risk that sensitive information ends up outside the organisation.
As the AI race shifts from model quality to everyday use inside companies, unclear rules are already holding some firms back. With the technology converging, the winners will be the organisations that actually embed AI in daily work.
Michael Wade is the Tonomus Professor of Strategy and Digital at the IMD Business School