When everyone digs for gold, sell shovels. That old maxim captures the logic of today’s artificial intelligence boom: the profits are flowing to the suppliers of chips and infrastructure, not to the loss-making labs building the models.
Nvidia’s record earnings this past year, which pushed the chip maker’s market value past $5 trillion, show how profitable that trade has become.
But what if the world is selling the wrong kind of shovel? Analysts expect the global build-out of AI infrastructure – from data centres and chips to power and cooling – to cost several trillion dollars over the next few years. Much of that hardware is being built to run large language models (LLMs) such as OpenAI’s ChatGPT or Anthropic’s Claude.
But here’s the rub – if development moves towards smaller or more efficient AI, or if new technology reduces the demand for high-end chips or the large amounts of energy they require, much of that heavy infrastructure could end up underused, even uneconomic. The world is pouring trillions of dollars into one version of AI and risks hard-coding that choice into the global economy. It is, in effect, putting all of its eggs in one basket.
Evidence of that risk is emerging. Training the most advanced models is now astronomically expensive, yet recent generations have delivered smaller improvements at ever greater cost. GPT-5, for instance, required hundreds of thousands of Nvidia chips but delivered only modest performance gains.
If the returns on scale keep flattening, the world risks locking itself into an AI system that may never earn back the cost of the hardware it depends on. Yet funding and research are increasingly concentrated on LLMs, nearly all of which are built on the same underlying transformer architecture.
Smaller models and alternative technologies attract far less. So just when the field could benefit from diversity, it is narrowing instead. Research in areas such as liquid neural networks or neuro-symbolic AI may slow as funding and talent continue to flow towards transformer models.
History could be repeating itself. In the late 19th century, American railroads laid far more track than traffic could ever fill. It was often more profitable to build new lines than to run them – a boom that ultimately ended in bust.
A century later, telecoms groups spent billions laying fibre-optic cables to meet forecasts for explosive internet growth – forecasts that proved far too optimistic. Much of that capacity sat idle for years, leading to bankruptcies.
Is the AI boom another classic case of speculative overbuilding? Some cracks are visible. When Google last month launched Gemini 3 – a chatbot widely seen as surpassing OpenAI’s – it had trained the model on its own tensor processing units rather than Nvidia’s chips, sending Nvidia’s shares sharply lower.
It underlined how quickly assumptions behind AI infrastructure can change. Earlier this year, China’s DeepSeek achieved cutting-edge performance with its R1 model without access to Nvidia’s most advanced, power-hungry processors.
If other companies move to different hardware or more efficient models, much of today’s AI infrastructure could prove uneconomic. Data centres, chip plants and power systems built for current LLMs would be hard to repurpose. The trillions invested might not disappear, but the returns could.
The risk is not just overbuilding, but letting too much power and money concentrate in too few companies. Since late 2022, the AI rally has pushed tech valuations to record highs – a run the European Central Bank says is now being fuelled as much by “FOMO” as by reality.
Those gains are highly concentrated. Eight of the 10 biggest stocks in the S&P 500 are tech companies, together worth more than a third of the entire index. That leaves investors exposed to sharp losses if the market sours.
The risk is systemic. So much capital and market value are tied to a handful of companies – and to one model of AI – that any shock would reverberate through the global economy.
That dependence is clearest at OpenAI. The ChatGPT maker has lined up deals that could see it spend more than $1 trillion on computing power, financed largely by other big tech groups. Those partnerships are creating an insidious web of financial ties that risks locking the AI industry into one set of technologies and suppliers.
At some point, all this eye-popping spending will have to prove its worth. So far, the payoff remains uncertain. MIT research suggests 95 per cent of corporate generative AI pilots deliver no measurable returns, yet money keeps pouring in; no company wants to be left out of what could be the next industrial revolution. But few businesses have shown lasting productivity gains from generative AI, and most are still experimenting even as development costs climb.
The world needs to hedge its AI bets. Governments, investors and companies should avoid tying their AI strategies to the same few suppliers or technologies. The priority should be adaptable systems that can evolve as AI technology changes. Organisations may consider modular data centres that can be repurposed if demand for AI infrastructure declines. At the same time, it would be prudent to invest in other core technologies with long-term potential.
Investors, meanwhile, should look beyond the Magnificent Seven – Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia and Tesla – and back smaller research labs developing different approaches. For one thing, open-source AI deserves far more attention. By sharing code and, in some cases, training data, these projects help lower development costs, speed up innovation and widen access to generative tools.
China is backing open models that anyone can use or adapt, while Europe’s Mistral is a leader in open-weight systems. Yet this approach still needs far stronger support from governments, investors and the tech industry. Mistral, valued at nearly €12 billion ($14 billion), remains Europe’s best hope of challenging US rivals, but it still operates at a fraction of their scale.
For now, most of the money is still chasing the same few companies and the same type of infrastructure. In the rush to sell shovels for the AI gold rush, we may be building the wrong kind – and paving the way for another costly correction.
Amit Joshi is professor of AI, analytics and marketing strategy at IMD