What happens if Silicon Valley’s AI investment bubble bursts?

US tech giants are burning through unprecedented amounts of capital to develop artificial superintelligence, but authoritarian regimes around the world could keep the pursuit of a techno-utopian future alive when the bubble subsides

The drumbeat of news around advances in artificial intelligence (AI) has become impossible to ignore. And yet it still doesn’t capture the colossal resources necessary to sustain this trend. Silicon Valley’s pursuit of artificial general intelligence (AGI) especially demands the hyper-scaling of everything – training data, compute power, semiconductor manufacturing, energy grids and more.

But there’s a glaring problem: America’s tech giants are struggling to truly monetise their inventions.

Silicon Valley’s vanguards are instead leaning on messianic visions around AI’s unrealised potential to underwrite their technological arms race. Complicating matters further are nascent trade wars, surging protectionism, finite global resources and knotted supply chains. The Trump presidency is busy gutting key aspects of America’s innovation ecosystem and imperilling US companies’ access to global markets. And upstart Chinese firms are churning out free open-source models rivalling the latest offerings from OpenAI, Google, Microsoft and Anthropic.

Make no mistake: America’s AI industry is still benefitting from an investment frenzy. Startups and established companies raked in some $227bn last year in combined venture capital and private funding. As publicly listed companies, behemoths Amazon, Google, Meta and Microsoft can also raise capital by selling shares to investors.

However, experts warn investors may eventually grow disenchanted with tech firms’ drawn-out path to profitability. An analysis by Goldman Sachs argues the largest US tech companies could spend $1tn on AI in the near future with little to show for it. The European Central Bank then warned in November 2024, in its twice-yearly review of global economic risks, that AI stocks could crash if investors’ earnings expectations weren’t met soon.

America’s tech giants briefly lost $650bn in market value on a single day last August, when Wall Street melted down as investors, spooked by a possible US recession, fled to safer assets.

“If you’re going to get ‘invest now’ and get returns in 10 to 15 years, that’s a venture investment, that’s not a public company investment,” one financial analyst told CNN at the time. “For public companies, we expect to get return on much shorter time frames. We’re not seeing the types of applications and revenue from applications that we would need to justify anywhere near these investments right now.”

This underscores how quickly Silicon Valley’s investment appeal could collapse if funders’ short-term earnings needs decouple from tech companies’ long-term mythmaking. Such a severe market correction would be a major buzzkill for America’s AI industry. But nor would it trigger a death spiral.

Tech leaders would simply be forced to seek out other means to bankroll their quest to achieve AGI.

The race – and pitfalls – on the path to superintelligence

It’s important to recognise that the epic levels of AI industry hype aren’t pinned entirely on vapourware. AI uptake is growing concretely worldwide as applications permeate every sector of the global economy.

OpenAI says ChatGPT alone now has some 400 million weekly users – two million of them business customers with a monthly paid subscription. Plus, the industry is innovating as researchers pivot toward reasoning models and synthetic data to improve models’ efficiency and accuracy.

The rise of AI agents is also apparent, with Google DeepMind claiming an agentic coding system built using Google’s Gemini large language model (LLM) has proven capable of generating novel algorithms that could lead to breakthroughs in science and mathematics. Virtually all the CEOs of Silicon Valley’s leading AI companies and most industry insiders thus remain confident AGI will be achieved before 2030.

But tech companies are meanwhile burning through more capital than ever. Analysis by the research group Epoch AI indicates the frontier models created by Meta, Google, xAI, OpenAI and Anthropic require roughly 10 billion times more compute power than leading models did fifteen years ago – even as they become exponentially more efficient.

The cost of training new frontier AI models now eclipses $1bn once hardware, energy, labour and data needs are accounted for. Anthropic’s CEO Dario Amodei thinks this might reach $100bn by 2027. Elsewhere, OpenAI CEO Sam Altman estimates a $7tn expansion of the semiconductor industry is needed to achieve AGI. America’s tech giants alone could reportedly spend more than $320bn on AI just this year – equivalent to the GDP of Portugal. 

Yet serious obstacles to profitability still exist. The utopian visions for AI held by tech accelerationists discount the vagaries of consumer behaviour and state power, for example. Companies aren’t all uniformly keen to disrupt their existing business models. Energy infrastructure is lacking and notoriously difficult to build quickly.

Silicon Valley firms also face a litany of legal cases over alleged copyright infringement, stemming from accusations they harvested training data without creators’ consent. And public distrust in AI systems is rising globally, driven by anger and suspicion over the toxic legacy created by the use of engagement-driven revenue models on social media platforms.

All this has contributed to signs of an AI investment bubble emerging since at least mid-2023. Nearly two years later, the industry’s financial outlook remains cloudy.

A recent IBM survey of 2,000 chief executives indicates most of them feel AI is still underperforming. Amazon, meanwhile, is reportedly seeing only a 20-cent return on every dollar it spends on the tech. Microsoft has suspended plans for some of its new datacentres after overestimating demand for its cloud and AI services.

There is also growth in so-called “zombiecorns” – once-promising AI companies struggling to fundraise given shaky business models and an absence of earnings. OpenAI and Anthropic claim revenue projections of tens of billions of dollars within a few years; yet both companies still lost at least $5bn last year.

A survey of 500 AI researchers earlier this year found that 76% of them doubted AGI could be achieved using current scaling methods. Emulating the nuances of human intelligence – empathy, inference and adaptability – has proven enormously difficult. This explains the turn toward reasoning models, which are given more time to work through queries in a systematic, reflective way, rather than being programmed to generate instantaneous answers.

Yet Altman has admitted that an update to OpenAI’s flagship GPT-4o model displayed sycophantic tendencies before it was rolled back. The latest tests are showing reasoning skills may boost AI’s capability for deception, and even blackmail, as a form of self-preservation.

What’s more, Silicon Valley must now contend with increasingly capable competition.

In January, the emergence of DeepSeek – a Chinese AI company born out of a hedge fund – shocked the world. By releasing a high-performance chatbot it supposedly built for only a few million dollars, DeepSeek upended two long-held assumptions about AI.

First, the scale of resources thought necessary to train and operate cutting-edge models may be partly overblown. Second, the future of AI might not be completely dominated by large American tech companies after all. At least 10 more Chinese firms have since released open-source AI models generally on par with the costly top-tier subscription offerings from Silicon Valley.

Under closer scrutiny, it’s clear that DeepSeek has significant flaws and security concerns. Nevertheless, the company has provided a glimpse of a more democratised AI future.

Soon, the proliferation of lower-cost, open-source models that entrepreneurs and developers can freely use, modify and share could usher in a more even playing field, benefitting smaller actors and jurisdictions.

Investors are also surely noticing how startups elsewhere are building advanced AI models over the internet using crowdsourced compute power and untapped data sources. This raises the question of how long money managers will support huge outlays on private AI infrastructure if innovative workarounds are proving viable.  

Yet even if the AI bubble bursts, it doesn’t mean the end of Silicon Valley’s quest for AGI.

AGI by other means

OpenAI provides the clearest example of how ostensibly AI-first companies might evolve to survive. “The most powerful tech companies succeed not simply by the virtues of their individual software and gadgets, but by building ecosystems of connected services,” writes journalist Matteo Wong.

“You could think of this OpenAI as yet another tech company following in the footsteps of Meta, Apple, and Google – eager not just to inspire users with new discoveries, but to keep them locked into a lineup of endlessly iterating products.”

Indeed, under Altman, OpenAI has modified its non-profit governance structure to enable itself to extend its reach and diversify its revenue streams. The company recently added a shopping feature in ChatGPT. It is also exploring options to build a browser, social network and even hardware.

Anthropic likewise recently agreed to work with data analytics firm Palantir and Amazon Web Services to sell products based on its LLMs to US defence and intelligence agencies.

Another option already being pursued by Silicon Valley is to forge a symbiotic relationship with the US government.

“No longer content to transcend the state, techno-utopians now seek to capture it – repurposing public power to advance private ambitions,” political scientist Ian Bremmer, head of the Eurasia Group, wrote in a recent essay. “Unlike earlier digital platforms, which blossomed under minimal government intervention, most of today’s frontier technologies – such as aerospace, AI, biotech, energy and quantum computing – actively require implicit or explicit state backing to scale up.”

Others agree. “I think the Trump administration will actively support and defend their big tech platforms and see efforts at restraining them through regulation and taxation as counter to America’s national interest,” André Coté, interim executive director of The Dais think tank in Toronto, said in an email. “There is activity and push-back at the state level though, from other interest groups – it’s tough to predict where this goes.”

One direction it could go is that Silicon Valley will increasingly look abroad to autocratic regimes for support. OpenAI has already become central to the Gulf monarchies’ goal of transforming themselves from digital backwaters into global tech hubs. In turn, OpenAI benefits by gaining access to those petrostates’ bottomless sovereign wealth funds.

Elsewhere, a Meta whistleblower alleged at a US Senate hearing in early April that, during her time with the company, she witnessed executives offer to help Beijing build censorship tools if it meant gaining access to the lucrative Chinese market.

None of this is a foregone conclusion. But it signals that if the AI investment bonanza of the past few years does come to an end, it may only become a footnote in Silicon Valley’s crusade to engineer the future.
