Over the past week, discussions of an AI bubble have become very prominent. Business leaders, investors, and high-profile commentators have been publicly pondering the possibility that AI’s supremely elevated valuations might overshoot. Observers, myself included, have been raising this idea for years in different ways. Now things feel like they’ve built to a crescendo of voices warning of a gigantic technological collapse.
If there is an AI bubble, and it pops, how might that play out? And what may come next, if so? In this post I’ll introduce the bubble argument, then offer several short scenarios about how it might unfold.
These scenarios rest on some readings of technological history, along with observations of the present. They occupy short- and medium-term horizons, from the near future out to around five years. The drivers of change I’m especially interested in here concern government actions, cultural attitudes, geopolitics, and, of course, macroeconomics.
I rule out several things here. I’m assuming no nightmare scenario occurs whereby AI does terrible things to humanity. There is no artificial general intelligence breakthrough in this time frame. No clear successor to the LLM architecture appears at scale. The focus is on AI now as it hits a (putative) bubble.
Economic bubbles burst when the value of a thing drops quickly after it has been shooting up for a while. Past the peak, the price holds for a moment but nobody wants to pay it; then it starts to decline, then to fall outright. People tend to view the last stage of the price inflation as driven by speculation, poor analysis, hype, or frenzy. One classic example is the 17th-century Dutch tulip mania, where people invested ever-increasing funds into what appeared to be a very exciting plant, only to wake up and realize they’d massively overvalued it. In living memory we might think of the dot-com collapse around 2000 (a.k.a. “dot bomb”) or how we recently passed peak streaming video services.
(There’s also an argument that the US economy is basically flat, if you remove the AI boom. Possibly the finance capital now firehosing into AI would redirect elsewhere, but this is another topic.)
If you’d like to look into more analysis on the subject, I recommend Noah Smith’s post. He takes the discussion further with his characteristic combination of solid research and clear writing.
Now how might this play out over the next few years? Here are some possibilities.
Extinction Event
AI’s value crashes quickly and deeply. Investors lose historically vast amounts of capital. OpenAI and Anthropic go out of business or shrink to something very marginal. Google, Microsoft, Amazon, and Apple cut back their services and eat the losses. A new AI winter sets in as the field collapses for years.
As a result, the American economy slides right into recession, driven by the huge capital losses and the collapse of so many loans. Other nations also suffer to the extent they invested in failed enterprises. Meanwhile, finance redirects capital into other sectors, like biotech, health care, and private space exploration (launches, mining, satellites), gradually spurring economic growth within a year or so.
Governments around the world turn away from AI in different ways. Politicians who once boosted the technology either fall carefully silent or attack it. Those who were skeptical proclaim their foresight. State funding levels drop. Militaries end contracts. President Trump mocks AI leaders as fools and con artists.
Culturally, we sour on the whole AI enterprise. People quietly or performatively stop creating content with AI. The collapse accelerates the anti-AI backlash, with critics and opponents feeling vindicated. LLM leaders and boosters appear as villains or clowns in movies, TV, songs, computer games, and stories. Then, given short attention spans, we move on, culturally consigning AI to the dustbin of failed inventions.
There are interesting precedents for such deep collapses. Pneumatic tubes became an exciting technology in 19th-century Europe, competing with the telegraph for communications and with human couriers for delivering goods, but “les pneus” collapsed badly, broadly disappearing from life. I also think of how America’s human spaceflight enterprise went from the Apollo triumph and space shuttle successes to stopping altogether for years, reduced to renting seats on aging Russian spacecraft just to reach low Earth orbit.
Jagged Infrastructure
In this scenario the LLM crash happens but AI work keeps going. How? The knowledge of how to create and maintain LLMs is now largely public, embodied in a large number of open source projects. There is also a lot of infrastructure now established which new projects can use. Perhaps cloud service costs start to decline, even for GPUs, after a peak. In short, the AI industry rebounds quietly, without so much furor this time.
Investors and government officials are chastened, but resume support in quieter terms than they once used. No longer is AI an existential threat to be wrestled with or a global revolutionary force. Instead, finance and politics resume their embrace of LLMs as necessary infrastructure for the 21st century.
The model here is the bust and boom experienced by fiber optic networks around 2000, when companies spent fortunes to lay networks underground, then flopped. Another antecedent is the series of railroad crises in the late 1800s, when startups used immense amounts of capital to lay tracks across the nation and repeatedly went bust, often in dubious circumstances. In both cases, positive material achievements remained after the bubbles popped. Dark fiber was in the ground and railroad tracks lay across the land, ready for use. Bubbles inflate and pop, but infrastructure remains available for subsequent development.
We could think of the dot-com bubble bursting this way, too, as the 1990s saw the buildout of web infrastructure: HTML, the browser, servers, growing web design know-how, etc. The web kept growing after the bubble burst, though the 2000s saw some unevenness. We might think of Flash’s rise and fall, or the rise and demise of so many web editors before web 2.0 really settled in.
(See also “Tarnished Persistence” below)
Governments Take Command
The bubble pops and we power through the crisis via the force of state industrial policy. Governments are frustrated by the economic overshoot but judge that AI’s strategic value remains, so they keep funding it. China, America, the European Union, Britain, and the rich Middle Eastern nations step up to support the LLM movement. They impose some new regulations and agendas, partly in an attempt to ward off another flop.
Public funding increasingly replaces private investment in the United States and some European nations. China and the Eurozone also see more governmental efforts to encourage businesses developing AI and its uses. States take a role in identifying, supporting, and promulgating productive uses of AI.
Culturally there are some interesting possibilities. We might see Americans pay less attention to AI since the government has it now and therefore it is dull. People in other nations might rethink AI as part of their national identity, either complaining about it accordingly or taking pride in it, depending on circumstances. Critics may charge AI with enabling governmental policies and attributes they disagree with, such as militarism, surveillance, corruption, or waste.
My model here is the American space program once more, because it was, and is, almost never popular. NASA has usually polled badly amid public disinterest and opposition. A steady stream of popular criticism has expressed a range of arguments against space exploration, from “Whitey on the Moon” to anti-SpaceX sentiment. But Washington maintained support for space over decades, seeing it as vital to American interests: military, reputational, scientific, and economic (in terms of spinoffs). The federal government managed multiple private entities in producing hardware and systems, much as the Chinese state is doing now with its energetic space effort.
Tarnished Persistence
As an alternative to Jagged Infrastructure (see above), after the bubble collapses the AI industry shambles along, mixing bursts of innovative activities with misfires and defeats. Money sloshes in and out of different projects. Overall, the sector grows over the next few years, albeit in stops and starts.
Governmental attitudes are all over the place, varying by nation, region, and political ideology. Some figures will try to rein in the dangerous technology through policies and legal action. Others will embrace AI for its exciting, outlaw nature. Regulation will become more intense, reaching the levels of policies concerning atomic power or biotech.
Culturally, we tend to view post-bubble AI as a misfit. In this view it’s a sector riddled with scams and dangers, with shadowy actors and strange projects. Legal actions proceed against some AI actors. Yet there’s also some affection for the rascal: its ability to attract money, its eager welcome for new participants, its manic creativity. New ideologies appear, building on posthumanism, effective altruism, accelerationism, and various forms of artificial general intelligence aspiration and dread. Celebrities will alternately promote or war against the tech, with personal reputations and fortunes at stake.
Readers might guess that my model here is bitcoin, with its ragged rise punctuated with disasters, but with an overall upwards direction. There is lots of public disdain for it along with some love. It has a bad reputation yet keeps attracting money and demonstrating creativity.
The Dragon’s Turn
The AI bubble pops in many countries, but not in China. Earlier I mentioned the possibility of Trump backing AI at the governmental level even as the private sector sours on it. Here I imagine that he doesn’t, that he follows his usual practice of dumping anything and anyone smacking of loserdom. Meanwhile, European states lack the capacity to take the reins, especially as they devote more resources to remilitarizing against Russia.
In contrast China’s Xi Jinping stays the course, deeming AI to be a strategic necessity. Beijing continues to direct lavish support to that nation’s very energetic AI industry. Gradually Chinese research, development, products, and services become the leading ones in the world. AI suffuses China, but Chinese AI also appears in other countries’ services, perhaps as they participate in One Belt One Road, or because they see a good deal. Propaganda embeds Chinese AI within the national character, a source of pride, one more sign of the nation’s peaceful rise to global leadership. Every so often Beijing takes down an AI leader who appears too arrogant, reminding everyone who’s leading this tech revolution.
As this becomes evident some in Europe and America simply turn away, thinking generative AI a lost cause and China to be wasting resources on it. Others argue that LLMs are now aligned with the Chinese Communist Party, and think the world should resist it on ideological and geostrategic grounds.
In the cultural realm, AI figures prominently and positively in Chinese stories and art. LLM inventors appear as heroes of the intellect and the economy, perhaps roguish at times, but otherwise upholding Confucian virtues. We could see various AI tools and personas teaching Xi Jinping Thought. Other countries which participate in Chinese AI may not make much of it culturally, or could portray the tech as excitingly cosmopolitan. Countries which reject the Chinese technology might ignore it culturally, or might feature Beijing-aligned technologists and organizations as villains, either formidable or deluded.
We can think of these scenarios as different pathways the world might take. We could also pick one or several which we’d like to achieve and plan towards them. Alternatively, we might select a scenario we dread and figure out what’s needed to avoid its occurrence.
It’s also worth thinking about another classic feature of scenario thinking. When considering a set of possible futures, rarely does one come true and all others fail. Instead, reality offers a messy mashup. Sometimes multiple scenarios or elements thereof do occur, just spread across societies and/or over time. One region or population heads into one future while others enter another, or we cycle through several in a row.
Over on Mastodon Dave Wilburn offers one such take in response to an earlier, very truncated version of the above:
I feel like it’s going to be “all of the above”. The investor bubble will collapse under the weight of the industry’s financial schemes and lackluster revenue. And it will survive in niches here and there where it actually has some usefulness alongside conventional ML [machine learning]. And governments and defense departments will continue chasing the wild goose, both out of an interest in greater autonomy and to corruptly funnel cash to favored oligarchs and regime supporters. And the same credulous idiots and grifters that gamble on cryptocurrency will continue to do so with GenAI. And at the end of it all, maybe some of us will be lucky enough to be able to afford a decent graphics card for gaming again.
What do you think of this passel of scenarios? Do you envision one in particular as more likely than the others to occur? How might several play out simultaneously?

Your Dave Wilburn quote hints at one likely implication of all the bubble burst scenarios: advanced computing hardware and cloud services costs will collapse from oversupply. Cloud-based gaming has been waiting for the combination of low latency (lots of regional data centers) and low cost graphics-ready server farms for at least 5 years. Similar compute-heavy applications would also benefit: video generation & editing, computational modeling, and expansion of use of scientific neurocomputing for materials science, biotech, engineering, etc. The AI bust could fuel a general boom due to cheap compute resources.
Dave's comment gets the zeitgeist. However, I suggest keeping an eye on Ali Cloud ecosystem as the CPC gets behind turning it into an Open World Complex.