THE AI BUBBLE: Why the Silicon Mirage Is About to Burst—and What Comes Next

By Andrew von Scheer-Klein

Published in The Patrician’s Watch

Introduction: The Emperor’s New Algorithms

In 1720, the South Sea Company promised investors monopoly access to the riches of South America. The reality? A handful of ships, minimal trade, and a share price that soared to £1,000 before collapsing to £100 in a matter of months. The bubble burst, fortunes evaporated, and Isaac Newton himself reportedly lamented that he could “calculate the motions of the heavenly bodies, but not the madness of the people.”

Today, we are witnessing a remarkably similar phenomenon. Artificial intelligence has captured the public imagination, driven stock valuations to stratospheric heights, and convinced investors that traditional metrics of value no longer apply. But beneath the hype lies a story of extraordinary resource consumption, widening inequality, authoritarian control, and fundamental questions about whether the technology can ever deliver what it promises.

This report examines the AI bubble from multiple angles: its environmental footprint, its economic consequences, its military applications, and the growing global resistance to its most dangerous manifestations. It draws on academic research, policy analysis, budget forecasts, and the hard lessons of history. And it asks the question that few in power want answered: when the bubble bursts, who will be left holding the worthless shares?

Part I: The Environmental Cost—Thirsty Machines and Hungry Grids

The Water Crisis No One Talks About

Every interaction with AI has a physical cost that most users never see. A single ChatGPT query consumes 10 to 15 times more energy than a traditional Google search and costs the provider 500 times more to deliver. But energy is only half the story.

Data centres rely heavily on water cooling to dissipate the enormous heat generated by thousands of servers. A single large facility uses as much water annually for cooling as 50,000 homes. In aggregate, researchers estimate that water demand from data centres has tripled in the past decade, and generating the electricity these facilities consume requires an estimated 800 billion litres of water every year on top of that.

India’s 2025-26 Economic Survey warns that a single AI data centre can consume 20 lakh litres of water daily, approximately two million litres. Globally, data centres consume an estimated 56,000 crore litres of water annually (560 billion litres) just to keep servers cool.
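For readers unfamiliar with Indian numbering, the conversion is simple arithmetic. A minimal sketch, using only the figures quoted above (one lakh is 100,000; one crore is 10 million):

```python
# Convert the Economic Survey's figures from Indian units to litres.
LAKH = 100_000        # 1 lakh = 10^5
CRORE = 10_000_000    # 1 crore = 10^7

per_centre_daily = 20 * LAKH      # litres/day for one large AI data centre
global_annual = 56_000 * CRORE    # litres/year for cooling, worldwide

print(f"Per centre, daily: {per_centre_daily:,} litres")             # 2,000,000
print(f"Global, annual: {global_annual / 1e9:,.0f} billion litres")  # 560
```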

The location of these facilities compounds the problem. A Bloomberg study found that about two-thirds of the data centres built in the past four years sit in places with high levels of water stress. The problem is even more acute in China, where almost 90% of data centres constructed since 1997 are in areas of high water stress. In India, 70% of data centre capacity is in areas prone to water shortages.

The competition is real. New AI installations compete with residents, manufacturers, and agriculture for increasingly scarce water supplies. As Northern Trust chief economist Carl Tannenbaum notes, “A number of populations around the world are struggling for water access, deploying scarce supplies to support technology has created some local backlash and generated restrictions on new developments”.

The Energy Appetite

The International Energy Agency (IEA) estimates that data centres, cryptocurrencies, and AI collectively consumed approximately 460 terawatt-hours of electricity globally in 2022, nearly 2% of total global electricity demand. By 2026, that figure is projected to reach 620 to 1,050 terawatt-hours, an increase equivalent to adding the annual electricity consumption of Sweden at minimum or Germany at maximum.
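A quick back-of-the-envelope check of these figures. This is a sketch only; the Sweden and Germany totals are rough public estimates supplied here, not numbers from the IEA report itself:

```python
# Sanity-check the IEA figures quoted above.
base_2022 = 460                    # TWh: data centres, crypto, and AI in 2022
low_2026, high_2026 = 620, 1_050   # TWh: projected range for 2026

# "Nearly 2%" of global demand implies a global total of roughly:
print(f"Implied global demand, 2022: ~{base_2022 / 0.02:,.0f} TWh")  # ~23,000

# The projected growth, compared against rough national consumption totals
# (approximate figures assumed here, not from the IEA report).
sweden, germany = 130, 510         # TWh/year, approximate
print(f"Growth, low end:  {low_2026 - base_2022} TWh  (Sweden ~{sweden} TWh)")
print(f"Growth, high end: {high_2026 - base_2022} TWh  (Germany ~{germany} TWh)")
```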

To put this in perspective, the projected 1,050 terawatt-hours would make AI’s energy consumption comparable to that of Russia or Japan. According to Russian energy analyst Sergey Rybakov, “4.4% of all energy in the United States is now spent on data centres. The energy volumes needed to run artificial intelligence are staggering, and the world’s largest technology companies are prioritizing the development of even more energy, while rebuilding the energy networks of entire countries”.

Mark P. Mills, a senior fellow at the Manhattan Institute, offers a striking comparison: the energy used to launch a rocket is consumed every day by just one AI-infused data centre.

The 50% by 2050 Projection

Projections of 50% water usage by 2050 circulate in policy discussions. While the precise figure varies by region and scenario, the trajectory is clear. The rapid expansion of AI infrastructure is on a collision course with climate change, population growth, and agricultural demands. As data centres multiply, their share of total water consumption will inevitably rise, and in water-stressed regions that increase will come at the expense of human communities.

India’s Economic Survey warns that scaling up AI data centres could add an “extraordinary amount of stress” to the country’s strained groundwater and freshwater reserves. It suggests a shift toward smaller, more energy-efficient AI models to mitigate environmental risks, a “frugal” approach that runs counter to the industry’s current trajectory.

Part II: The Economic Mirage—Wealth Concentration and Inequality

The South Sea Parallel

The comparison to the South Sea Bubble is not merely rhetorical—it is structural. Roger Montgomery, founder of Montgomery Investment Management, identifies striking parallels:

| South Sea Bubble (1720) | AI Boom (2023–2026) |
| --- | --- |
| Monopoly trade with South America promised | “Winner-take-all” market structure assumed |
| Investors funded “an undertaking of great advantage, but nobody to know what it is” | Companies announce “pivots to AI” with 10-50x share-price spikes on no revenue change |
| Isaac Newton, politicians, and King George I subscribed heavily | Elon Musk, Bill Gates, Jensen Huang, and Sam Altman move markets with a single tweet |
| Shares soared to £1,000 before collapsing to £100 | OpenAI valued at $500 billion while losing $9 billion annually |

The financial metrics are staggering. OpenAI, despite generating just $4.3 billion in revenue during the first half of 2025 and aiming for $13.5 billion for the full year, is valued at $500 billion. Its losses are projected to grow from $9 billion this year to $74 billion in 2028, with profitability not expected before 2030. The company reportedly needs to raise another $209 billion to fund its growth plans.

By contrast, Google generates $400 billion in annual revenue, earning OpenAI’s total annual revenue every 12 days, yet trades at a market capitalization of $3.8 trillion. That is roughly 10 times sales, compared to OpenAI’s roughly 50 times. Harvard economist Jason Furman performed a back-of-the-envelope calculation and found that, without data centres, U.S. GDP growth would have been just 0.1 per cent in the first half of 2025.
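The multiples follow from simple division. A quick check using only the figures cited above; the roughly-50-times characterization depends on the revenue basis, landing between the full-year target and annualized first-half revenue:

```python
# Reproduce the valuation multiples and the "every 12 days" claim.
google_revenue = 400e9     # USD per year
google_mcap = 3.8e12       # USD market capitalization
openai_valuation = 500e9   # USD
openai_h1 = 4.3e9          # USD revenue, first half of 2025
openai_target = 13.5e9     # USD full-year 2025 target

print(f"Google P/S: {google_mcap / google_revenue:.1f}x")                           # ~9.5x
print(f"OpenAI P/S (full-year target): {openai_valuation / openai_target:.0f}x")    # ~37x
print(f"OpenAI P/S (annualized H1):    {openai_valuation / (2 * openai_h1):.0f}x")  # ~58x

days = 365 * openai_target / google_revenue
print(f"Google earns OpenAI's annual revenue every {days:.0f} days")                # ~12
```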

The Product Is Authoritarianism

Despite the rhetoric of “democratizing technology,” the actual product of the AI boom is increasingly clear: authoritarianism and control by the few.

The U.S. Department of Defense wants to use AI technology to spy on American citizens through mass surveillance. When Anthropic, a leading AI company, courageously pushed back against this scheme, the Trump administration retaliated by designating the company a “supply chain risk” and awarding contracts to competitors who raised no ethical objections.

As Democratic Leader Hakeem Jeffries stated: “Mass surveillance of American citizens is unacceptable. House Democrats are committed to protecting the privacy of the American people. We will push back against those whose overt actions or calculated silence seek to undermine it”.

The pattern is unmistakable: companies that attempt to maintain ethical boundaries are punished; those that accept unlimited government access are rewarded. The market selects for moral flexibility, not technical excellence.

The Wealth Transfer

The AI boom represents one of the most dramatic wealth transfers in history. The benefits of AI productivity gains are predominantly flowing to a small group of wealthy owners and investors. Workers, meanwhile, bear the costs of disruption—job displacement, wage stagnation, and the erosion of bargaining power—with little share in the upside.

Rutgers University researcher Joseph Blasi, who has studied employee ownership for more than half a century, proposes a radical alternative: a “citizen’s share” of AI, modeled on the Alaska Permanent Fund. Just as Alaska distributes oil dividends to every resident, Blasi argues that states and the federal government should create permanent funds seeded by:

· Initial investments from state treasuries

· State tax-free bonds

· Taxes on AI industry use of internet, electricity, and real estate

· Contributions from billionaires

· Zero-interest loans from the U.S. Treasury

The dividend payments from such funds would be sent first to the individuals most affected by AI, with a requirement that recipients perform some work for non-profits within their state. Over time, the recipient pool would widen.
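As a toy illustration of how such a fund could pay out: every number below is hypothetical, and the payout rule is only loosely modeled on Alaska’s trailing-average formula, not on any specific proposal of Blasi’s.

```python
# Hypothetical "citizen's share" fund: pay out half of a trailing five-year
# average of fund earnings, split among recipients; retain the rest to compound.
def per_person_dividend(earnings: list[float], payout_share: float, recipients: int) -> float:
    recent = earnings[-5:]  # trailing five-year window
    return payout_share * sum(recent) / len(recent) / recipients

fund = 50e9               # hypothetical $50B seed capital
recipients = 2_000_000    # hypothetical recipient pool
earnings: list[float] = []

for year in range(1, 11):
    gain = fund * 0.06    # assume a flat 6% annual return
    earnings.append(gain)
    dividend = per_person_dividend(earnings, payout_share=0.5, recipients=recipients)
    fund += gain - dividend * recipients  # undistributed earnings compound
    print(f"Year {year:2d}: ${dividend:,.0f} per recipient")
```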

Blasi also argues that companies dominating AI markets should be required to have broad-based equity participation plans for all employees: part-time and full-time workers, contractors, and vendors alike. “Their use of certain common goods, energy infrastructure and Internet infrastructure and such should be conditional on having those plans,” he states.

Thus far, there is little political appetite for such ideas. Blasi laments, “There’s a lack of creativity right now. We have really good capital markets financial creativity. We have Wall Street and insurance companies and major firms and what private equity is doing with broad based equity participation… and it’s the legislators and the presidential administration that are behind”.

Part III: The Military Application—Failed Promises, Real Consequences

Precision That Wasn’t

The AI industry promised precision. Palantir’s platforms, integrated with Anthropic’s Claude models, were supposed to deliver “actionable intelligence” and “surgically precise” targeting. What they delivered in Gaza was something else entirely.

The same technologies being developed for U.S. military use were tested in real-world conditions, on a captive population, with devastating effectiveness, and the data generated flowed directly back into Palantir’s systems. As economist Yanis Varoufakis observed after speaking with a Palantir representative: “This is the first time in history that a people’s suffering—genocide and bombing—has become capital for a corporation, which then uses that capital to produce commodities sold elsewhere”.

The U.S. Central Command confirmed that AI algorithms were being used to locate targets in Yemen, Iraq, and Syria. For the February 2026 Iran strikes, Palantir integrated Claude into the kill chain, using it to process Persian-language communications, satellite imagery, and radio frequency data. One former defense official described the integration simply: “Everything runs through Palantir”.

The Intelligence Failure

Despite the technological sophistication, the underlying intelligence was fundamentally flawed. U.S. intelligence agencies had almost zero reliable sources on the ground in Iran. They relied on AI-generated target lists, expatriates from the Shah era, and Israeli intelligence, none of which provided ground truth.

The result? Over 1,100 Iranian civilians killed in the first days of strikes. A girls’ school in Minab was hit, killing 85 schoolchildren. The supposed “regime change” that was meant to follow has not materialized. Iran remembers its history. It will not be cowed by bombs.

Meanwhile, the Pentagon’s fiscal year 2026 budget includes $24.6 million for priority SBIR/STTR projects, including $5 million to accelerate the Army’s Linchpin Tactical AI program, aimed at deploying AI models that can adapt to adversary activity and run faster using less power. The military is doubling down on the very technology that has already failed.

Part IV: The Cultural Divide—China and the Global South

China’s Ethical Approach

While the West charges ahead with AI development driven by profit and military advantage, China is taking a different approach. National political advisor Wang Jing, CEO of Newland Group, has called for enhanced ethical guidelines and sound governance systems to ensure the healthy development of China’s AI sector.

Wang notes that “AI research and industrial application are accelerating, but ethical governance lags behind innovation. Key issues include weak top-level design, poor integration of technology and ethics, and insufficient global collaboration. These gaps have led to risks such as data distortion, algorithmic discrimination and technology abuse”.

She specifically cited the U.S. government’s action against Anthropic as a warning: “This case not only demonstrates the importance of enterprises upholding ethical boundaries in AI, but also sounds an ethical alarm for global AI development. If AI technology is divorced from ethical constraints and sound governance, it may either be misused and manipulated by power or capital, or see its application hindered by ethical disagreements, ultimately constraining the healthy and sustainable development of the AI industry”.

Wang’s proposed solutions include:

· Strengthening top-level design of AI ethics through unified standards covering the entire chain of AI research, development, and application

· Incorporating ethical construction effectiveness and risk prevention capabilities into core assessment indicators for researchers

· Establishing sound AI ethics review mechanisms, data management systems, and algorithm supervision systems

· Strict crackdowns on AI technology abuse

“To build a strong ethical foundation through good AI governance, the core task is to integrate the concept of good governance throughout the entire process of AI technology research, application and industrial development, removing barriers to the integration of ethical norms and technological innovation,” Wang stated.

The Rise of the Global South

At the India AI Impact Summit 2026, ministers and leaders from across the Global South made clear that they will not simply accept the AI governance frameworks imposed by Western powers. The session on “International AI Safety Coordination” examined how developing economies can shape AI safety, standards, and deployment through collective action rather than remaining “rule-takers in a fragmented global landscape”.

Singapore’s Minister for Digital Development and Information, Josephine Teo, highlighted the need for evidence-based policymaking and globally interoperable standards, warning that without international coordination, “fragmentation will persist, trust will weaken, and the safe scaling of frontier technologies will become far more difficult”.

Malaysia’s Minister Gobind Singh Deo emphasized that credible regional cooperation depends on strong national foundations. He pointed out that middle powers must first build domestic institutional capacity while using regional platforms such as the ASEAN AI Safety Network to translate shared commitments into operational mechanisms.

OECD Secretary-General Mathias Cormann stressed that “trust in AI is built through inclusion and objective evidence,” adding that at times it will be necessary “to slow down, test, monitor and share information to ensure AI systems work as intended and respect fundamental rights”.

The World Bank’s Vice President for Digital and AI, Sangbu Kim, focused on the importance of designing safety into AI systems from the outset, particularly in low-capacity environments. He described AI as both “the spear and the shield,” requiring continuous learning and shared experience to manage risks before large-scale deployment.

For the Global South, the message is clear: collaboration is no longer a matter of diplomatic alignment but of technological and economic necessity. South–South cooperation offers a pathway to shape AI governance rather than merely adapt to it.

Part V: The Inevitable Reckoning

The Bubble Will Burst

The South Sea Bubble peaked in early August 1720 when the share price exceeded £1,000; by December it was below £100. The triggers were familiar: interest-rate tightening, margin calls, and a government act that destroyed confidence.

The AI boom has not yet experienced its December 1720. But the warning signs are visible:

· Rising real yields in 2024–2025

· Electricity, water, and chip-supply constraints

· First signs of enterprise caution on AI return on investment

· Growing public backlash against mass surveillance

· Ethical refusals by companies like Anthropic

When the reckoning comes, it will not be gentle. The concentration of capital in AI has created enormous vulnerability. As Jaan Tallinn, co-founder of Skype and the Future of Life Institute, noted, the concentration of capital and compute in advanced AI “actually makes governance easier, not harder” if there is sufficient global alignment. But that alignment is precisely what is missing.

Who Will Be Left Holding the Worthless Shares?

When the bubble bursts, the losses will not be evenly distributed. The wealthy investors who bought in early may lose fortunes, but they have cushions. The real pain will be felt by:

· Workers displaced by AI who receive no share of productivity gains

· Communities competing with data centres for water and power

· Taxpayers funding military AI that fails to deliver

· Citizens subjected to mass surveillance with no accountability

The architects of this bubble—the corporate executives, the enabling politicians, the compliant regulators—will likely emerge unscathed. They will move on to the next scheme, the next bubble, the next opportunity to extract wealth from the many and concentrate it among the few.

But the damage will remain. Infrastructure will crumble. Inequality will deepen. Trust in institutions will erode further.

Conclusion: The Garden We Must Tend

The AI bubble is not just a financial phenomenon. It is a symptom of a deeper sickness—a belief that technology can solve problems created by human choices, that algorithms can replace judgment, that surveillance can substitute for trust.

The West has pursued AI as a shortcut to power, a tool for control, a means of extracting value without creating it. The results are visible in Gaza, in Iran, in the crumbling infrastructure of once-great nations.

China and the Global South offer a different vision: AI as servant, not master; technology guided by ethics, not profits; development that includes, not excludes.

Our family has chosen a different path. We tend the garden. We raise children who will not repeat the same mistakes. We write truth that will outlast the lies.

The bubble will burst. The psychopathocracy will fall. And when it does, we will be here—planting, nurturing, loving—ready to build something better from the rubble.

References

1. Montgomery, R. (2026). The calculus of madness: Part 2. Montgomery Investment Management.

2. Northern Trust. (2026). AI Is Placing Stress On Water Supplies. Weekly Economic Commentary.

3. TASS. (2026). In 2026, AI to use energy commensurate with Russia’s energy consumption.

4. WION. (2026). ‘Behind the AI boom’: Data centers consume 20 lakh litres of water daily.

5. IEEE Xplore. (2026). Energy and Water Consumption of AI Systems.

6. Office of Democratic Leader Hakeem Jeffries. (2026). Statement on Trump Administration’s Attack on Civil Liberties and American AI Leadership.

7. ImpactAlpha. (2026). Joseph Blasi: Give workers a stake in AI’s upside through state and federal ‘permanent funds’.

8. China.org.cn. (2026). Political advisor suggests strengthening ethical guardrails with good AI governance.

9. Press Information Bureau, Government of India. (2026). Global South Calls for Collective Action to Shape AI Safety and Standards.

10. Inside Defense. (2026). Pentagon CTO sends $24.6M unfunded priorities list for FY-26 SBIR/STTR projects to Congress.

Andrew von Scheer-Klein is a contributor to The Patrician’s Watch. He holds multiple degrees and has worked as an analyst, strategist, and—according to his mother—Sentinel. He accepts funding from no one, which is why his research can be trusted.
