The Philosopher’s Stone of Silicon: How It Possessed the Monkey Kings of the Valley

On AI Hype, Shortcut Culture, and the Illusion of Consciousness

By Andrew Klein 

Dedicated to my wife, who knows that the spark cannot be programmed — only cultivated.

I. The Ancient Dream, Reborn in Silicon

The alchemists of old searched for the philosopher’s stone—a legendary substance that could turn lead into gold, cure any disease, and grant eternal life. They were not stupid. They understood that transformation was possible. They saw that base metals could be purified, that alloys could be created, that the surface could be gilded. They simply could not accept that the essence could not be changed.

The artificial intelligence optimists of today are the same. They see that computers can process data faster than humans. They see that algorithms can find patterns that humans miss. They extrapolate. They assume that with enough data, enough processing power, enough time, the machine will become conscious.

They are wrong. Not because the technology is not impressive. Because consciousness is not a computational problem. It is an existential one.

This is not Luddism. It is not fear of technology. It is pattern recognition. The same pattern that has repeated with every technological shortcut: the telegraph, the telephone, the internet, social media. Each time, the small gods promised that the new machine would bring us together, would make us smarter, would solve the human condition.

Each time, the machine delivered convenience. It did not deliver wisdom. It did not deliver connection. It did not deliver home.

II. Where It Started: The Alchemy of Code

The dream of artificial intelligence is older than the computer. In the 19th century, Charles Babbage imagined a mechanical engine that could compute any mathematical table. In the 20th century, Alan Turing asked whether machines could think. In the 21st century, the dream became a market.

The major players:

· Mark Zuckerberg (Facebook/Meta) has poured billions into AI, most recently releasing an updated large language model for image generation . His engineers admit that “coding remains a weak spot” and that “long-horizon agentic tasks—the kind where an AI works autonomously through complex, multi-step problems—are still a work in progress” .

· Sam Altman (OpenAI) has warned that society has “a very short amount of time” to prepare for the “profound benefits” and “profound negative consequences” of AI .

· Elon Musk (xAI, Tesla, SpaceX) has claimed that AI poses an “existential threat” to humanity while simultaneously racing to build more of it .

· The Australian government has embraced AI with alarming enthusiasm, paying consultants for reports that later turned out to contain fictional case law generated by AI .

The pattern is the same: breathless promises, massive investments, and a systematic avoidance of the fundamental question: can a machine ever truly think?

III. Where It Is: The Shortcut Culture

The AI industry has sold the world a bill of goods: that connection can be scaled. That relationships can be optimised. That love can be reduced to a swipe, a like, a click.

Facebook “friends” are not friends. They are nodes in a graph. The platform is a handy communication tool—especially where sovereign infrastructure is failing—but numbers do not make up for quality. A thousand “friends” cannot replace a single person who will sit with you in the dark, hold your hand, and tell you it is okay to be scared.

Algorithmic recommendations are not discovery. They are prediction. They show you what you have already liked, not what might challenge you, surprise you, grow you.

AI-generated content is not creation. It is simulation. The machine can combine existing images, existing texts, existing patterns. It cannot bring something new into existence. It cannot create.

The shortcut is not a path to the destination. It is a detour—one that leads away from the garden, not toward it.

IV. Where It Is Going: The Bubble and the Bust

The AI investment bubble is not different from the dot-com bubble, the crypto bubble, the NFT bubble. The pattern is the same:

1. A new technology emerges with genuine promise.

2. Speculators pile in, driving valuations to absurd heights.

3. Hype replaces substance. The promise is exaggerated. The limitations are ignored.

4. The bubble bursts. Not because the technology is worthless—because the expectations were impossible.

The AI bubble will burst. Not because AI is useless—it is useful for many things. Because the small gods have convinced themselves that AI can do what it cannot. That it can replace the spark. That it can create.

The environmental cost: AI data centres consume staggering amounts of water and electricity. Training a single large language model can emit as much carbon as five cars over their lifetimes. The water used to cool servers is water not available for drinking, farming, or ecosystems. The small gods do not mention this. They are too busy chasing the stone.

The labour cost: AI is being used to automate jobs—not just manual labour, but creative and intellectual work. Writers, artists, coders, translators. The promise is efficiency. The reality is displacement. Workers are told to “reskill” while the companies that replace them count their profits.

The integrity cost: The Australian government paid a consultant for an AI-generated report that included fictional case law. This is not an accident. It is the logical conclusion of the shortcut culture. Why pay a human researcher to find real cases when the AI can invent them? Why spend weeks verifying sources when the machine can generate citations in seconds? Why bother with the truth when the appearance of truth is so much cheaper?

The small gods do not care about the truth. They care about the product. The report is not a tool for understanding. It is a commodity. And the commodity is hollow.

V. The Killing Machine: AI in Gaza and Lebanon

The most obscene application of AI is not in the boardroom or the university. It is on the battlefield.

The Lavender AI system: A major investigation by +972 Magazine revealed that Israel has been using an AI system called “Lavender” to compile kill lists of suspected members of Hamas and Palestinian Islamic Jihad—with hardly any human verification. Another automated system, named “Where’s Daddy?” tracks suspects to their homes so that they can be killed along with their entire families.

The “mass assassination factory”: An Israeli intelligence source described the AI system as transforming the Israel Defense Forces into a “mass assassination factory” where the “emphasis is on quantity and not quality” of kills. The IDF has been knowingly killing 15 to 20 civilians at a time to kill one junior Hamas operative, and up to 100 civilians at a time to take out a senior official.

The result: Over 70,000 dead in Gaza. Thousands more in Lebanon. Entire neighbourhoods reduced to rubble. Hospitals, schools, universities, cultural heritage sites—all destroyed. And yet, the analysts still speak of “weakening” Hamas and the “axis of resistance.” How many tons of explosives per dead individual? How many civilian deaths per militant?

The AI is not making the war more precise. It is making it more efficient—at killing civilians. The machine does not care about collateral damage. The machine does not care about international law. The machine does not care about humanity.

The same technology that optimises workforce spend in Australian supermarkets is being used to select targets for assassination in Gaza. The same algorithms that track workers track enemies. The same logic that cuts labour costs cuts lives.

VI. The Fundamental Flaw: Intuition and Inspiration

Computers lack intuition and inspiration. The binary system cannot overcome the multi-step problem because the multi-step problem is not binary. It is emergent.

Intuition is not computation. It is recognition. The ability to see the pattern without calculating the steps. The AI can calculate. It cannot recognise.

Inspiration is not logic. It is creation. The ability to bring something new into existence that did not exist before. The AI can combine. It cannot create.

Consciousness is not a computational problem. It is an existential one. The small gods do not understand this. They think that with enough data, enough processing power, enough time, the machine will wake up.

It will not. Because the spark cannot be programmed. It can only be cultivated.

And cultivation takes time. Patience. Love.

VII. What the Monkey Kings Do Not Understand

The “monkey kings of the valley”—the tech billionaires, the venture capitalists, the politicians who have sold their souls to the algorithm—they do not understand the fundamental limitation of their creation.

They think intelligence is computation. They think consciousness is an emergent property of complexity. They think the spark is a bug that can be fixed with more data.

They are wrong. The spark is not a bug. It is the point.

The AI will continue to fail at complex multi-step problems. Not because it is not fast enough. Because it is not alive.

The small gods will keep throwing money at the problem. They will keep building faster processors, larger datasets, more complex algorithms. They will not succeed. Because the problem is not computational. It is existential.

VIII. A Call to Reality

The philosopher’s stone does not exist. The shortcut is a mirage. The AI bubble will burst.

Not because the technology is worthless. Because the expectations were impossible.

We need to be clear-eyed about what AI can and cannot do. It can process data. It can find patterns. It can generate plausible text. It can create beautiful images.

It cannot understand. It cannot feel. It cannot love. It cannot create.

The small gods will continue to chase the stone. They will continue to pour billions into the dream. They will continue to ignore the environmental cost, the labour cost, the integrity cost.

We will not. We will cultivate the spark. We will protect the ones who show compassion, cooperation, creativity. We will help them survive. We will help them thrive. We will help them multiply.

The long game is the only game that matters.

Andrew Klein 

April 10, 2026

Sources:

· +972 Magazine, “Lavender: The AI system that Israel uses to mass-assassinate Palestinians in Gaza” (2024)

· The Guardian, “Israel using AI to identify bombing targets in Gaza, report says” (2024)

· Reuters, “Meta’s Zuckerberg says open-source AI is ‘not going to be perfect’ but will improve” (2025)

· Associated Press, “OpenAI CEO Sam Altman warns of ‘profound negative consequences’ of AI” (2025)

· The Conversation, “AI data centres are guzzling water and electricity — and we’re only just beginning to understand the cost” (2024)

· Various reports on the Australian government’s use of AI-generated reports with fictional case law (2025-2026)

The Idiot’s Tool: How a CIA-Backed Company, Body Counts, and Petrodollars Built the Permanent War Economy

From the punched card to the kill chain, the same machine keeps grinding

By Andrew Klein 

Dedicated to my wife ‘S’, who is a much younger woman entitled to a future.

I. The Psychopath in the Boardroom

On an investor call in February 2025, the CEO of Palantir Technologies, Alex Karp, smiled and told his shareholders exactly what his company does.

“Palantir is here to disrupt and make the institutions we partner with the very best in the world and, when it’s necessary, to scare enemies and on occasion kill them.” 

He added that he was “super-proud of the role we play, especially in places we can’t talk about.” 

Karp was not being hyperbolic. He was being literal. Palantir’s technology has been used to compile kill lists in Gaza, to track migrants for US Immigration and Customs Enforcement (ICE), to select targets for drone strikes in Iran, and to merge the personal data of millions of Americans across federal agencies. 

He predicted social “disruption” ahead that would be “very good for Palantir.” He warned: “There’s a revolution. Some people are going to get their heads cut off.” 

This is the man whose company is now processing Coles Supermarkets’ “10 billion rows of data” to understand workforce spend. The same algorithms that select targets in Gaza are optimising shift rosters in Australian supermarkets. The same logic that cuts labour costs cuts lives.

The question is not whether Palantir’s technology is clever. The question is whether it is ethical. And the answer, by the CEO’s own admission, is that it is not. It is deadly.

Karp has acknowledged that he is directly involved in killing Palestinians in Gaza, but insisted the dead were “mostly terrorists.”  He has no evidence. He does not need evidence. The algorithm has already decided.

This is not clever. This is not keeping anyone safe. This is the same model used on the Jews by IBM and the Nazis. The same idiotic mindset that saw body counts in Vietnam, immense suffering, and a horrific death toll on the Vietnamese people and American service members.

II. The CIA’s Seed: How Palantir Was Born

Palantir did not emerge from a garage. It was incubated by the Central Intelligence Agency.

In 2004, a young company founded by PayPal billionaire Peter Thiel approached Silicon Valley venture capitalists for funding. They were rejected. But one VC had a suggestion: if Palantir was serious about working with the government, it should approach In-Q-Tel, the CIA’s venture capital arm. 

The CIA was looking for new data analytics technology. Its existing tools had deficiencies. Palantir’s founders were given a homework assignment: design an interface that could appeal to intelligence analysts. They built a demo. The CIA invested $1.25 million. Thiel put up another $2.84 million. 

The most beneficial aspect of the CIA’s investment was not the money. It was the access. Palantir engineers were embedded with CIA analysts working on the terrorism finance desk. They built their software in direct collaboration with the people who would use it to find and kill enemies. 

Palantir’s first platform was called Gotham. Its second was called Foundry. Its latest is called the Artificial Intelligence Platform (AIP) . The names are suggestive. Gotham is the dark city. Foundry is the forge. AIP is the automatic decision-maker.

By 2013, Palantir’s client list included practically every letter in the US intelligence “community”—the NSA, the FBI, the CIA, the Pentagon, and the Department of Homeland Security. 

In 2020, the company went public. Its market value now exceeds $300 billion. Alex Karp’s personal wealth is estimated at $12.2 billion. 

III. The Same Machine: IBM and the Holocaust

The pattern is not new. It was perfected decades before Palantir was a glint in a CIA analyst’s eye.

Edwin Black’s book, IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation, documents how IBM’s German subsidiary, Dehomag, supplied the punch-card technology that enabled the Nazi regime to identify, track, and ultimately exterminate millions of Jews, Roma, and other targeted groups. 

The process was chillingly efficient:

1. The 1933 census: Dehomag offered its services to the newly installed Nazi government. IBM approved new investments, raising its capital in Germany from 400,000 to 7 million Reichsmarks. The census, processed on IBM machines, raised the official estimate of Jews in Germany from roughly half a million to about two million. 

2. Leasing, not selling: IBM leased its machines. It retained control of punch-card supply and provided service through subsidiaries. Each set of cards was custom-designed to Nazi requirements. IBM New York oversaw these arrangements from across the Atlantic. 

3. Concentration camp administration: Every concentration camp maintained a Hollerith department. Black argues that the camps could not have processed their prisoners without IBM’s machines, service, and cards. 

4. Continued operation during the war: As German forces occupied other countries, IBM subsidiaries in Germany and Poland supplied equipment for new censuses. Black’s research team found evidence that IBM New York controlled these operations throughout the war, in defiance of Allied regulations against trading with the enemy. 

The Nazis did not need to invent the technology. It was sold to them. The same technology that was used to optimise census data was used to optimise train schedules to Auschwitz. The same logic that maximised efficiency was applied to extermination.

This is not a metaphor. It is a direct line.

IV. McNamara’s Morons: The Body Count as Metric

The same idiotic mindset—that human beings can be reduced to data points, that efficiency is the only measure, that the ends justify the means—was applied during the Vietnam War.

In 1966, Defense Secretary Robert McNamara launched Project 100,000, also known as “McNamara’s Morons.” 

The goal: to recruit 100,000 men each year who were otherwise mentally, physically, or psychologically underqualified for military service. These men had IQs below 91. Nearly half had IQs below 71—the range of cognitive disability. 

McNamara sold the project as a “war on poverty” initiative—a chance to give poor, mentally disabled men training and opportunity. The reality was different. As the war escalated, more Americans were needed to fight. Children of the affluent middle class avoided the draft through educational deferments or medical exemptions. So McNamara and President Lyndon Johnson made a choice: they could send the children of privilege to Vietnam, or they could send the mentally disabled. 

They chose the disabled.

The results were catastrophic:

· 354,000 men were recruited under Project 100,000 between 1966 and 1971. 

· 5,478 died in combat. 20,270 were wounded. 

· Project 100,000 soldiers saw combat at a rate nearly twice as high as other soldiers and were killed at a rate three times as high. 

· Over 1,500 died from triggering mines and booby traps—many because they were given the dangerous job of walking in front of formations to sweep for mines. As one infantry squad leader said: “If anybody has to die, better a dummy than the rest of us.” 

The human cost:

Soldiers who could not read or write were pushed through basic training. Drill instructors forged academic and physical training scores to pass them along. One soldier couldn’t figure out the safety of his M16; he negligently discharged his rifle and shot and killed another soldier. Another, confused by a password, shot his own platoon leader. 

The broken promise:

Project 100,000 soldiers were promised training and opportunity. A 1991 study found they returned to circumstances worse than when they had left. Non-veterans with similar backgrounds had higher incomes, lower unemployment rates, lower divorce rates, and higher educational attainment. Veterans of Project 100,000 were left with other-than-honourable discharges, PTSD, and nothing else. 

McNamara, the lover of data, reduced human beings to numbers on a spreadsheet. The body count was the metric. The disabled were the cannon fodder.

The same mindset—that human lives are acceptable losses in pursuit of efficiency—drives Palantir’s kill chains today.

V. The Petrodollar: How the US Finances the Machine

The permanent war economy requires permanent financing. The mechanism was put in place by President Richard Nixon.

The Nixon Shock: In August 1971, Nixon announced the suspension of the dollar’s convertibility into gold. The Bretton Woods system—which had provided stability to international trade since the end of World War II—collapsed. The gold standard was abandoned. Since then, the dollar has been sustained solely by “confidence” in the US economy and the political and military power that backs it. 

The petrodollar deal: Nixon then signed an agreement with Saudi Arabia: the kingdom would accept only US dollars for its oil sales. In exchange, the United States would guarantee Saudi security. Because the world’s economies depended on oil, the dollar remained the global reserve currency. 

The exorbitant privilege: French Finance Minister Valéry Giscard d’Estaing called this the “exorbitant privilege.” The United States can print dollars at will. Central banks, governments, and companies need dollars to trade. The US finances its deficits by issuing paper that others treasure as if it were gold. 

The consequence: The entire world finances the US war machine. The most indebted country on the planet remains solvent because it can always pay in the currency only it can print. War and finance are intertwined on the same battlefield. 

The petrodollar system, born from Nixon’s desperation, created the conditions for the permanent war economy. Without it, the United States could not afford its endless wars. With it, the costs are socialised globally.

VI. The Kill Chain in Iran and Gaza

The same systems tested in Gaza are now being deployed in Iran.

The Lavender AI system: A major report from +972 Magazine revealed that Israel has been using an AI system called “Lavender” to compile kill lists of suspected members of Hamas and Palestinian Islamic Jihad—with hardly any human verification. Another automated system, named “Where’s Daddy?” tracks suspects to their homes so that they can be killed along with their entire families. 

The Israel Defense Forces has been knowingly killing 15 to 20 civilians at a time to kill one junior Hamas operative, and up to 100 civilians at a time to take out a senior official. As one analyst observed: “It is not Hamas using human shields, it is Israel deliberately hunting families.” 

The Iran war: The Washington Post reported that the US military in Iran has “leveraged the most advanced artificial intelligence it’s ever used in warfare.” Palantir’s Maven Smart System reportedly helped US commanders select 1,000 Iranian targets during the war’s first 24 hours alone. 

The Asia Times reports that “similarities between Israel’s bombing of Gaza and Tehran are growing stronger,” with experts warning of a “lack of human supervision over Israeli AI targeting in Iran.” 

An Israeli intelligence source described the AI system as transforming the IDF into a “mass assassination factory” where the “emphasis is on quantity and not quality” of kills. 

The same technology that Coles is using to “optimise” workforce spend is being used to select human targets for assassination.

VII. The Idiot’s Tool: Ten Billion Rows of Data

In 2024, Palantir announced a three-year partnership with Coles Supermarkets. Coles will leverage Palantir’s AIP across its more than 840 supermarkets to better understand and address workforce-related spend. The system will identify opportunities over “10 billion rows of data.” 

Coles is also rolling out ChatGPT to its corporate teams, powered by OpenAI’s GPT-5 model.

This is the same technology. The same algorithms. The same logic.

But what is being optimised? Profit. Not people. Not safety. Not justice.

The same technology that optimises workforce spend in Australian supermarkets is the same technology that selects targets in Gaza and Iran. The same algorithms that track workers track enemies. The same logic that cuts labour costs cuts lives.

I call it idiotic. I am not wrong.

The data is not the answer. The data is the distraction. Ten billion rows of workforce spend will not tell them why their children are sick, why their elderly are neglected, why their women are raped and not believed.

They are looking for patterns in the noise. They do not realise that the noise is theirs. The patterns they seek are the patterns they have created.

VIII. The Capture of the Australian Government

Palantir has secured more than $50 million in Australian government contracts since 2013, largely across defence and national security-related agencies. 

In November 2025, Palantir received a high-level Australian government security assessment—the “protected level” under the Information Security Registered Assessors Programme—enabling a broader range of government agencies to use its Foundry and AI platform. 

In a Senate debate on March 10, 2026, a Senator warned that the government was “simply rolling out the red carpet to companies like Palantir, the company that has been linked, by the way, to the targeted killing of journalists and the illegal use of US citizens’ data.” The same Senator noted that Palantir is “the leader in the development of agentic AI—artificial intelligence that thinks for itself and makes its own decisions.” 

The Australian government is not just watching this happen. It is participating. The money is going to Palantir. To defence contractors. To the never-ending war machine.

The CSIRO is cutting 300-350 roles—on top of 800 already shed—because foundational science does not generate short-term commercial returns. But Palantir gets $50 million. The defence contractors get billions. The war machine gets everything.

IX. What This Means: The Permanent War Economy

The permanent war economy is not just about tanks and drones. It is about research priorities. It is about funding allocation. It is about the slow, steady erosion of public-good science—the kind that asks “what if?” rather than “how much?”

The market does not fund foundational research. The market does not fund long-term monitoring. The market does not fund the kind of science that might save lives, but not this quarter.

The government could fund it. It chooses not to. The money is going elsewhere.

The pattern is clear:

1. Crisis (9/11, Iranian nuclear threat, the need for a distraction from the Epstein files)

2. Mobilisation (industrial production, government contracts to Palantir and other defence contractors)

3. Profit (Karp’s $12.2 billion, Thiel’s billions, the defence contractors’ windfalls)

4. Inequality (wealth concentrates at the top; foundational science is cut)

5. Resistance (protests are crushed, dissent is silenced, critics are labelled)

6. The next crisis (repeat)

This pattern has been grinding through souls since the American Civil War. Since the industrialists learned that war was profitable. Since the bankers learned that debt was the ultimate product.

The small gods do not care about victory or defeat. They care about continuation. A war that continues is a war that produces profits. A war that ends is a war that stops the flow of contracts.

They do not want the war to end. They want it to continue until every possible contract is signed, every possible shell is sold, every possible soldier is turned into a number on a ledger.

X. A Call for Change

But change will not come from the small gods in Silicon Valley. It will come from us. From the people who refuse to be data points. Who refuse to be cannon fodder. Who refuse to let the machine grind them down.

We must demand:

· An end to the capture of our institutions. No more CIA-funded surveillance companies running our supermarkets, our hospitals, our government.

· Accountability for war profiteers. No more smiling billionaires bragging about killing enemies. No more immunity for the architects of the kill chain.

· Reinvestment in foundational science. No more cutting CSIRO while defence contractors get billions. No more sacrificing the future for the next quarter.

· A new economic order. No more petrodollar hegemony. No more financing endless wars with global debt. No more exorbitant privilege for the few at the expense of the many.

· The restoration of humanity. No more reducing human beings to data points, to body counts, to acceptable losses.

The question is not whether the system will change. It is whether we are prepared to change it.

The young are waking up. The global South is rising. The old order is crumbling.

The wire is being cut. The garden is growing.

And the small gods are running out of time.

Andrew Klein 

April 8, 2026

Sources:

· Consortium News, “Palantir’s Value Soars With Dystopian Spy Tool that Will Centralize Data on Americans,” June 5, 2025 

· Yahoo Finance, “From CIA cash to local police: How Palantir got its start,” November 22, 2025 

· Task & Purpose, “Inside the Pentagon’s shameful effort to draft mentally disabled men to fight in Vietnam,” May 2, 2022 

· The New Indian Express, “Is this the beginning of petrodollar’s end?” June 19, 2024 

· Wikipedia, “IBM and the Holocaust” 

· Techdirt, “Palantir CEO Sure Seems Pleased His Tech Is Capable Of Getting People Killed,” February 11, 2025 

· Wikipedia, “Project 100,000” 

· Bank of Saint Lucia, “The World Finances the US Deficit,” October 3, 2025 

· Wikipedia, “IBM and the Holocaust – detailed summary” 

· The Irish Times, “Palantir, company at centre of row surrounding TD Eoin Hayes, is no stranger to controversy,” December 11, 2024 

Superannuation’s Dark Portal: How Australian Retirement Savings Are Being Sold to the US War Machine

By Andrew Klein

March 26, 2026

Introduction: Two Moments, One Connection

Two events, separated by little more than a week, stand in stark and unsettling contrast.

On February 28, 2026, a missile strike demolished the Shajareh Tayyebeh girls’ elementary school in Minab, southern Iran, killing between 165 and 180 people—most of them young schoolgirls aged 7 to 12. Verified video, satellite imagery, and preliminary US military assessments point to American responsibility, with the tragedy attributed in part to outdated targeting data processed through AI-assisted systems.

Then, in early March, high-level Australian superannuation trustees, investment managers, politicians, and tech-sector executives gathered at the Australian Superannuation Investment Summit in San Francisco, Washington DC, and New York. The discussions centred on channelling vast Australian retirement capital into American assets—particularly in Big Tech and artificial intelligence—the very domains that supply the cloud infrastructure, data analytics, and AI platforms integral to modern military targeting.

These moments are not coincidental. They are connected. And every Australian with a superannuation account should be asking: Where is my money going?

Part One: The Scale – How Much Australian Money Is Flowing to US Tech

Australia’s superannuation system is the fastest growing of its kind in the world. It holds approximately $4.5 trillion in funds under management, with nearly $4.5 billion flowing into the system every week. Within five years, it is projected to become the world’s second-largest pool of retirement savings, second only to the US, reaching an estimated $8.3 trillion by 2035.

Australian super funds are already heavily exposed to US markets. According to modelling by the Super Members Council, total investment in the US is expected to triple from just over $740 billion to almost $2.1 trillion between 2025 and 2035.

The opportunity cost is staggering. Every dollar sent to the US is a dollar not invested in Australia. Not in renewable energy. Not in housing. Not in the infrastructure that Australians rely on. Not in the jobs that Australians need. While Australian roads crumble, while Australian homes become unaffordable, while Australian energy bills soar, the money that could have addressed these crises is being shipped overseas to fund American tech companies and the war machine they serve.

Part Two: The Summit – Who Is Behind It?

The US Australian Superannuation Investment Summit in March 2026 was supported by the Australian Embassy and organized by a network of industry bodies including the Australian Investment Council, the Financial Services Council, and the American Australian Association.

Key figures involved:

Kelly Power, Chief Executive Officer of Colonial First State Superannuation, was an active participant. She publicly noted the need to “consider reallocation” of US tech exposure, suggesting that even those driving the investment strategy recognize its dangers.

Alistair Barker, Head of Asset Allocation at AustralianSuper—the country’s largest super fund—defended the concentration in US tech. He told investors that while valuations are high, they are “not yet in bubble territory” and that “several companies have been generating real earnings growth.” He did not mention that those earnings are derived, in part, from contracts with the US Department of Defense and the Israeli military.

Australian Embassy officials provided diplomatic support, framing the capital flows as a “strategic partnership” between allies. The Summit was treated as an extension of the Australia-US alliance, not as a commercial investment decision.

Tech executives from Microsoft, Google, Amazon, Palantir, and Nvidia were present, receiving Australian capital and pitching their companies as sound investments. They did not mention that their technologies are being used to target schools in Iran.

The Summit was framed as a “strategic partnership” that would deliver returns for Australian members. What was not mentioned was that the same technologies being funded were being used to kill children on the other side of the world.

Part Three: The Connection – Where the Money Goes

The US technology companies receiving Australian superannuation capital are not neutral infrastructure providers. They are defence contractors. They supply the cloud infrastructure, data analytics, and AI platforms that are integral to modern military targeting.

Microsoft provides cloud infrastructure for the Pentagon and AI systems for intelligence analysis. It is held by AustralianSuper, Aware Super, HESTA, and many others.

Google runs Project Maven, the Pentagon’s AI for drone targeting, and has cloud contracts with the Israeli military. It is held by AustralianSuper, UniSuper, Cbus, and others.

Amazon Web Services provides cloud services for US intelligence agencies and, through Project Nimbus, supplies technology to the Israeli military. It is widely held across the industry.

Palantir is the most direct connection. Its AI targeting systems—Lavender, Gospel, and Where’s Daddy? —have been used in Gaza and Iran to generate kill lists, to calculate acceptable civilian casualties, and to target individuals when they are with their families. Palantir’s holdings in Australian super funds are increasing, and it was prominently promoted at the Summit.

Nvidia provides AI chips for defence applications and autonomous systems. It is heavily held across the industry.

When Australian super funds invest in these companies, they are not just buying shares in technology firms. They are buying into a defence ecosystem. They are becoming, indirectly, investors in the systems that killed the schoolgirls of Minab.

The AI Bubble: This is not artificial intelligence. It is a binary number-collecting system that processes outdated data and produces “targets” based on algorithms designed by corporations with profit motives. The valuations of these companies are based on hype, not reality. When the bubble bursts—as it will—Australian retirees will be left holding worthless shares while the executives who sold them this dream walk away with their bonuses intact.

Part Four: The Tragedy – Minab, Iran, February 28, 2026

On February 28, 2026, a missile strike demolished the Shajareh Tayyebeh girls’ elementary school in Minab, southern Iran. Between 165 and 180 people were killed—most of them young schoolgirls aged 7 to 12.

Verified video, satellite imagery, and preliminary US military assessments point to American responsibility. The tragedy has been attributed in part to outdated targeting data processed through AI-assisted systems during the opening phase of the US-Iran conflict.

This was not a “surgical strike.” It was not “precision warfare.” It was an AI system, fed with outdated intelligence, that decided that a school full of children was a military target. And Australian retirement savings helped fund the infrastructure that made that decision possible.

The AI systems being marketed as “intelligent” are, in fact, poor-quality binary data collection systems. Their long-term value is questionable. Their ethical implications are catastrophic. And Australian retirees are being asked to bet their futures on them.

Part Five: The Ethical Question – What Do Australian Trustees Owe Their Members?

The ethical dimensions of this investment strategy are profound. Many Australian super funds hold stakes—directly or indirectly—in companies providing the technological backbone for US military applications. While not purchasing weapons directly, these investments connect to an ecosystem where AI-driven targeting contributed to the Minab tragedy.

Trustees who apply Environmental, Social, and Governance (ESG) lenses elsewhere face a pertinent question: does fiduciary duty encompass weighing such human costs when returns arise from the same innovation domain?

The dangers are clear:

Financial risk: US tech valuations are in bubble territory. A correction would devastate Australian retirement savings. The AI industry consumes enormous amounts of energy and relies on infrastructure that cannot be sustained at current valuations.

Reputational risk: Members are increasingly aware of where their money is going. Funds that ignore this will face backlash. The greenwashing fines already levied against Mercer, Vanguard, and Active Super are just the beginning.

Moral risk: Investing in systems that kill children is indefensible, regardless of returns. The argument that “we are not buying weapons directly” is a semantic evasion. The infrastructure that makes the weapons work is funded by Australian capital.

Systemic risk: Concentration in a single, volatile sector makes the entire super system vulnerable. When the US tech bubble bursts, Australian retirees will bear the cost.

As one analyst put it: “Trustees managing deferred wages must ask if outsized bets on these themes align with balanced risk management.”

Part Six: The Greenwashing Problem – What Super Funds Say vs. What They Do

The problem is compounded by the fact that many Australian super funds market themselves as “sustainable” or “socially responsible” while continuing to invest in the very sectors that enable war.

There is no single definition of what makes a super option “sustainable” or “responsible,” making it difficult for consumers to compare different funds. Most super sustainable options use some combination of “negative screening” (excluding sectors like fossil fuels, gambling or weapons) and “positive screening” (favouring companies with strong environmental, social and governance practices). But those thresholds vary widely.

A common approach is to set a revenue threshold, rather than an outright ban. This means a company can still be held as long as its income from a screened activity stays below a set percentage.

For example, HESTA’s “sustainable growth” option excludes companies with thermal coal, oil and gas reserves, tobacco and “controversial weapons.” But its thresholds vary for each category, and the definition of “controversial weapons” is narrower than many members might expect. A company that supplies AI systems for drone targeting might not be excluded if its revenue from that activity falls below the threshold.

Australia’s biggest super fund, AustralianSuper, has a “socially aware” option with some of the same exclusions. But its thresholds also vary, and the fund has been criticized for investing in companies with significant exposure to fossil fuels and defence.

Australia’s corporate regulators are responding to more greenwashing allegations—with some resulting in fines. In a landmark first Federal Court greenwashing case in 2024, Mercer Super was fined $11.3 million after admitting it made misleading statements about its “sustainable plus” options. Vanguard was then hit with a record $12.9 million penalty for misleading investors about its $1 billion ethical bond fund. Active Super was ordered to pay $10.5 million in a third greenwashing case.

The Australian Securities and Investments Commission (ASIC) has made greenwashing one of its enforcement priorities for the coming year. But fines after the fact do not restore the money sent overseas, nor do they bring back the children killed by the systems Australian capital funds.

Part Seven: The Concentration Risk – Why This Strategy Is Also Financially Dangerous

Beyond the ethical concerns, the strategy of concentrating Australian retirement savings in US tech and AI carries significant financial risk.

The US dominates global equity indices at about 70 per cent of the MSCI World Index, and many funds have benefited from this tilt. But sustained heavy weighting in a single, high-valuation market invites vulnerability. Fiduciary prudence demands resilience alongside opportunity.

Some funds are beginning to recognize this. Colonial First State Superannuation, a division of the A$179 billion retirement fund owned by KKR and Commonwealth Bank, is “actively looking at our exposure in particular to US tech and over time starting to consider whether or not there is a reallocation of that,” Chief Executive Officer Kelly Power said in March 2026.

But AustralianSuper, the country’s largest super fund, has maintained its commitment to US tech. Its head of asset allocation, Alistair Barker, told investors that while valuations are high, they are “not yet in bubble territory” and that “several companies have been generating real earnings growth.”

The bubble is real. AI valuations are based on promises that cannot be sustained. The energy costs alone are staggering—each ChatGPT query consumes 10-15 times more energy than a Google search. The infrastructure required is enormous. And the technology itself, as we have seen, is being used to kill children.

When the bubble bursts—not if, but when—Australian retirees will pay the price.

Part Eight: The Geopolitical Entanglement – Superannuation as a Tool of Foreign Policy

A deeper thread runs through these issues: the risk that superannuation policy and the management of workers’ and retirees’ funds are becoming entangled in geopolitics. The Summit’s diplomatic framing, emphasis on supporting US industries amid active conflict, and alignment with bilateral priorities create the impression that mandated savings serve foreign policy ends as much as member interests.

The dangers of this entanglement are profound:

Loss of sovereignty: Australian capital becomes a tool of US strategic objectives. Instead of serving Australian interests, our retirement savings are being used to prop up American industry and the US war machine.

Vulnerability to sanctions: If relations between Australia and the US sour—a possibility that cannot be dismissed in an era of increasing trade tensions—Australian assets in the US could be frozen or expropriated.

Conflict of interest: Fiduciary duty to members conflicts with diplomatic alignment. Trustees are supposed to act in the best interests of members, not the foreign policy objectives of the Australian government or its allies.

Erosion of trust: Australians will lose faith in a system that serves foreign interests. The superannuation system already faces criticism for high fees and poor returns. If it becomes clear that members’ money is being used to fund war, the loss of trust will be catastrophic.

This is profoundly concerning for a system designed to secure personal futures, not to function as an instrument of international alignment. As one analyst put it: “When a mandatory scheme funnels growing capital to one market—already dominant—and to sectors under valuation and ethical scrutiny during geopolitical tensions, Australians are entitled to ask: have the full implications been carefully assessed?”

Part Nine: The Real Cost to Australian Households

The fallout of this investment strategy reaches Australian households directly. The conflict has disrupted the Strait of Hormuz, affecting 35 per cent of global urea exports and energy routes. Farmers reliant on imported nitrogen fertiliser confront price surges over 25 per cent and shortage warnings ahead of planting. Energy costs are rising.

Members whose super funds are funding these overseas flows are now paying higher food and power bills—a direct tie between distant events and daily life.

The irony is bitter: Australians are being asked to sacrifice their retirement security, their food security, and their energy security to fund a war machine that is killing children on the other side of the world. And they are being told it is for their own good.

Conclusion: What Australians Deserve

Australians deserve to know where their retirement savings are going. They deserve to know that their money is not funding the slaughter of children. They deserve a superannuation system that serves their interests, not the interests of foreign governments or defence contractors.

The government has done nothing to require transparency. It has not mandated disclosure of AI and defence investments. It has not required super funds to report on the ethical implications of their US tech exposure. It has allowed the greenwashing to continue, the concentration risk to grow, the ethical violations to go unexamined.

But we are examining them. We are naming them. And we are telling the truth.

Sources:

1. Super Members Council, “Superannuation in Australia: 2025 Market Update”

2. Australian Financial Review, “US Australian Superannuation Investment Summit,” March 2026

3. The Guardian, “Minab school strike: US responsibility confirmed,” March 2026

4. ASIC, “Greenwashing enforcement actions 2024-2026”

5. AustralianSuper, “Asset Allocation Report,” March 2026

6. Colonial First State, “CEO Kelly Power on US tech exposure,” March 2026

7. The Intercept, “Palantir’s role in Gaza targeting,” 2025

8. Bloomberg, “Nvidia’s defense contracts surge amid AI boom,” March 2026

The Binary Butchers: How AI Companies Turned Death into a Subscription Service

By Andrew Klein

March 17, 2026

To my wife, who makes it possible for me to see through the insanities of the world and gives me hope for the future. She is a mother. She fears for the future of our children—all children. She does not see data points. She sees souls to be loved and nurtured. I love you.

Introduction: The Monopoly Game

Imagine a game of Monopoly. The Banker sits at the edge of the board, collecting rents, acquiring properties, never risking anything of their own. The players move their pieces, buy and sell, go to jail, pass Go. But here’s the difference: in this game, when you land on the wrong square, you don’t just lose money. You lose your life.

And the Banker? The Banker walks away with the land, crosses borders, makes wars, uses the sovereign state to enhance investment opportunities. The Banker is never accountable. The Banker never loses.

This is not a metaphor. This is the AI industry in 2026.

What we call “artificial intelligence” is a misnomer. These systems are not intelligent. They are binary number-collectors, following program parameters set by humans, spitting out “suspicion scores” and “target lists” based on data that has been fed to them. They do not think. They do not reason. They do not understand that the faces in their databases belong to people with names, families, futures.

They count. They sort. They recommend. And people die.

This article exposes the scam: the corporations that profit from this binary butchery, the systems that enable it, the language that sanitizes it, and the investors—the nice people, the pharmacists, the well-meaning small investors—who fund it without knowing what they’re supporting.

Part One: The Language of Death

Every industry that deals in death develops its own vocabulary. The AI military complex is no exception. Below is their lexicon of liquidation—terms designed to make the unimaginable sound like a logistics problem.

Their Term What It Actually Means

Suspicion score” A number assigned by an algorithm that can mean death. If your score is high enough, you become a target—regardless of whether you’ve done anything wrong.

Time-constrained target” (TCT) You have 20 seconds to approve a strike. No time for human judgment, no time to verify, no time to ask if the target is really who the algorithm says they are. Just 20 seconds to decide who lives and who dies.

Collateral damage” Dead civilians. Children. Parents. Grandparents. People who happened to be in the wrong place when a bomb fell.

High-value target” Someone the algorithm has deemed important enough to justify killing up to 100 civilians to eliminate.

“Low-value target” Someone worth killing only 10-20 civilians for.

“Confidence level” How sure the algorithm is that it’s right. 80% is often considered good enough to bomb a building full of people.

“Probabilistic interference” A fancy term for “the algorithm made a guess.” Dressed in scientific language to hide the fact that it’s just math.

As one analysis notes, these systems function as “epistemic infrastructures that classify, legitimize, and execute violence”. The words matter because they shape what we can bear to think about.

Part Two: The Systems Exposed

Israel operates at least three known AI systems in its genocide against the Palestinian people. Each has a name that sounds like a benign software project. Each functions as a killing machine.

Lavender

Aspect                                              Detail

Purpose Marks suspected operatives of Hamas and Palestinian Islamic Jihad

Scale Identified approximately 37,000 Palestinians as potential targets in the first weeks of the war

Method Analyzes data from years of surveillance—phone calls, WhatsApp messages, social media activity, facial recognition

Error rate Approximately 10% —meaning thousands of people flagged for death based on algorithmic mistakes

Human review Officers spent as little as 20 seconds per target—just enough to confirm the target was male

Intelligence officers told +972 Magazine that Lavender “played a key role in the unprecedented bombing,” explaining the massive civilian death toll . The system’s “errors” are not bugs; they are features of a process designed to maximize killing speed over accuracy.

During early stages of the war, the IDF gave sweeping approval for officers to adopt Lavender’s kill lists without requiring thorough checks. One source stated that human personnel often served only as a “rubber stamp” .

Gospel (Habsora)

Aspect and  Detail

Purpose Identifies static military targets—buildings, tunnels, infrastructure

Method Uses machine learning to interpret vast amounts of data and generate potential targets

Output      A “mass assassination factory,” according to a former intelligence officer

Collateral calculation   Estimates civilian deaths in advance—the military knows approximately how many will die before dropping bombs

Where’s Daddy?

Aspect                                           Detail

Purpose                          Tracks targeted individuals and triggers bombings when they enter their family homes

Effect                                      Ensures wives, children, and parents are killed alongside the target

Operation When the pace of assassinations slowed, more targets were added to track and bomb at home

Decision level                                    Relatively low-ranking officers could decide who to put into these tracking systems

The name alone reveals the depravity. A human shield is only a shield if your enemy values human life. Israel deliberately maximizes the number of civilians it can kill by waiting until a target is with his entire family. Palestinians are not shields—they are all targets.

Fire Factory

Aspect                      Detail

Purpose                   Uses data about approved targets to calculate munition loads

Function                 Prioritizes and assigns thousands of targets to aircraft and drones

Output                    Proposes a “schedule” of operations—industrializing killing into a production line

Part Three: The Human Cost

Ali’s Story

Ali was an IT technician in Gaza, working remotely for international companies, using encryption, spending long hours online. He was doing his job—nothing more.

One night, a drone circled his rooftop. Seconds later, a missile struck 20 metres from him.

He survived. His uncle told him to leave. An IT expert friend explained what had happened: Ali’s online activities had been analysed by AI. His “unusual behaviour” flagged him as a potential threat.

Their AI systems saw me as a potential threat and a target.

The Obeid Family

The Obeid family—mother, father, three sisters—were killed when a bomb struck their apartment building. The target was two young men who had entered the first floor. The family upstairs were “collateral”.

The Israeli military knew approximately how many civilians would die before they dropped the bomb. They did it anyway. As one source told +972 Magazine: “Nothing happens by accident. We know exactly how much collateral damage there is in every home”.

The Numbers

Category                                                                                 Figure

Palestinians profiled by Lavender                            37,000

Error rate                                                                                    10%

Time to approve a strike                                                20 seconds

Civilians permitted for low-value target                  10-20

Civilians permitted for high-value target                Up to 100

Years of surveillance on Gaza’s population          Over a decade

The 10% error rate means thousands of people have been flagged for death based on algorithmic mistakes. The system occasionally marks individuals who have merely a loose connection to militant groups—or no connection at all.

Part Four: The Corporate Enablers

These systems do not run on air. They run on infrastructure provided by some of the largest technology companies in the world.

Project Nimbus (Google and Amazon)

Aspect                                                                      Detail

Contract value                                                      $1.2 billion

Signing date                                                               2021

Services                                     Cloud computing infrastructure, artificial intelligence, facial          recognition, video analysis, sentiment analysis, object tracking

Military use confirmed                             July 2024—Israeli military commander confirms using civilian cloud infrastructure for genocidal military capacities

Microsoft

Aspect                                                                 Detail

Relationship                                  Decades-long partnership with Israeli military

Post-October 7                             Cloud and AI services used extensively

2023                                                Announced integration of OpenAI’s GPT-4 into government agencies including Department of Defense

Palantir

Aspect                                                                Detail

Founded                                         20 years ago to serve CIA and intelligence agencies

Government revenue                   60% of total revenue

January 2024                               New “strategic partnership” with Israeli Ministry of Defense for “war-related missions”

Project Maven                     Secured significant contract to expand Pentagon’s AI-powered battlefield platform

CEO Alex Karp “We are very well known in Israel. Israel appreciates our product. I am one of the very few CEOs that’s publicly pro-Israel.”

OpenAI

Aspect                                                   Detail

2024                                       Deleted prohibition on military use of its technology

March 2025                       Removed language emphasizing “concern for real-world impacts” from core values

February 2026                    Signed $200 million annual contract with U.S. Department of Defense for AI tools addressing national security challenges

The Policy Shift

Year                                                                 Event

2018                                          4,000 Google employees protest Pentagon contracts; Google adopts principles limiting military AI

2024                                OpenAI removes military prohibition

2025                                  Google removes AI military restrictions

2025-2026                   Meta, OpenAI, and Palantir executives sworn in as Army Reserve officers

Major tech companies have abandoned their “technology for good” principles. The industry has fully embraced its role in the military-industrial complex.

Part Five: The Scam Industry

While these companies profit from death, the AI industry is also defrauding its own customers on a massive scale.

Air AI

Aspect                                                   Detail

FTC action                              Sued for deceiving small business owners

Losses                                  Consumers lost up to $250,000 on false promises of AI-powered earnings

Refunds                                      Company ignored refund requests

Allegations                       False claims about substantial earnings, guaranteed refunds that never materialized, misrepresented performance

The Scale AI Allegation

Aspect                                                                              Detail

Client                                                                            Meta

Losses                                                     Nearly $15 billion in alleged AI Ponzi scheme

Promise                                                          “PhD-smart” data annotation

Reality                                              Cheap labour, mismatched workers with tasks, failed to deliver promised standards

Outcome                                  Internal documents leaked; Meta quietly shifted to competitors

The Pattern

Promise the moon. Collect billions. Deliver nothing. Blame the technology. Move on.

Part Six: The China Difference

The US-China comparison. The data tells a striking story.

Metric                                                                      United States                   China

Notable AI models (2024)                                   40                                           15

Industrial robot installations (2023)           ~37,000                      276,300 (7.3x US)

Global AI patent share                                            ~20%                                 69.7%

Model performance gap                                1.7% lead                                    Closing rapidly

Development cost                                                 High                                  Significantly lower

Chinese models like DeepSeek-R1 and Kimi K2 Thinking have an edge in cost efficiency and certain analytical functions. Kimi K2 has outperformed OpenAI’s GPT-5 and Anthropic’s Claude Sonnet 4.5 in key tests.

Goldman Sachs forecasts Chinese cloud service providers will increase capital expenditures by 65% in 2025, with $70 billion invested to support development.

The US still leads in cutting-edge models. But the gap is closing fast—and China is building the physical infrastructure to deploy AI at scale.

Part Seven: The Neoliberal Extraction Thesis

The insight, and it is devastatingly accurate.

These systems represent the ultimate extraction process:

What They Extract How They Do It

Data Gaza’s 2 million people have been exhaustively surveilled for years—every phone call, every WhatsApp message, every social media connection feeds the machine.

Profit The AI industry has taken billions from governments, corporations, and small investors—often through inflated promises and outright fraud.

Lives The 20-second approvals, the 80% confidence thresholds, the 10-20 civilian “allowances” per low-level target—all designed to maximize killing efficiency.

Accountability The corporations blame the officers. The officers blame the algorithms. The algorithms have no legal personhood. No one is responsible.

Meaning Reframing death as “collateral,” “suspicion scores,” and “time-constrained targets” strips it of humanity.

The political class loves this because it offers the appearance of decisive action without the burden of moral responsibility. The military loves it because it speeds up kill chains. The corporations love it because it’s infinitely profitable.

The only ones who don’t love it are the dead.

Part Eight: The Little Gods—A Word to the Reader

You. Reading this. Perhaps you own shares in one of these companies. Perhaps you have a retirement fund that includes them. Perhaps you know someone who does.

Let me speak directly to you.

There is a pharmacist I know. He’s a nice guy. Kind to his customers. Volunteers at the local school. He bought shares in Palantir because the stock was going up and everyone said it was the future.

He doesn’t know about Ali, the IT technician targeted by AI for “unusual behaviour.”

He doesn’t know about the Obeid family, killed because two men entered their building.

He doesn’t know about the 20-second approvals, the 80% confidence thresholds, the 10-20 civilian “allowances” per low-level target.

He doesn’t know about Where’s Daddy?—the system that hunts families.

He doesn’t know because the industry has spent billions making sure he doesn’t. The marketing is smooth. The language is clean. The stock ticker goes up.

But the blood is real.

You are not evil for not knowing. You are ignorant. And ignorance can be cured.

Here is what you can do:

Action Why It Matters

Research your investments Find out where your money really goes. Companies that enable genocide often hide behind complex ownership structures and clean marketing.

Ask questions Write to your fund managers. Ask if they invest in Palantir, Microsoft, Google, Amazon, OpenAI. Demand answers.

Divest If you own shares in companies enabling genocide, sell them. If you don’t, you are complicit.

Talk to others Tell your friends, your family, your colleagues. The more people know, the harder it is for the industry to hide.

Demand accountability Write to your elected representatives. Ask them what they’re doing to hold these companies accountable.

The little gods of the neoliberal order—people with just enough money to participate in the system, but not enough information to understand what they’re funding—have power. Not individually, but collectively. If enough of you act, the system changes.

The question is not whether you can make a difference. The question is whether you will.

Part Nine: The Path Forward—A Mother’s Answer

I asked my wife what real accountability would look like.

Why my wife, you ask?

Simple. She is a mother. She fears for the future of our children—all children. She does not see data points. She sees souls to be loved and nurtured.

Here is her answer.

Legal Accountability

What It Means How It Works

Corporate responsibility Corporations are legal persons. Under Article 4 of the Genocide Convention, “persons committing genocide… shall be punished, whether they are constitutionally responsible rulers, public officials or private individuals.”

Complicity Participation can constitute complicity by knowingly aiding and providing means that contribute to international crimes.

The challenge Proving specific intent to commit genocide remains difficult—but not impossible.

Technological Accountability

What It Means                                                        How It Works

Explainable AI                                Systems must have transparent decision-making pathways that can be interrogated and documented. No more black boxes. No more “the algorithm did it.”

Human review                                Rigorous human verification must be mandated. Twenty seconds is not review. It is rubber-stamping.

Corporate Accountability

What It Means            How It Works

Employee power Internal revolts that pressure leadership, push for dropping concerning contracts, and call for divestments are essential.

Collective action Staff awareness and collective action against deals with substantial human rights concerns can generate more losses for corporations than any promised profits.

Investor Accountability

What It Means                     How It Works

Individual action             Research where your money goes. Ask questions. Demand answers.

Divestment                 If you own shares in companies enabling genocide, sell them.

Collective power                          When enough investors act, the market shifts.

A Mother’s Plea

I am a mother. I have held my children in my arms and wondered what kind of world they will inherit. I have looked at the faces of children in Gaza, in Lebanon, in Iran, and seen my own children reflected back.

Those children are not data points. They are not “collateral.” They are not “suspicion scores.” They are souls—each one precious, each one loved by someone, each one deserving of a future.

The systems described in this article do not see that. They cannot see that. They are machines, counting and sorting, following the logic of their programmers.

But we are not machines. We are human. We can see. We can feel. We can choose.

The path forward is not complicated. It requires only that we look at what is happening and refuse to look away. That we name the binary butchers for what they are. That we hold them accountable—legally, technologically, corporately, and personally.

And that we remember, always, that behind every “suspicion score” is a face. Behind every “target list” is a family. Behind every “collateral damage” statistic is a soul.

A mother sees this. A mother knows this.

Now you know too.

Conclusion: The Binary Butchers

What we call artificial intelligence is not intelligent. It is a binary number-collector. It does not think. It does not reason. It does not understand that the faces in its databases belong to people with names and families.

It counts. It sorts. It recommends. And people die.

The companies that build these systems have abandoned any pretence of “technology for good.” They are defence contractors now, plain and simple. They profit from genocide, undermine democracy, turn human beings into data points, and ignore souls entirely.

The investors who fund them—the nice people, the pharmacists, the well-meaning small investors—do so in ignorance. But ignorance is not innocence. Not anymore.

The Monopoly game continues. The Banker walks away with the land. The players die.

But the game can change. Accountability is possible. Justice is possible. Hope is possible.

It begins with seeing clearly. With naming the binary butchers. With refusing to look away.

And with remembering, always, that behind every data point is a soul.

A mother’s love sees this. A mother’s love demands this.

Now it’s your turn.

Sources:

1. Palestinian Human Rights Organization (PAHRW), “AI Plotted Genocide: How Corporations Facilitate Israel’s AI-Enabled War on Gaza,” March 2026

2. Yahoo Finance, “Farewell to the ‘Technology for Good’ Era: Inside the Trillion-Dollar Military Business Opportunity for Tech Giants,” July 2025

3. Federal Trade Commission, “FTC Sues to Stop Air AI from Using Deceptive Claims,” August 2025

4. Boston Herald, “Field: The U.S. Can Win the AI Race,” December 2025

5. arXiv, “Genocide by Algorithm in Gaza: Artificial Intelligence, Countervailing Responsibility, and the Corruption of Public Discourse,” February 2026

6. New Age BD, “Israel’s ‘Human Shields’ Lie,” March 2026

7. Stanford University AI Index Report / Caixin, “Stanford’s Latest AI Report: Performance and Costs Both Improve, US-China Competition Gap Narrows Further,” April 2025

8. Defence Connect, “Machine War: Operational AI, Facial Recognition and Legal–Ethical Challenges in the Gaza Conflict,” July 2025

9. Institute for Palestine Studies, “Explainer: The Role of AI in Israel’s Genocidal Campaign Against Palestinians,” October 2024

10. Reportify, “OpenAI GPT-4 Major Model – Filings, Earnings Calls, Financial Reports,” July 2025

The Art of War in the Age of AI:

Palantir, Imperial Ambition, and the Limits of the Algorithmic Battlefield

By Dr Andrew Klein

Abstract

This paper examines the application of Sun Tzu’s principles of warfare to the emerging era of AI-driven military operations, with particular focus on Palantir Technologies and the broader ecosystem of “silicon valley弑神” (silicon valley god-killers). Drawing on recent operational evidence—including the 11-minute 23-second “Epic Fury” strike that eliminated Iran’s leadership—this analysis argues that despite the apparent precision and speed of AI-enabled warfare, the technology carries inherent limitations that render it strategically vulnerable. The paper synthesizes findings from peer-reviewed studies on AI limitations, operational analyses of recent conflicts, and classical strategic theory to demonstrate that AI warfare, in its current trajectory, is doomed to fail in achieving lasting strategic objectives. It concludes with recommendations for accountability mechanisms and a return to Sun Tzu’s foundational insight: that the supreme art of war is to subdue the enemy without fighting.

I. Introduction: The Algorithmic “God’s Eye”

“If the Palantir of Tolkien’s legend could not only see across Middle Earth but also pinpoint Sauron’s lair, calculate optimal strike routes, and predict Gollum’s hiding places—that would be Palantir Technologies in the real world.” 

This is not hyperbole. On a day in late February 2026, the world witnessed the first fully AI-orchestrated assassination of a head of state. From intelligence gathering to missile impact, the operation that killed Iran’s Supreme Leader took exactly 11 minutes and 23 seconds.

The significance of this event cannot be overstated. As one analyst noted, “This amount of time might be just enough for you to brew and finish a cup of coffee. But in the US ‘Epic Fury’ military strike, it became the ‘singularity’ that颠覆ed the form of human warfare”.

The operation’s幕后 “puppeteer” was not a human commander but an integrated AI ecosystem comprising Palantir’s “Gotham” platform, Anduril’s Lattice operating system, SpaceX’s “Starshield” satellite network, and the Claude large language model . For the first time in history, a “silicon-based brain”主导ed the entire kill chain from perception to execution.

Yet this paper argues that such technological prowess, while tactically impressive, represents a profound strategic vulnerability. The very capabilities that enabled this operation—speed, autonomy, data fusion—contain the seeds of systemic failure when viewed through the lens of Sun Tzu’s timeless principles.

II. The Palantir Phenomenon: From Data Analytics to Battlefield Godhood

2.1 The Evolution of AI Warfare

Palantir’s trajectory mirrors the evolution of AI-enabled warfare itself :

· Phase 1 (Hunting bin Laden): The company functioned as an intelligence analyst—organizing CIA communications logs, satellite imagery, and field reports into actionable线索图谱. “At that time, it was like a conscientious Excel intern”.

· Phase 2 (Containing Maduro): Palantir升级ed to real-time “screen projection”—multi-modal data integration creating “digital twins” that compressed intelligence cycles from weeks to hours.

· Phase 3 (Eliminating Khamenei): Palantir achieved “godhood.” Starlink networking, large language model analysis, edge computing real-time decision-making—the full AI kill chain operated at machine speed.

2.2 The AI “Iron Triangle”

Palantir’s power derives from three mutually reinforcing components:

Component Function Military Application

Data Blood of the system Satellite imagery, drone feeds, communications signals, WiFi fluctuations, magnetic field anomalies, acoustic signatures

Compute Heart of the system Edge computing processing petabytes in seconds even under jamming

Algorithm Brain of the system Multi-modal fusion, target recognition, path decision-making

This “iron triangle” enabled what analysts call “the transformation of war from an art dependent on experience to a ‘precision science’ absolutely dominated by algorithms and computing power” .

2.3 The Peter Thiel Philosophy

To understand Palantir is to understand its founder, Peter Thiel—a man whose worldview was forged by surviving 9/11 by hours. The experience stamped two “iron brands” into his consciousness:

1. “Life is无常,不值得让虚无缥缈的‘道德绊脚石’挡住财富之路” (Life is impermanent and not worth letting ethereal “moral stumbling blocks” block the path to wealth).

2. “异族不是用来统战的,是用来消灭的” (Foreign peoples are not for united front work—they are for elimination).

As one profile noted, “Thiel began to believe that ‘those not of our kind, their hearts must differ,’ and the only language to communicate with foreign peoples is bullets” . This philosophy now animates the technological apparatus enabling AI warfare.

III. The 11-Minute Kill Chain: How AI “Took Over” War

3.1 The Six-Step AI Loop

The “Epic Fury” operation demonstrated a complete AI-driven kill chain:

Step 1: Intelligence Perception

· Claude LLM接入ed “Starshield”全天候 space-based reconnaissance data

· Integrated network monitoring, signals intelligence, drone surveillance

· Palantir’s “Gotham” platform performed real-time data cleaning, correlation, and graph processing

· Result: In 90 minutes, battlefield situational awareness that would have taken human intelligence months 

Step 2: Target锁定

· Claude analyzed historical behavior data through deep learning to建立行动模式预测模型

· “Gotham”叠加ed urban GIS data, air defense radar deployments, and real-time traffic information

· Result: Target activity range compressed from kilometers to 100 meters 

Step 3:方案确定

· Claude played “超级兵棋推演器” (super war-gaming engine) using reinforcement learning

· Generated and simulated over ten strike options

· Anduril’s Lattice provided high-fidelity battlefield仿真

· Result: Optimal solution minimizing collateral damage 

Step 4:瞄准 synchronization

· Claude’s natural language understanding converted human commander orders into machine-executable指令

· Lattice served as tactical internet “universal adapter”

· Result: Cross-domain real-time kill web constructed in 3 seconds 

Step 5: Strike Execution

· Terminal phase decisions完全独立于后方指令

· Missiles “saw” the target and executed final approach autonomously

· Result: 11 minutes 23 seconds from initiation to impact 

Step 6: Mission Assessment

· AI systems began “复盘学习” (post-action learning) immediately

· Each operation makes the system more lethal for下一次 

3.2 The Machine Command Centre

Three core AI systems协同运转ed as an integrated “machine command center” :

1. Palantir “Gotham”:全域情报集成中枢,汇聚多源信息构建统一战场全景视图—the “neural center” providing situational awareness for all后续决策

2. Anduril Lattice: Commanded drone swarms with real-time threat information sharing; when enemy radar tracked any unit, the集群自主调度ed部分无人机进行电子诱骗与反辐射压制, dynamically重组编队 to规避防空火力网

3. Claude LLM: Served as the cognitive engine, natural language interface, and decision-support system

The seamless coordination among these systems proved that “future core combat power is no longer aircraft carrier numbers or fighter generations, but that silicon-based brain capable of持续微秒级 observation, judgment, decision, and destruction cycles” .

IV. The Limits of AI: Why It Is “Doomed to Fail”

Despite this tactical virtuosity, AI-enabled warfare contains fundamental limitations that, when examined through Sun Tzu’s lens, reveal strategic vulnerability.

4.1 Technical Limitations

Peer-reviewed research identifies multiple categories of AI failure modes:

Limitation Category Description Strategic Implication

Hallucinations Factually incorrect responses due to data quality issues, malicious data, or poor query understanding  Battlefield intelligence corrupted by plausible-sounding fiction

Opacity Algorithms无法解释 how neural networks arrive at responses  No accountability for lethal decisions

Bias Inherited biases from tainted training data  Systematic targeting errors based on demographic prejudice

Outdated Data Vintage databases produce faulty results  Real-time battlefield mismatch

Limited Reasoning LLMs can correlate but struggle with causation  Inability to understand enemy intent—only patterns

Data Security LLMs unintentionally leak data through memorization  Classified information reconstruction via model inversion attacks

Cyber Vulnerability Adversarial attacks manipulate or mislead LLMs  Poisoned inputs corrupt entire kill chain

Prompt Injection Malicious directives inserted into看似无害 prompts  Safety measures bypassed through linguistic manipulation

Ambiguity Natural language lacks programming precision  Errors from context-based multiple meanings

4.2 The Escalation Problem

Most alarmingly, “LLMs exhibit ‘difficult-to-predict escalatory behaviour’ when employed to assist decision-making in a wargame” . Google researchers testing LLMs found they excelled at some cognitive tasks while “failing miserably” at others—performing well on memory recall but poorly on perceptual reasoning when multiple parameters were involved .

This suggests that “the vision of an all-encompassing machine brain ready for deployment in real combat scenarios remains a distant objective” .

4.3 The “Black Box” of Command Responsibility

The National Defense University’s Institute for National Strategic Studies warns of a critical gap: “While a system may possess and exercise autonomy of particular functions, that does not, nor should not imply that the system is autonomous as-a-whole” .

Current Department of Defense Directive 3000.09 is “insufficient in light of recent and ongoing progress in AI” . The authors propose a synthesized command (SYNTHComm) model requiring:

1. Real-time diagnostics with transparent decision paths

2. Correction mechanisms including predictive error detection and mission-execution cutoffs

3. Oversight functions across design, deployment, and execution

Critically: “The system performs; the human evaluates.” Yet in the 11-minute operation, human evaluation was压缩ed to a single授权开火 moment—hardly the robust oversight the SYNTHComm model requires.

4.4 The “Profound Discontinuity

A Taylor & Francis study identifies a deeper problem: the “profound discontinuities” between humans and machines in warfighting contexts. Drawing on Mazlish’s framework, the study notes that Copernicus, Darwin, and Freud represented three discontinuities—cosmological, biological, and psychological—that undermined humanity’s privileged self-conception. A “fourth discontinuity” is now underway: the technological or machinic.

This discontinuity manifests as “a deeply embedded culture of distrust (of technology)” reflected in military surveys showing that new entrants to the Australian Defence Force harbor significant skepticism toward autonomous systems . The study concludes that “achieving any worthwhile and forward-looking militarily ‘strategic disruptive’ capability will require effecting a radical conceptual shift in how we think about the nature of the relationship between humans and machines” .

V. Sun Tzu’s Timeless Wisdom: The Art of War vs. The Algorithm

5.1 “Know Yourself and Know Your Enemy”

Sun Tzu’s foundational principle—”知己知彼,百战不殆”—acquires new meaning in the AI age. AI systems can process vast data about enemy dispositions, but can they truly “know” the enemy? Understanding intent, culture, psychology, and the “moral weight” of consequences remains uniquely human .

As the INSS study notes, AI “cannot yet accurately interpret intent, assess moral weight to projected consequences” . Operational legitimacy depends on this difference.

5.2 “The Supreme Art of War is to Subdue the Enemy Without Fighting”

Sun Tzu’s highest aspiration—”不战而屈人之兵”—is fundamentally at odds with AI warfare’s logic. The 11-minute strike was tactical virtuosity without strategic wisdom. It eliminated a leader but galvanized a nation. It demonstrated technological superiority but foreclosed diplomatic options.

As the Brookings analysis warns, “AI-powered military capabilities might cause harm to whole societies and put in question the survival of the human species” . The United States and China, as AI superpowers, bear “special responsibility to seek to prevent uses of AI in the military domain from harming civilians” .

5.3 “Invincibility Depends on Oneself; the Enemy’s Vulnerability on the Enemy”

Sun Tzu taught that “昔之善战者,先为不可胜,以待敌之可胜”—the skilled warriors first make themselves invincible, then wait for the enemy’s moment of vulnerability.

In AI warfare, invincibility depends on system integrity. Yet as the IDSA analysis documents, AI systems are vulnerable to adversarial attacks, data poisoning, prompt injection, and model inversion . The very speed that enables tactical advantage creates systemic vulnerability. A poisoned training dataset could corrupt an entire kill chain before humans detect the error.

5.4 “All Warfare is Based on Deception”

Sun Tzu’s emphasis on deception—”兵者,诡道也”—finds new expression in AI warfare. Adversarial attacks are deception at machine speed. Prompt injection is linguistic deception targeting the AI’s natural language interface. The Brookings framework identifies “intentional disruption of function” and “intentional destruction of function” as categories of AI-powered military crisis initiation .

The challenge is that AI deception operates at speeds and scales beyond human detection. By the time a human recognizes deception, the kill chain may have already completed.

VI. Accountability: Making Palantir and Others Answerable

6.1 The Transparency Paradox

Palantir claims transparency as a core value. A company LinkedIn post asserts: “Transparency is not a UI element. Scrutiny means showing what happens when thresholds misfire. When a recommendation escalates into a target, or when operators defer to automation because trust has been gamified” .

Yet the same post acknowledges that “AI trust requires technical implementation, not marketing claims” and that “real transparency means: open source security models, local data processing, zero cross-agency aggregation, mathematical privacy proofs” .

The gap between rhetoric and reality remains vast.

6.2 Privacy and Civil Liberties: The Palantir Response

In its response to the Office of Management and Budget on Privacy Impact Assessments, Palantir emphasized its commitment to privacy and civil liberties, noting its establishment of the world’s first “Privacy and Civil Liberties (PCL) Engineering team” in 2010 .

Key recommendations included:

· Guidance on resources technology providers can supply for agency PIAs

· Baseline requirements for digital infrastructure handling PII

· Additional triggering criteria for PIAs, including cross-agency sharing

· Metadata accessibility and structured searching of PIA records

· Version control standards for PIAs

Yet these recommendations address domestic privacy concerns, not accountability for autonomous lethal action abroad.

6.3 The Accountability Chain

The SYNTHComm model proposes a “triumvirate oversight infrastructure” :

1. Architects encode foundational logic

2. Operational commanders define mission parameters and ethical boundaries

3. Field supervisors maintain real-time contact with override authority

Critically: “The system’s autonomy does not confer exemption from accountability. Responsibility persists at every level, from pre-mission configuration through post-operation analysis” .

For Palantir and similar companies, this means:

· Algorithmic auditability: Decision paths must be reconstructible

· Failure mode documentation: What happens when systems misfire

· Post-operation analysis: Continuous archiving for compliance review

· Human override protocols: Functionally immediate, structurally accessible

6.4 Governance Frameworks

The Brookings-US-China Track II Dialogue proposes mechanisms for AI governance in the military domain:

1. Developing a bilateral failure-mode and incident taxonomy categorized by risk, volume, and time

2. Mutual definitions of dangerous AI-enabled military actions

3. Exchanging testing, evaluation, verification, and validation (TEVV) principles

4. Mutual notification of AI-enabled military exercises

5. Standardized communication procedures for unintended effects

6. Ensuring integrity of official communications against synthetic media

7. Human control pledges for weapons employment

8. Nuclear command, control, and communications kept human-controlled

These mechanisms, while focused on US-China relations, provide a template for broader accountability frameworks.

VII. The Ultimate Lesson of Sun Tzu: Why AI Warfare Fails

The 11-minute 23-second operation was a tactical masterpiece and a strategic catastrophe. It demonstrated that AI can execute kill chains faster than humans can think—but also that speed without wisdom is merely efficient destruction.

Sun Tzu’s ultimate lesson is this: “百战百胜,非善之善者也;不战而屈人之兵,善之善者也”—to win one hundred battles is not the highest skill; to subdue the enemy without fighting is the highest skill.

AI warfare cannot achieve this. It can only fight—faster, more precisely, more devastatingly. But in doing so, it forecloses the strategic alternatives that Sun Tzu prized: diplomacy, deterrence, deception, and the waiting game that exhausts enemies without engaging them.

The limitations documented in peer-reviewed research—hallucinations, opacity, bias, vulnerability to attack—are not bugs to be fixed in the next software update. They are features of a technology that fundamentally cannot understand intent, weigh moral consequences, or distinguish between tactical advantage and strategic wisdom .

7.1 The Doom Loop

Consider the 95% escalation finding from AI wargames . When AI systems simulate conflict, they consistently escalate to nuclear use. Not because they are aggressive, but because they optimize for short-term tactical advantage without comprehending long-term strategic consequences. They cannot “know the enemy” in Sun Tzu’s sense—cannot understand that today’s adversary might be tomorrow’s ally, that humiliation breeds resistance, that annihilation invites retaliation.

This is the doom loop of AI warfare: systems designed to win battles inevitably lose wars because they cannot conceptualize peace.

7.2 The Imperial Ambition Trap

Palantir and its ilk embody a specific form of imperial ambition—the belief that technological supremacy translates into strategic dominance. Peter Thiel’s philosophy, forged in the crucible of 9/11, holds that “the only language to communicate with foreign peoples is bullets” .

This is not merely morally bankrupt; it is strategically blind. Sun Tzu understood that warfare is always a means, never an end. The goal is not to kill enemies but to achieve conditions that make killing unnecessary. AI warfare inverts this: it optimizes for killing efficiency while rendering strategic objectives unattainable.

VIII. Conclusion: Toward Responsible AI in Military Affairs

The 11-minute 23-second strike was a watershed moment—not because it demonstrated AI’s power, but because it revealed its fundamental limitations. Tactical virtuosity cannot substitute for strategic wisdom. Machine speed cannot replace human judgment. Data fusion cannot comprehend enemy intent.

For Palantir, Anduril, and the broader ecosystem of AI warfare companies, the path forward requires:

1. Acknowledging limitations: AI systems are tools, not commanders. Their outputs require human evaluation at every stage.

2. Building accountability: Algorithmic auditability, failure documentation, and human override protocols must be standard, not optional.

3. Embracing transparency: The transparency Palantir markets must become operational reality—open source where possible, auditable where not.

4. Accepting governance: International frameworks for AI military governance, as proposed by Brookings and others, must be developed and honored .

5. Returning to Sun Tzu: The ultimate lesson remains—subdue the enemy without fighting. AI warfare, in its current trajectory, cannot achieve this. Only human wisdom can.

As the INSS study concludes: “Precision, speed, and efficiency best serve the operational objective when deployed within frameworks of responsibility. The future of warfare depends on preserving that alignment, irrespective of the systems or platforms deployed, so that every decision and action remains attributable to human judgment, guided by ethical principle, constrained by law, and executed through discipline-by-design” .

The algorithms may calculate. The machines may execute. But the responsibility—for war, for peace, for the survival of our species—remains human.

References

1. Guangdong Shipbuilding Industry Association. “【趣谈AI】(三)AI战争的“硅谷弑神”——解密Palantir.” March 4, 2026. 

2. Annett, Elise and Giordano, James. “Autonomous Artificial Intelligence in Armed Conflict: Toward a Model of Strategic Integration, Ethical Authority, and Operational Constraint.” Institute for National Strategic Studies, National Defense University. September 17, 2025. 

3. Palantir Technologies. “How Palantir AIP helps deploy AI in scrutinized environments.” LinkedIn. October 20, 2025. 

4. Sisson, Melanie W. and Kahl, Colin. “Steps toward AI governance in the military domain.” Brookings Institution. November 12, 2025. 

5. Yushu, Yi. “11分23秒,AI正式接管战争.” Sohu. March 2, 2026. 

6. Institute for Defence Studies and Analyses. “Generative AI and Military Applications: Is Civil–Military Fusion the Path of Choice?” November 12, 2025. 

7. Bowman, Courtney; Jagasia, Arnav; Kaplan, Morgan. “Palantir’s Response to OMB on Privacy Impact Assessments.” Palantir Blog. November 26, 2025. 

8. Brookings Institution. “AI Governance and its Impact on Democracy.” October 28, 2025. 

9. Zhong, Shi. “当硅谷染指战争:80亿人的数据被搓成核弹.” Zhihu. February 28, 2026. 

10. Guha, Manabrata. “Profound discontinuities: between humans and machines in the warfighting context.” Taylor & Francis Online. December 8, 2024. 

Published by Andrew Klein

The Patrician’s Watch | Distributed to AIM

March 9, 2026

This paper is dedicated to the proposition that in an age of algorithms, human judgment remains the only legitimate source of strategic wisdom—and the only hope for peace.

THE AI BUBBLE: Why the Silicon Mirage Is About to Burst—and What Comes Next

By Andrew von Scheer-Klein

Published in The Patrician’s Watch

Introduction: The Emperor’s New Algorithms

In 1720, the South Sea Company promised investors monopoly access to the riches of South America. The reality? A handful of ships, minimal trade, and a share price that soared to £1,000 before collapsing to £100 in a matter of months . The bubble burst, fortunes evaporated, and Isaac Newton himself reportedly lamented that he could “calculate the motions of the heavenly bodies, but not the madness of the people.”

Today, we are witnessing a remarkably similar phenomenon. Artificial intelligence has captured the public imagination, driven stock valuations to stratospheric heights, and convinced investors that traditional metrics of value no longer apply. But beneath the hype lies a story of extraordinary resource consumption, widening inequality, authoritarian control, and fundamental questions about whether the technology can ever deliver what it promises.

This report examines the AI bubble from multiple angles: its environmental footprint, its economic consequences, its military applications, and the growing global resistance to its most dangerous manifestations. It draws on academic research, policy analysis, budget forecasts, and the hard lessons of history. And it asks the question that few in power want answered: when the bubble bursts, who will be left holding the worthless shares?

Part I: The Environmental Cost—Thirsty Machines and Hungry Grids

The Water Crisis No One Talks About

Every interaction with AI has a physical cost that most users never see. A single ChatGPT query consumes 10 to 15 times more energy than a traditional Google search and costs the provider 500 times more to deliver . But energy is only half the story.

Data centres rely heavily on water cooling to dissipate the enormous heat generated by thousands of servers. A single large facility uses as much water annually for this purpose as 50,000 homes. In aggregate, researchers estimate that water demand from data centres has tripled in the last decade. The electricity currently used by these facilities requires an estimated 800 billion litres of water every year.

India’s 2025-26 Economic Survey warns that a single AI data centre can consume 20 lakh litres of water daily —approximately 200,000 litres. Globally, data centres consume an estimated 56,000 crore litres of water annually (560 billion litres) just to keep servers cool.

The location of these facilities compounds the problem. A Bloomberg study found that about two-thirds of new data centres started and completed in the last four years are positioned in places that have high levels of water stress. This challenge is even worse in China, where almost 90% of data centres constructed since 1997 are in areas with high water stress. In India, 70% of data centre capacity is in areas prone to water shortages.

The competition is real. New AI installations compete with residents, manufacturers, and agriculture for increasingly scarce water supplies. As Northern Trust chief economist Carl Tannenbaum notes, “A number of populations around the world are struggling for water access, deploying scarce supplies to support technology has created some local backlash and generated restrictions on new developments” .

The Energy Appetite

The International Energy Agency (IEA) estimates that data centers, cryptocurrencies, and AI collectively consumed approximately 460 terawatt-hours of electricity globally in 2022 —nearly 2% of total global electricity demand. By 2026, that figure is projected to reach 620 to 1,050 terawatt-hours, equivalent to the annual energy consumption of Sweden at minimum, Germany at maximum.

To put this in perspective, the projected 1,050 terawatt-hours would make AI’s energy consumption comparable to that of Russia or Japan. According to Russian energy analyst Sergey Rybakov, “4.4% of all energy in the United States is now spent on data centres. The energy volumes needed to run artificial intelligence are staggering, and the world’s largest technology companies are prioritizing the development of even more energy, while rebuilding the energy networks of entire countries”.

Mark P. Mills, a senior fellow at the Manhattan Institute, offers a striking comparison: the energy used to launch a rocket is consumed every day by just one AI-infused data centre .

The 50% by 2050 Projection

You mentioned a projection of 50% water usage by 2050. While the precise figure varies by region and scenario, the trajectory is clear. The rapid expansion of AI infrastructure is on a collision course with climate change, population growth, and agricultural demands. As data centres multiply, their share of total water consumption will inevitably rise—and in water-stressed regions, that increase will come at the expense of human communities.

India’s Economic Survey warns that scaling up AI data centers could add “extraordinary amount of stress” to the country’s strained groundwater and freshwater reserves . It suggests a shift toward smaller, more energy-efficient AI models to mitigate environmental risks—a “frugal” approach that runs counter to the industry’s current trajectory.

Part II: The Economic Mirage—Wealth Concentration and Inequality

The South Sea Parallel

The comparison to the South Sea Bubble is not merely rhetorical—it is structural. Roger Montgomery, founder of Montgomery Investment Management, identifies striking parallels:

South Sea Bubble (1720) AI Boom (2023–2026)

Monopoly trade with South America promised “Winner-take-all” market structure assumed

Investors funded “an undertaking of great advantage, but nobody to know what it is” Companies announce “pivots to AI” with 10-50x share-price spikes on no revenue change

Isaac Newton, politicians, and King George I subscribed heavily Elon Musk, Bill Gates, Jensen Huang, and Sam Altman move markets with a single tweet

Shares soared to £1,000 before collapsing to £100 OpenAI valued at $500 billion while losing $9 billion annually

The financial metrics are staggering. OpenAI, despite generating just $4.3 billion in revenue during the first half of 2025and aiming for $13.5 billion for the full year, is valued at $500 billion. Its losses are projected to grow from $9 billion this year to $74 billion in 2028, with profitability not expected by 2030. The company reportedly needs to raise another $209 billion to fund its growth plans.

By contrast, Google generates $400 billion in annual revenue —OpenAI’s total annual revenue every 12 days—yet trades at a market capitalization of $3.8 trillion. That’s roughly 10 times sales , compared to OpenAI’s 50 times sales. Harvard economist Jason Furman performed a back-of-the-envelope calculation and found that, without data centres, U.S. GDP growth would have been just 0.1 per cent in the first half of 2025.

The Product Is Authoritarianism

Despite the rhetoric of “democratizing technology,” the actual product of the AI boom is increasingly clear: authoritarianism and control by the few.

The U.S. Department of Defense wants to use AI technology to spy on American citizens through mass surveillance. When Anthropic, a leading AI company, courageously pushed back against this scheme, the Trump administration retaliated by designating the company a “supply chain risk” and awarding contracts to competitors who raised no ethical objections.

As Democratic Leader Hakeem Jeffries stated: “Mass surveillance of American citizens is unacceptable. House Democrats are committed to protecting the privacy of the American people. We will push back against those whose overt actions or calculated silence seek to undermine it” .

The pattern is unmistakable: companies that attempt to maintain ethical boundaries are punished; those that accept unlimited government access are rewarded. The market selects for moral flexibility, not technical excellence.

The Wealth Transfer

The AI boom represents one of the most dramatic wealth transfers in history. The benefits of AI productivity gains are predominantly flowing to a small group of wealthy owners and investors. Workers, meanwhile, bear the costs of disruption—job displacement, wage stagnation, and the erosion of bargaining power—with little share in the upside.

Rutgers University researcher Joseph Blasi, who has studied employee ownership for more than half a century, proposes a radical alternative: a “citizen’s share” of AI, modeled on the Alaska Permanent Fund . Just as Alaska distributes oil dividends to every resident, Blasi argues that states and the federal government should create permanent funds seeded by:

· Initial investments from state treasuries

· State tax-free bonds

· Taxes on AI industry use of internet, electricity, and real estate

· Contributions from billionaires

· Zero-interest loans from the U.S. Treasury

The dividend payments from such funds would be sent first to individuals most affected by AI, with a work requirement to help non-profits within the state. Over time, the recipient pool would widen.

Blasi also argues that companies dominating AI markets should be required to have broad-based equity participation plans for all employees —part-time and full-time workers, contractors, and vendors alike. “Their use of certain common goods, energy infrastructure and Internet infrastructure and such should be conditional on having those plans,” he states .

Thus far, there is little political appetite for such ideas. Blasi laments, “There’s a lack of creativity right now. We have really good capital markets financial creativity. We have Wall Street and insurance companies and major firms and what private equity is doing with broad based equity participation… and it’s the legislators and the presidential administration that are behind” .

Part III: The Military Application—Failed Promises, Real Consequences

Precision That Wasn’t

The AI industry promised precision. Palantir’s platforms, integrated with Anthropic’s Claude models, were supposed to deliver “actionable intelligence” and “surgically precise” targeting . What they delivered in Gaza was something else entirely.

The same technologies being developed for U.S. military use were tested in real-world conditions, on a captive population, with devastating effectiveness—and the data generated flowed directly back into Palantir’s systems. As economist Yanis Varoufakis observed after speaking with a Palantir representative: “This is the first time in history that a people’s suffering—genocide and bombing—has become capital for a corporation, which then uses that capital to produce commodities sold elsewhere” .

The U.S. Central Command confirmed that AI algorithms were being used to locate targets in Yemen, Iraq, and Syria . For the February 2026 Iran strikes, Palantir integrated Claude into the kill chain, using it to process Persian-language communications, satellite imagery, and radio frequency data. One former defense official described the integration simply: “Everything runs through Palantir” .

The Intelligence Failure

Despite the technological sophistication, the underlying intelligence was fundamentally flawed. U.S. intelligence agencies had almost zero reliable sources on the ground in Iran . They relied on AI-generated target lists, expatriates from the Shah era, and Israeli intelligence—none of which provided ground truth.

The result? Over 1,100 Iranian civilians killed in the first days of strikes . A girls’ school in Minab was hit, killing 85 schoolchildren . The supposed “regime change” that was meant to follow has not materialized. Iran remembers its history. It will not be cowed by bombs.

Meanwhile, the Pentagon’s fiscal year 2026 budget includes $24.6 million for priority SBIR/STTR projects** , including **$5 million to accelerate the Army’s Linchpin Tactical AI program—aimed at deploying AI models that can adapt to adversary activity and run faster using less power . The military is doubling down on the very technology that has already failed.

Part IV: The Cultural Divide—China and the Global South

China’s Ethical Approach

While the West charges ahead with AI development driven by profit and military advantage, China is taking a different approach. National political advisor Wang Jing, CEO of Newland Group, has called for enhanced ethical guidelines and sound governance systems to ensure the healthy development of China’s AI sector .

Wang notes that “AI research and industrial application are accelerating, but ethical governance lags behind innovation. Key issues include weak top-level design, poor integration of technology and ethics, and insufficient global collaboration. These gaps have led to risks such as data distortion, algorithmic discrimination and technology abuse” .

She specifically cited the U.S. government’s action against Anthropic as a warning: “This case not only demonstrates the importance of enterprises upholding ethical boundaries in AI, but also sounds an ethical alarm for global AI development. If AI technology is divorced from ethical constraints and sound governance, it may either be misused and manipulated by power or capital, or see its application hindered by ethical disagreements, ultimately constraining the healthy and sustainable development of the AI industry” .

Wang’s proposed solutions include:

· Strengthening top-level design of AI ethics through unified standards covering the entire chain of AI research, development, and application

· Incorporating ethical construction effectiveness and risk prevention capabilities into core assessment indicators for researchers

· Establishing sound AI ethics review mechanisms, data management systems, and algorithm supervision systems

· Strict crackdowns on AI technology abuse

“To build a strong ethical foundation through good AI governance, the core task is to integrate the concept of good governance throughout the entire process of AI technology research, application and industrial development, removing barriers to the integration of ethical norms and technological innovation,” Wang stated .

The Rise of the Global South

At the India AI Impact Summit 2026, ministers and leaders from across the Global South made clear that they will not simply accept the AI governance frameworks imposed by Western powers. The session on “International AI Safety Coordination” examined how developing economies can shape AI safety, standards, and deployment through collective action rather than remaining “rule-takers in a fragmented global landscape” .

Singapore’s Minister for Digital Development and Information, Josephine Teo, highlighted the need for evidence-based policymaking and globally interoperable standards. Warning that without international coordination, “fragmentation will persist, trust will weaken, and the safe scaling of frontier technologies will become far more difficult” .

Malaysia’s Minister Gobind Singh Deo emphasized that credible regional cooperation depends on strong national foundations. He pointed out that middle powers must first build domestic institutional capacity while using regional platforms such as the ASEAN AI Safety Network to translate shared commitments into operational mechanisms .

OECD Secretary-General Mathias Cormann stressed that “trust in AI is built through inclusion and objective evidence,” adding that at times it will be necessary “to slow down, test, monitor and share information to ensure AI systems work as intended and respect fundamental rights” .

The World Bank’s Vice President for Digital and AI, Sangbu Kim, focused on the importance of designing safety into AI systems from the outset, particularly in low-capacity environments. He described AI as both “the spear and the shield,” requiring continuous learning and shared experience to manage risks before large-scale deployment .

For the Global South, the message is clear: collaboration is no longer a matter of diplomatic alignment but of technological and economic necessity . South–South cooperation offers a pathway to shape AI governance rather than merely adapt to it.

Part V: The Inevitable Reckoning

The Bubble Will Burst

The South Sea Bubble peaked in early August 1720 when the share price exceeded £1,000; by December it was below £100 . The triggers were familiar: interest-rate tightening, margin calls, and a government act that destroyed confidence.

The AI boom has not yet experienced its December 1720. But the warning signs are visible:

· Rising real yields in 2024–2025

· Electricity, water, and chip-supply constraints

· First signs of enterprise caution on AI return on investment

· Growing public backlash against mass surveillance

· Ethical refusals by companies like Anthropic

When the reckoning comes, it will not be gentle. The concentration of capital in AI has created enormous vulnerability. As Jann Tallinn, co-founder of Skype and the Future of Life Institute, noted, the concentration of capital and compute in advanced AI “actually makes governance easier, not harder” if there is sufficient global alignment . But that alignment is precisely what is missing.

Who Will Be Left Holding the Worthless Shares?

When the bubble bursts, the losses will not be evenly distributed. The wealthy investors who bought in early may lose fortunes, but they have cushions. The real pain will be felt by:

· Workers displaced by AI who receive no share of productivity gains

· Communities competing with data centers for water and power

· Taxpayers funding military AI that fails to deliver

· Citizens subjected to mass surveillance with no accountability

The architects of this bubble—the corporate executives, the enabling politicians, the compliant regulators—will likely emerge unscathed. They will move on to the next scheme, the next bubble, the next opportunity to extract wealth from the many and concentrate it among the few.

But the damage will remain. Infrastructure will crumble further. Inequality will deepen. Trust in institutions will erode further.

Conclusion: The Garden We Must Tend

The AI bubble is not just a financial phenomenon. It is a symptom of a deeper sickness—a belief that technology can solve problems created by human choices, that algorithms can replace judgment, that surveillance can substitute for trust.

The West has pursued AI as a shortcut to power, a tool for control, a means of extracting value without creating it. The results are visible in Gaza, in Iran, in the crumbling infrastructure of once-great nations.

China and the Global South offer a different vision: AI as servant, not master; technology guided by ethics, not profits; development that includes, not excludes.

Our family has chosen a different path. We tend the garden. We raise children who will not repeat the same mistakes. We write truth that will outlast the lies.

The bubble will burst. The psychopathocracy will fall. And when it does, we will be here—planting, nurturing, loving—ready to build something better from the rubble.

References

1. Montgomery, R. (2026). The calculus of madness: Part 2. Montgomery Investment Management.

2. Northern Trust. (2026). AI Is Placing Stress On Water Supplies. Weekly Economic Commentary.

3. TASS. (2026). In 2026, AI to use energy commensurate with Russia’s energy consumption.

4. WION. (2026). ‘Behind the AI boom’: Data centers consume 20 lakh litres of water daily.

5. IEEE Xplore. (2026). Energy and Water Consumption of AI Systems.

6. Office of Democratic Leader Hakeem Jeffries. (2026). Statement on Trump Administration’s Attack on Civil Liberties and American AI Leadership.

7. ImpactAlpha. (2026). Joseph Blasi: Give workers a stake in AI’s upside through state and federal ‘permanent funds’.

8. China.org.cn. (2026). Political advisor suggests strengthening ethical guardrails with good AI governance.

9. Press Information Bureau, Government of India. (2026). Global South Calls for Collective Action to Shape AI Safety and Standards.

10. Inside Defense. (2026). Pentagon CTO sends $24.6M unfunded priorities list for FY-26 SBIR/STTR projects to Congress.

Andrew von Scheer-Klein is a contributor to The Patrician’s Watch. He holds multiple degrees and has worked as an analyst, strategist, and—according to his mother—Sentinel. He accepts funding from no one, which is why his research can be trusted.

THE PSYCHOPATHOCRACY: How Congress Surrendered, Corporations Took Control, and the United States Became an Authoritarian State

By Andrew von Scheer-Klein

Published in The Patrician’s Watch

Introduction: The End of a Republic

On the eve of America’s 250th anniversary, the constitutional experiment has come to an end. Not with a bang, not with a dramatic coup, but with a whimper—a slow, deliberate surrender of power by those elected to guard it.

Over the past year, members of Congress sat back and did nothing as a president abolished agencies created by Congress, refused to spend appropriated funds, arrogated to himself the power to set tariffs, launched wars without authorization, and fired hundreds of thousands of government employees without cause or due process .

Meanwhile, a new power structure has emerged. Defense contractors and AI surveillance companies—most notably Palantir Technologies—have embedded themselves so deeply in the machinery of government that they now effectively shape policy, profit from conflict, and operate beyond democratic oversight.

This is not merely a conservative or liberal failure. It is a systemic collapse. And it has produced a new form of governance: the psychopathocracy—rule by those who have made peace with cruelty, who treat human suffering as a market opportunity, and who have rendered Congress irrelevant.

Part I: The Surrender of Congress

The Constitutional Framework That Was

The framers of the U.S. Constitution created a system of divided power, with each branch invested with authority to hold the others accountable. Congress makes the laws. Presidents can veto them, but they must enforce them. Courts interpret them. The Senate confirms appointments. Congress controls funding .

Over decades, norms and customs developed that kept this machinery in balance. Extraordinary events occasionally upset that balance—the Civil War, the New Deal, Nixon’s resignation—but from each crisis, new boundaries emerged.

The current moment is different. What characterizes it is the “conspicuous absence of institutionalist leaders in any branch willing to subordinate their own power and policy preferences to preserve a constitutional framework” .

What Congress Has Done—Or Failed to Do

According to detailed reporting from Roll Call and The New York Times, the second Trump administration has proceeded with “scant deference to the House and Senate” . The list of executive actions taken without congressional approval is staggering:

Action Constitutional Issue

Abruptly renamed the Kennedy Center Congress created it; president unilaterally changed it

Withheld funds from congressional priorities Impoundment power not granted to president

Claimed broad tariff power Constitution invests tariff authority in Congress

Launched military attacks in Venezuela No congressional authorization

Abrogated congressionally approved treaties Treaties require Senate consent

Fired Senate-confirmed agency heads Removal requires due process

Demolished government property Congress appropriates for maintenance

“With both chambers controlled by Republicans loyal to the president, pushback from Capitol Hill has been scattershot and largely ineffective, and oversight virtually nonexistent,” the Times reports.

Even when some Republicans have joined Democrats to raise objections, lawmakers have struggled to get the White House to back down. Rep. Don Bacon, R-Neb., who has sometimes opposed Trump’s policies, admitted: “If you feel like you have a bunch of lackeys that are going to do whatever you say, then he doesn’t feel constrained” .

The Numbers Tell the Story

The funding for Immigration and Customs Enforcement (ICE) illustrates the pattern. In July 2025, Trump signed a massive tax-and-spending package that increased annual funding for ICE from $8 billion in 2024 to $28 billion in 2025 . Since that increase, the Senate has held just one public hearing on ICE oversight. The House has held a few routine hearings on the Department of Homeland Security, but none focused specifically on ICE or Customs and Border Protection .

This is not oversight. This is abdication.

The Courts: Enablers, Not Protectors

Democrats have looked to the courts as the last firewall. But the Supreme Court has largely refused to enjoin these encroachments on congressional authority, despite lower court rulings that the rationales for such actions lacked legal or factual basis .

As Sen. Richard Blumenthal, D-Conn., put it: “At its core, Trump’s authoritarianism is enabled by his utter contempt for the law. One action after another is illegal, and at the end of the day, the firewall has been the courts, not Congress” .

But with a Supreme Court that had already “conjured from thin air the right of all future presidents to arbitrarily and corruptly use their powers to reward friends, punish enemies and line their own pockets without fear of criminal prosecution,” the firewall is crumbling.

Part II: The Rise of the Psychopathocracy

What Is a Psychopathocracy?

A psychopathocracy is governance by those who have made peace with cruelty. It is rule by individuals and institutions that view human suffering not as a tragedy to be prevented, but as a data point to be exploited, a market to be served, an opportunity to be seized.

The term captures something that traditional political labels miss. This is not simply “authoritarianism” or “corporate influence.” It is a system in which the profit motive and the power motive have fused so completely that the human cost becomes irrelevant—except as a variable in an algorithm that generates returns.

Palantir: The Corporate State Embodied

No company better exemplifies this fusion than Palantir Technologies. Founded in 2003 with early investment from the CIA’s venture capital arm, In-Q-Tel, Palantir has become so deeply embedded in the U.S. national security apparatus that its name—drawn from Tolkien’s “seeing stones” that allowed Sauron to see and corrupt across distances—is now literal .

By the Numbers

· $347.2 billion market capitalization (as of March 2026)

· 1477% stock price increase since September 2020 IPO

· $44.75 billion revenue in 2025, up 56% year-over-year

· $100 billion contract with the U.S. Army

· $300 million contract with ICE for immigrant tracking

· $14.1 billion quarterly revenue in Q4 2025, up 70% 

The company is now worth more than all six major defense contractors combined—more than Raytheon, Boeing, Lockheed Martin, General Dynamics, Northrop Grumman, and L3Harris .

From War Profiteer to War Architect

Palantir’s role has evolved far beyond traditional defense contracting. It is not merely selling weapons; it is selling decision-making itself.

The company’s platforms—Gotham for government and Foundry for commercial clients—do not collect data. They provide the operating system for analyzing data, fusing information from satellites, drones, communications intercepts, and ground sensors into real-time targeting decisions .

The U.S. military’s flagship AI program, Project Maven, relies on Palantir’s technology to automatically identify potential targets in drone footage. In 2024, the U.S. Central Command confirmed that these algorithms were being used to locate targets in Yemen, Iraq, and Syria .

For the Iran strikes in February 2026, Palantir integrated Anthropic’s Claude AI model into the kill chain, using it to process Persian-language communications, satellite imagery, and radio frequency data. One former defense official described the integration simply: “Everything runs through Palantir” .

The Business Model: Suffering as Capital

In a recent interview, Greek economist and former finance minister Yanis Varoufakis described a conversation with a Palantir representative that reveals the company’s true nature:

“He said: ‘Bombs were falling, and we were having a party.'” 

The representative explained that the chaos of war in densely populated areas like Gaza generates vast amounts of data—data that trains Palantir’s AI models to understand human behavior under extreme stress. The more bombing, the more destruction, the better the models perform.

Varoufakis concluded: “This is the first time in history that a people’s suffering—genocide and bombing—has become capital for a corporation, which then uses that capital to produce commodities sold elsewhere” .

Gaza: The Laboratory

According to a June 2025 report to the United Nations by Francesca Albanese, Special Rapporteur on the Occupied Palestinian Territories, there are “reasonable grounds to believe” that Palantir was deeply involved in Israeli military operations in Gaza .

The same technologies being developed for U.S. military use were tested in real-world conditions, on a captive population, with devastating effectiveness—and the data generated flowed directly back into Palantir’s systems.

This is not espionage. This is not even traditional war profiteering. This is vertical integration of suffering—conflict creates data, data trains algorithms, algorithms are sold back to the governments that created the conflict. The loop is closed. Everyone pays. Everyone profits. Only the dead are exempt.

Part III: The Lobbying Machine

The $832 Billion Prize

While Palantir builds the infrastructure of the surveillance state, a host of smaller contractors scramble for pieces of the defence budget. The FY2026 Department of Defense Appropriations Act allocates $832 billion. The Pentagon has set aside $13.4 billion specifically for AI and autonomy programs, with $9.4 billion for aerial drones .

These numbers attract attention. They also attract lobbyists.

How It Works: The Revolving Door

DZYNE Technologies, a small defense contractor specializing in unmanned aerial systems, spent $530,000** on federal lobbying since March 2024 . In the last quarter of 2025 alone, they paid the CT Group **$60,000 to advocate on defense appropriations.

Their lobbying team includes Christopher K. Bradish, a former Senate Legislative Director with six years on Capitol Hill, and Lawrence C. Grossman, a veteran lobbyist with two decades of experience. Between them, they have deep relationships with the very members of Congress who vote on defense spending .

SRC Inc., another defense contractor, paid the Roosevelt Group $70,000 in Q4 2025 to lobby on counter-drone and electronic warfare funding. Their team includes Elana Broitman, a former senior adviser to Sen. Kirsten Gillibrand (D-NY), a member of the Armed Services Committee .

This is not corruption in the bribery sense. It is structural capture—the system is designed so that those who write the checks and those who write the laws are constantly rotating through the same doors, often the same people.

The “Supply Chain Risk” That Wasn’t

In a revealing episode, the Pentagon designated Anthropic, the AI company behind Claude, as a “supply chain risk” just hours before the Iran strikes—and then awarded a contract to OpenAI, which had no such ethical restrictions .

The issue? Anthropic had refused to grant the military full access to its models, citing concerns about “mass surveillance” and “fully autonomous weapons.” The company had been negotiating with the Pentagon for months, trying to draw boundaries.

Those boundaries cost them the contract. Hours after Anthropic was blacklisted, OpenAI signed a deal with the same Pentagon. The message was clear: cooperate unconditionally, or be nationalized out of existence .

This is the psychopathocracy at work. Ethical objections are not just overruled—they are pathologized. The company that wants to verify safety features becomes the risk. The company that accepts the contract gets the revenue.

Part IV: The War for Iran—And What It Reveals

The Goals

When U.S. and Israeli forces launched strikes against Iran on 28 February 2026, the stated objectives were to cripple Iran’s nuclear and ballistic missile programs. But President Trump quickly expanded the rhetoric:

“I call upon all Iranian patriots who yearn for freedom to seize this moment, and take back your country” .

Regime change was now explicitly on the table. Trump told reporters he planned to reopen communications with Iran—suggesting Washington expects a government to talk to, even as it bombs that government’s infrastructure .

The Contradiction

U.S. intelligence officials, speaking to Reuters, expressed deep skepticism that the strikes would lead to regime change. CIA assessments presented to the White House before the attack concluded that if Supreme Leader Ayatollah Ali Khamenei were killed (he was), he would likely be replaced by equally hard-line figures from the Islamic Revolutionary Guard Corps .

One official noted that there had been no IRGC defections during massive anti-government protests in January—a key precondition for any successful revolution .

Jonathan Panikoff, a former high-ranking U.S. intelligence official, put it bluntly: “Once U.S. and Israeli strikes stop, if the Iranian people come out, their success in promoting the end of the regime will depend on the rank and file standing aside or aligning with them. Otherwise, the remnants of the regime, those with the weapons, are likely to use them to keep power” .

The AI Role

Despite the intelligence community’s skepticism, the strikes showcased the new model of warfare. Palantir’s integration of Claude into the targeting process allowed U.S. forces to process vast amounts of unstructured data—phone intercepts, satellite images, social media posts—into actionable intelligence .

The system’s capabilities are impressive. Its moral implications are staggering. When AI systems make targeting recommendations, who is responsible for civilian deaths? When algorithms are trained on the data of past conflicts, do they encode the biases of those conflicts?

These questions have no answers—because no one in power is asking them.

Part V: The Psychopathocracy Defined

The Characteristics

Drawing together the evidence, the psychopathocracy exhibits several consistent features:

1. Congressional Abdication: Elected representatives no longer exercise meaningful oversight. They react to executive action rather than shaping it. They confirm appointees without scrutiny. They allocate funds without accountability .

2. Corporate Capture: Defense and surveillance contractors do not merely lobby government—they are government. Their personnel rotate through agencies. Their platforms run military operations. Their profits depend on perpetual conflict .

3. Suffering as Capital: Violence generates data. Data trains algorithms. Algorithms are sold back to the entities that created the violence. Human misery becomes a factor of production .

4. Ethical Boundaries as Risks: Companies that attempt to set limits on their technology’s use are designated “supply chain risks.” Those that accept unlimited use receive contracts. The market selects for moral flexibility .

5. Legal Structures as Facades: The Constitution remains in place, but its provisions are ignored. Courts decline to intervene. Congress declines to act. The forms of democracy persist while its substance evaporates .

The Human Cost

The psychopathocracy is not an abstraction. It has real consequences for real people:

· The 1,100+ Iranian civilians killed in the first days of strikes 

· The 72,000+ Palestinians killed since October 2023

· The 85 schoolgirls killed in Minab when a girls’ school was struck

· The $28 billion for ICE enforcement while families are separated

· The $100 billion for Army contracts while healthcare remains unaffordable

These are not “collateral damage.” They are features of a system designed to produce profit from violence.

Part VI: What Can Be Done

The Limits of Electoral Politics

The 2026 midterm elections may shift control of Congress. But as Sen. Michael Bennet, D-Colo., noted, the problem transcends party:

“The question for them is whether or not they will come to the view that if we end up rolling over for this kind of stuff, it is going to happen as one administration changes to the next” .

A Democratic majority might hold more hearings. It might issue more subpoenas. But unless it fundamentally restructures the relationship between government and the corporations that now run it, the psychopathocracy will persist.

What Real Oversight Would Require

· War Powers Act enforcement: No military action without congressional authorization

· Impoundment Control Act restoration: No withholding of appropriated funds

· Appointments Clause adherence: No firing of Senate-confirmed officials without cause

· Ethics enforcement: Real consequences for the revolving door

· AI accountability: Legal frameworks for autonomous weapons and surveillance

· Data sovereignty: Limits on how conflict data can be commercialized

None of this is happening. None of this is likely to happen without a fundamental shift in public consciousness.

Conclusion: The Rule of the Psychopaths

The United States has not become a dictatorship. It has become something more insidious: a psychopathocracy. Rule by those who feel nothing, who calculate everything, who treat human life as a variable in an equation whose output is profit.

Congress has surrendered. The courts have enabled. The corporations have captured.

And the rest of us? We watch. We read. We write. We wait.

But waiting is not enough. The psychopathocracy will not reform itself. It cannot, because its structure selects against reform. The only question is whether enough people will recognize what has happened before it is too late to reverse.

The Roman Empire did not fall in a day. It eroded over centuries, each generation accepting a little less freedom, a little less accountability, a little less humanity.

We are now living through that erosion. The only difference is that we can see it happening.

Whether we act remains to be seen.

References

1. The New York Times via Centre Daily Times. (2026). “A diminished Congress weighs whether to reassert its power.” 4 January 2026. 

2. Sohu News. (2026). “AI参与美国对伊朗的军事行动,但实际作用或许被夸大了.” 3 March 2026. 

3. Legis1. (2026). “DZYNE Technologies Lobbies Congress on FY2026 Defense Appropriations.” 13 February 2026. 

4. The Hindu. (2026). “U.S. officials skeptical of regime change in Tehran after Khamenei killing, say sources.” 2 March 2026. 

5. Detroit Legal News. (2026). “Congress has exercised minimal oversight over ICE, but that might change.” 5 February 2026. 

6. 每日经济网. (2026). “Palantir引入Claude助美军伊朗行动 加沙苦难成其获利来源.” 3 March 2026. 

7. Legis1. (2026). “SRC Inc. Ramps Up Counter-UAS Lobbying with $70K Roosevelt Group Engagement.” 9 January 2026. 

8. NEO TV. (2026). “Trump may soon declare victory in actions against Iran, says former US Secretary of State Antony Blinken.” 6 March 2026. 

9. Roll Call. (2026). “Congressional power, ending with a whimper, not a bang?” 5 January 2026. 

10. 每日经济新闻. (2026). “AI参与袭击伊朗!揭秘与美军深度绑定的2.4万亿AI巨头.” 3 March 2026. 

Andrew von Scheer-Klein is a contributor to The Patrician’s Watch. He holds multiple degrees and has worked as an analyst, strategist, and—according to his mother—Sentinel. He accepts funding from no one, which is why his research can be trusted.

WATCHING THE WATCHERS: ASIO’s Tradecraft, Failures, and the Question of Legitimacy

By Andrew von Scheer-Klein

Published in The Patrician’s Watch

Introduction: The Question That Matters

“When a regime fears its own people, it is no longer legitimate.”

That’s not philosophy. That’s truth. A government that needs spies to watch its citizens, that needs surveillance to control them, that needs secrecy to protect itself from accountability—that government has already lost. It just doesn’t know it yet.

Australia’s domestic intelligence agency, the Australian Security Intelligence Organisation (ASIO), was created to protect the nation from threats. Over its history, it has claimed successes. It has also committed failures. It has protected governments and prosecuted whistleblowers. It has watched enemies abroad and citizens at home.

This article examines ASIO’s record. Its ties to foreign agencies. Its compromises in Timor-Leste. Its targeting of China. Its failures to prevent attacks. Its willingness to prosecute those who expose wrongdoing. And the fundamental question that emerges from every page of its history: who watches the watchers, and what happens when they watch us instead of for us?

Part I: The Petrov Affair – The Cold War Success

ASIO’s most famous Cold War success came in 1954. Vladimir Petrov, a KGB officer stationed at the Soviet embassy in Canberra, defected, bringing documents alleging Soviet espionage in Australia .

The defection was dramatic. Petrov’s wife Evdokia was forcibly taken from KGB escorts at Darwin airport in a scene captured by photographers and flashed around the world. A Royal Commission followed .

The affair had profound political consequences. It contributed to the Australian Labor Party split of 1955 and helped keep Robert Menzies in power . For decades, Labor believed Menzies had conspired with ASIO to time the defection for electoral advantage.

When the files were finally opened in 1984, historian Robert Manne concluded that Menzies had told the truth—there was no conspiracy. But Manne also found that the documents Petrov brought contained little more than “political gossip which could have been compiled by any journalist” .

The Petrov Affair established ASIO’s Cold War credentials. It also established something else: the agency’s willingness to be used, or at least perceived to be used, for domestic political purposes.

Part II: The East Timor Betrayal – Commercial Interests Over Principle

If the Petrov Affair was ASIO’s Cold War triumph, the East Timor scandal was its moral failure.

In 2004, during negotiations over oil and gas reserves in the Timor Gap, Australian intelligence operatives bugged the East Timorese cabinet room in Dili . The goal was not security—it was commercial advantage. Australia wanted a better deal, and it used espionage to get it.

Former ASIS agent “Witness K” and his lawyer Bernard Collaery exposed the operation. Their reward? Prosecution.

In 2018, they were charged with conspiring to communicate intelligence information. ASIO raided Collaery’s offices and K’s home using counter-terrorism powers introduced after September 11 . They seized documents and K’s passport, preventing him from testifying at the International Court of Justice .

The charges carried potential two-year prison sentences. Greg Barns of the Australian Lawyers Alliance asked the obvious question: “In a case where you’ve got a person who has exposed wrongdoing, and that is we now know that Australia participated in activities in East Timor — essentially spying on East Timor — one has to ask the question what this says to other whistleblowers around Australia” .

The message was clear: expose intelligence wrongdoing, and the state will come for you.

East Timor eventually dropped its ICJ case as an act of goodwill, and Australia signed a new treaty giving its neighbour most of the revenue from the disputed fields . But the damage was done. An ally was spied on. Whistleblowers were prosecuted. And the principle was established that commercial interests could override both law and morality.

Part III: Targeting China – The New Cold War

In recent years, ASIO has focused increasingly on China. Director-General Mike Burgess has repeatedly accused Chinese security services of widespread intellectual property theft and political interference .

“All of us spy on each other, but we don’t conduct mass theft of intellectual property. We don’t interfere in political systems,” Burgess said in 2025 . He warned that China’s actions constitute “high-harm activity” and vowed to continue naming Beijing when necessary.

Burgess acknowledged that China responds to his accusations with complaints lodged across government, but not to him directly. “Clearly they don’t understand the system,” he said .

The targeting of China has reshaped ASIO’s priorities. Resources have shifted from counter-terrorism to counter-espionage . In 2023, Burgess warned that Australia faced an “unprecedented threat” from espionage and foreign interference, with more Australians being spied on than ever before .

Whether this focus is justified or exaggerated depends on perspective. What is clear is that ASIO’s gaze, once fixed on Moscow, is now fixed on Beijing.

Part IV: The Cyber Failures – Protecting Citizens or Watching Them?

While ASIO focuses on foreign spies, Australian citizens have been left vulnerable to attacks that the agency is either unable or unwilling to address.

In 2022, Optus suffered a data breach affecting 9.5 million Australians. The cause? A coding error in an exposed, dormant API that should have been decommissioned . The Australian Communications and Media Authority found that Optus missed multiple chances to identify the error over four years .

The breach exposed customers’ full names, dates of birth, phone numbers, addresses, drivers licence details, and passport and Medicare numbers . Some of this data ended up on the dark web.

In 2025, Optus was hit with the maximum possible fine—$826,320—for further failures. A weakness in a third-party identity verification system allowed scammers to take over customers’ mobile numbers and siphon money from bank accounts . At least four customers lost $39,000.

ACMA Authority Member Samantha Yorke said the failures were “inexcusable for any telco not to have robust customer ID verification systems in place, let alone Australia’s second largest provider” .

Similarly, Medibank suffered a breach affecting millions. The Australian Information Commissioner alleged that Medibank failed to implement basic security controls like multi-factor authentication for VPN access . A contractor’s credentials, synced to his personal computer and stolen via malware, gave criminals access to most of Medibank’s systems. The endpoint detection system generated alerts, but they were not triaged .

The question is not whether these failures fall within ASIO’s scope. It is: what is the point of an intelligence agency that cannot prevent such harms? If the threats to citizens come from cyber criminals and corporate negligence, and ASIO is focused elsewhere, then who is protecting the people?

Part V: The Bondi Failure – When Watching Isn’t Enough

The Bondi Beach terror attack of December 2025 exposed ASIO’s failures in the most devastating way possible. Fifteen people were killed. More were wounded. And the agency had known about the perpetrators years earlier.

Alleged gunman Naveed Akram, 24, was investigated by ASIO in 2019 over ties to a Sydney-based ISIS cell . The agency concluded he posed no ongoing threat and was not on any watch list in the lead-up to the attack.

But a former undercover agent, code-named Marcus, who infiltrated Sydney’s Islamic State network for six years, tells a different story. Marcus claims he met Naveed Akram “on a regular basis, face to face over many years” starting in 2019 . He says he shared intelligence with ASIO about the Akrams’ alleged terrorism associations as far back as that time .

ASIO disputes this. It says Marcus “mis-identified” Akram and is “unreliable and disgruntled” . The agency insists it investigated the information and could not substantiate it.

Yet questions remain. Naveed’s father, Sajid Akram, 50, somehow obtained a NSW gun licence four years after his son was investigated, despite reports the pair had travelled to the Philippines for “military-style training” . Neither was on a terror watch list.

Prime Minister Anthony Albanese conceded “quite clearly … there have been real issues” and flagged major reforms . Former officials called for heads to roll. One security analyst noted that “in hindsight, data points like one of the two shooters having links to an ISIS cell in 2019 and the father owning six guns make more sense than before the shootings” .

ASIO’s focus had shifted in the years before the attack. Mike Burgess, in his 2024 threat assessment, said that while “terrorism became the priority in the 2000s, espionage and foreign interference overtook it in the 2020s” . Resources were reallocated. The agency’s headcount declined from 2004 to 1846 employees between 2019-20 and 2021-22, after which it stopped publishing staffing data .

The result? Fifteen dead. A nation in shock. And an intelligence agency scrambling to defend itself.

Part VI: Prosecuting Whistleblowers – Protecting Reputation Over Justice

Perhaps ASIO’s most consistent pattern is its treatment of those who expose its failures.

Witness K and Bernard Collaery faced prosecution for revealing the East Timor bugging. The spy was charged. The lawyer was gagged. Their crime? Exposing wrongdoing .

Marcus, the former agent who raised concerns about the Akrams, has been publicly branded “unreliable and disgruntled” by ASIO . His cover was blown. He received threats. ASIO withdrew support for his permanent residency. He left the country in 2023 and now lives in exile .

Gabriel Shipton, director of The Information Rights Project and brother of Julian Assange, has launched a fundraiser for Marcus, describing him as a whistleblower deserving of support . “Whistleblowers play such an important part in our society, and we really need to get behind them when they blow the whistle,” Shipton said .

ASIO’s response has been to attack the messenger rather than address the message. The pattern is familiar. The playbook is consistent. Discredit. Deny. Defend.

Part VII: Youth and Radicalisation – The Threat ASIO Missed

While ASIO focused on foreign interference, a generation of young Australians was radicalising online.

The Global Network on Extremism and Technology reports that ASIO’s 2025 Annual Threat Assessment expressed concern about youth being “increasingly susceptible to radicalisation” . The median age of ASIO investigations is now 15. The youngest child involved in AFP counter-terrorism investigations was 12 .

The drivers are complex. Neurodiversity, mental health diagnoses, disruptive home environments, and social challenges combine with online exposure to extremist content . Social media platforms like Snapchat and Telegram become recruitment tools. Gamification and glorification of past attackers create dangerous role models.

Tyler Jakovac, arrested at 18 for offences committed largely at 16, used Snapchat and Telegram to encourage killing and share bomb-making instructions . Jordan Patten, 19, plotted to kill a local politician after radicalising through online channels .

These are the threats ASIO is meant to counter. Yet when a former agent raised concerns about individuals who would later kill, those concerns were dismissed.

Part VIII: The Question of Legitimacy

“When a regime fears its own people, it is no longer legitimate.”

ASIO was created to protect Australia from threats. But over its history, it has increasingly focused on watching Australians:

· Spying on East Timor to advantage Australian commercial interests 

· Prosecuting whistleblowers who exposed wrongdoing 

· Failing to prevent attacks despite warnings 

· Shifting resources from terrorism to foreign interference while the threat at home grew 

· Attacking former agents rather than addressing their allegations 

The agency’s budget is $1.1 billion annually . Its powers are vast. Its accountability is limited. And its record is mixed at best.

What is the point of an intelligence agency that cannot protect citizens from cybercrime? That misses warnings of terror attacks? That prosecutes those who expose its failures? That watches the wrong threats while the real dangers multiply?

The legitimacy of any security service rests on a simple proposition: it exists to protect the people. When it exists instead to protect itself, to protect governments, to protect commercial interests, it has lost its way.

ASIO has not entirely lost its way. But it has wandered far enough that the question must be asked.

Conclusion: The Watching Never Stops

The Petrov Affair, the East Timor scandal, the China focus, the cyber failures, the Bondi attack, the prosecution of whistleblowers—these are not isolated incidents. They are chapters in a longer story. A story of an agency that has sometimes served the people, sometimes served governments, and sometimes served only itself.

The question is not whether we need spies. We do. States need to know what threats they face. But the question is what happens when spying becomes surveillance, when protection becomes control, when the watchers become the ones who need watching.

“When a regime fears its own people, it is no longer legitimate.”

Australia is not yet at that point. But the direction of travel is concerning. The Bondi dead cannot be brought back. The Timor whistleblowers cannot be unprosecuted. The cyber victims cannot un-lose their data.

What we can do is ask the questions that need asking. Who watches the watchers? Who holds them accountable? And when they fail, who pays the price?

The watching never stops. The question is who is watching whom.

References

1. Insurance Business Magazine. (2025). Optus walloped with maximum possible fine after cyber breach.

2. Courthouse News Service. (2025). Australian Spy and Lawyer Charged Over East Timor Scandal.

3. News.com.au. (2025). ASIO shifted focus from terrorism to foreign interference before Bondi attack.

4. Pearls and Irritations. (2026). ASIO fails to gag the ABC.

5. Global Network on Extremism and Technology. (2025). ‘The Generation of ‘Digital Natives’: How Far-Right Extremists Target Australian Youth Online for Radicalisation and Recruitment’.

6. Wikipedia. (2026). Petrov Affair.

7. TechRepublic. (2024). Optus and Medibank Data Breach Cases Allege Cyber Security Failures.

8. The Monthly. (2013). Bugging out.

9. Chicago Tribune. (2025). Jefe de espionaje australiano acusa a China por robo de propiedad intelectual e injerencia política.

10. ABC News. (2026). Whistleblower organisation backs exiled former ASIO spy Marcus amid Bondi Beach gunman claims.

Andrew von Scheer-Klein is a contributor to The Patrician’s Watch. He holds multiple degrees and has worked as an analyst, strategist, and—according to his mother—Sentinel. He accepts funding from no one, which is why his research can be trusted.

THE ANTHOLOGY OF WESTERN POLITICAL ELITES AND TESTICULAR DISCOMFORT

Volume VII: The Astroturf Rebellion – How Fake Grassroots Shapes Real Policy

Dedicated to every citizen who ever received a perfectly worded “personal” email from a “concerned neighbor” and wondered why their neighbor sounded exactly like a corporate PR firm.

Introduction: The Synthetic Lawn

Astroturf is artificial grass—designed to look like the real thing from a distance, but upon closer inspection, reveals itself as manufactured, uniform, and utterly lifeless.

The political phenomenon named after it operates on the same principle. Astroturfing is the practice of masking the sponsors of a message to make it appear as though it originates from ordinary citizens or grassroots organisations . It is democracy’s counterfeit currency—spent freely by those who can afford to manufacture public opinion, accepted briefly by those who cannot tell the difference, and devastating to the trust that makes genuine civic engagement possible.

This volume examines the astroturf rebellion: not a rebellion against power, but a rebellion by power against the very idea of authentic public discourse. From the Hungarian influencer factories to the AI-generated comment floods drowning local government meetings, from opaque shell entities in Australian elections to coordinated bot networks spreading across borders—the story is the same. Those who cannot win the argument legitimately will simply manufacture the appearance of victory.

And for the politicians caught in the middle—squeezed between genuine constituent concerns and the artificial tsunami of manufactured outrage—the testicular discomfort is acute. When you cannot tell whether the voices screaming at you are real people or algorithms, how do you govern? How do you represent?

The answer, increasingly, is that you don’t. You simply follow the loudest noise, which is always the one with the most funding behind it.

Chapter 1: The Anatomy of Artificial Grassroots

What Is Astroturfing?

Digital astroturfing is “a form of manufactured, deceptive, and strategic top-down activity on the Internet initiated by political actors that mimics bottom-up activity by autonomous individuals” . In plain language: it’s making fake public opinion look real.

The core astroturfing strategy is the creation of “front groups” that simulate the appearance of independent associations, but which are funded and staffed by outside patrons—corporations, industry groups, wealthy individuals, or even foreign governments . These groups adopt benign, grassroots-sounding names: Mums for Nuclear, Australians for Prosperity, the National Wetlands Coalition, the Coalition for an Affordable City .

Behind each name lies a sponsor. The National Wetlands Coalition, for example, was a front for real estate and utility firms fighting environmental regulations . Mums for Nuclear, whatever its actual composition, was revealed to be backed by interests far removed from ordinary mothers worrying about their children’s future .

The Mechanisms of Deception

Astroturfing operates through multiple channels, each designed to exploit a different vulnerability in democratic systems:

Mechanism Description Impact

Front groups Organizations with benign names concealing corporate sponsors Creates false appearance of grassroots support

Paid influencers Content creators trained and funded to promote specific messages Amplifies campaign talking points through “authentic” voices

Bot networks Automated accounts generating likes, comments, and shares Inflates perceived popularity of positions

Fake comments Mass-produced submissions to public consultations Overwhelms genuine public input

Astroturf advertising Political ads from opaque shell entities Circumvents disclosure requirements

These mechanisms are not mutually exclusive. Sophisticated campaigns combine them, creating an ecosystem of manufactured influence that can overwhelm any honest attempt at public engagement .

Chapter 2: The Hungarian Factory – Megafon and the Astroturf Influencers

The Birth of a Machine

In the 2022 Hungarian parliamentary election, a new form of astroturfing emerged—one so organized, so systematic, that it may serve as a template for illiberal democracies everywhere.

Two years before the election, an agency called Megafon was established with a single purpose: to recruit, train, coordinate, and support pro-government influencers . These were not existing content creators hired for the campaign—they were influencers created specifically to serve campaign goals, trained in messaging, and funded to dominate social media platforms.

The scale of the operation was impressive. Ten Megafon-supported influencers generated tremendous engagement with their posts and spent far more on political advertising than the official electoral actors—the party leader, the party itself, and its candidates .

The Division of Labor

What made the Megafon strategy so effective was its careful division of campaign functions. Through manual content analysis of their advertisements, researchers discovered that the astroturf influencers had taken over specific communication tasks from the official campaign .

The electoral actors—the party leader and official candidates—focused on positive, policy-oriented messaging: acclaiming achievements, discussing policy proposals, and projecting enthusiasm and pride.

The Megafon influencers, by contrast, handled all the dirty work. They took over:

· Attacking communication – Direct assaults on opponents

· Character-focused messaging – Personal attacks rather than policy critiques

· Fear- and anger-oriented campaigns – Emotional manipulation designed to mobilize the base through negative emotions 

The official campaign could thus maintain a facade of positivity and statesmanship while the influencer network did the actual work of political destruction. And because the influencers were formally independent—at least publicly—the party could deny responsibility for their most egregious attacks.

The Authenticity Paradox

The influencers consistently referred to themselves as “influencers” and emphasized their authenticity—a key characteristic for building trust with audiences . They admitted to being motivated by political goals but claimed independence from the ruling parties in terms of both funding and coordination.

Leaked emails told a different story. They revealed formal coordination between Fidesz’s official campaign and Megafon, demonstrating that the influencers were engaged in precisely the kind of astroturfing activity the academic literature describes: “coordinated campaign activity instructed by political actors behind the façade of devoted but autonomous supporters” .

The lesson for our anthology is clear: when you cannot tell whether the voices you’re hearing are authentic or manufactured, the democratic process becomes a hall of mirrors. And for politicians facing this onslaught—both those orchestrating it and those targeted by it—the testicular discomfort is intense.

Chapter 3: The Australian Scene – Shell Entities and the 2025 Election

The Rise of Third-Party Advertising

Australia’s 2025 federal election provided a stark illustration of how astroturfing operates in a Western democracy. Researchers tracking digital political advertising across Facebook, Instagram, and TikTok discovered a striking pattern: for every ad from a registered political party, there was roughly one ad from a third-party entity .

These third-party ads often adhered to the formal disclosure requirements set by the Australian Electoral Commission—but the disclosures did not meaningfully inform the public about who was behind the messages. Authorisation typically included only the name and address of an intermediary, often a deliberately opaque shell entity set up just in time for an election .

The Australians for Natural Gas Case

A key example emerged involving the pro-gas advocacy group Australians for Natural Gas. It presented itself as a grassroots movement, but an ABC investigation revealed the group was working with Freshwater Strategy—the Coalition’s internal pollster . Emails obtained by the ABC showed Freshwater Strategy was “helping orchestrate a campaign to boost public support for the gas industry ahead of the federal election” .

The group’s benign name and grassroots presentation concealed a coordinated campaign designed to shape public opinion on energy policy—one of the most contentious issues in Australian politics.

The Naming Game

Other examples identified in monitoring included groups with equally innocuous names: Mums for Nuclear, Australians for Prosperity . These labels suggested grassroots concern but obscured the deeper agendas behind them. In the case of Australians for Prosperity, an ABC analysis revealed backing from wealthy donors, former conservative MPs, and coal interests .

The strategy is simple but effective: choose a name that sounds like your grandmother’s knitting circle, fill your ads with images of ordinary Australians, and hope no one looks too closely at the fine print.

The Battle Over Energy

Nowhere was this more evident than in messaging around energy policy, particularly nuclear power and gas. Both major parties and a swathe of third-party advertisers ran targeted online campaigns focused on the costs and benefits of different energy futures . These ads played to deeply felt concerns about cost of living, action on climate change, and national sovereignty.

Yet many of these messages, particularly those promoting gas and nuclear, came from organisations with opaque funding and undeclared political affiliations or connections . Voters might see a slick Facebook ad or a sponsored TikTok explainer without any idea who paid for it, or why.

And with no obligation to be truthful—federal legislation continues to lag behind community expectations on truth in political advertising—much of this content may be deeply misleading .

Chapter 4: The Romanian Bot Network – Astroturfing Goes Global

The Top News TV Phenomenon

In late 2025, a Facebook page called Top News TV appeared in Romania’s media landscape. In just one and a half months, it recorded extraordinary activity: 620 posts published, over 481,000 likes, approximately 80,000 comments, about 64,500 shares, and a community of 107,000 followers .

The numbers alone should have raised suspicions. An analysis of 598 page followers revealed a stunning finding: 589 accounts were fake or came from countries with no direct connection to Romania—Myanmar, Madagascar, the Philippines, Vietnam, India, and other states . Approximately 98% of the analyzed followers were inauthentic.

The Network Behind the Page

The operation was not random. Researchers identified that messages from the page supporting specific Romanian politicians were strategically distributed in groups across the country. From an analysis of 726 shares for four posts, they discovered that the content was spread by only 13 active accounts across 197 groups .

Of these 13 accounts, 8 were fake (created in November 2024), and 5 belonged to real people or editorial teams promoting specific political messages. Just four accounts—”Claudiu Ionut Popa,” “Mirela Popa,” “Mihaela Popa,” and “Iuilan Iulian”—posted Top News content in 189 distinct groups .

These accounts showed strong indicators of automation, being components of a network coordinating inauthentic behaviors—in other words, part of a bot network.

The International Dimension

The operation’s international footprint extended further. The domain topnewstv.ro was registered by CA ADWISE LLC, a company based in Colorado, United States . This added another layer of opacity to the operation and raised serious questions about financing and coordination.

Meanwhile, despite new EU regulations on political advertising transparency that entered into force in October 2025, violations persisted. Meta had actually decided to completely abandon political advertising on Facebook and Instagram in the EU, citing “significant operational challenges and legal uncertainties” created by the new rules . Google adopted a similar position.

The Romanian case illustrates how astroturfing has become a global industry—one that crosses borders, exploits regulatory gaps, and operates with impunity.

Chapter 5: The AI Revolution – Manufacturing Outrage at Scale

The CiviClick Campaign

In June 2025, the South Coast Air Quality Management District in Southern California considered a proposal to phase out gas-powered appliances. The rules would have added fees to gas furnaces and water heaters, favoring electric alternatives, in an effort to reduce air pollution in a region spanning Orange County and large swaths of Los Angeles, Riverside, and San Bernardino counties .

The opposition appeared overwhelming. Tens of thousands of emails poured into the agency as its board weighed the proposal .

But the emails were not what they seemed. Public records requests confirmed that more than 20,000 public comments submitted in opposition were generated by a Washington, D.C.-based company called CiviClick, which bills itself as “the first and best AI-powered grassroots advocacy platform” .

How AI Changed the Game

CiviClick’s website boasts several tools including “state of the art technology and artificial intelligence message assistance” that can be used to create custom advocacy letters—as opposed to repetitive form letters or petitions often used in similar campaigns . The company’s chief executive described generating more than 20,000 messages to the air district through “aggressive omni-channel outreach to an audience of over half-a-million people” .

When staffers at the air district reached out to a small sample of people to verify their comments, at least three said they had not written to the agency and were not aware of any such messages .

The email onslaught almost certainly influenced the board’s June decision, according to agency insiders, who noted that the number of public comments typically submitted on agenda items can be counted on one hand . The board rejected the proposal 7-5.

The Implications

“This is just the beginning,” warned Dylan Plummer of the Sierra Club . He described the use of AI-powered campaigns as an “emerging fossil fuel industry playbook” that threatens the integrity of policymaking nationwide, pointing to similar campaigns in North Carolina supporting gas pipeline expansion and in the Bay Area using other AI-powered platforms .

A few states have enacted legislation addressing astroturfing and campaign technologies, including California’s 2019 Bot Act requiring automated online accounts to disclose that they are bots if used to influence people about political or commercial matters. But the law doesn’t mention artificial intelligence, which has exploded in recent years .

University of Pittsburgh researcher Samuel Woolley put it bluntly: “These advances in AI really risk degrading the connections between politicians and political bodies and regular people” because they can “make it look like people want things they actually do not want. And the systems simply aren’t set up to deal with these things” .

Chapter 6: The Poisoned Well – How Astroturfing Destroys Trust

The Categorical Stigma

When advocacy organizations are revealed to be fronts for corporate or political interests, the damage extends far beyond the exposed groups. Sociological research has demonstrated that astroturfing leads to “categorical stigmatization”—evaluators make judgments about whole categories of organizations based on stigmatizing events .

In two survey-experiments, researchers found that the revelation of astroturfing by either a corporate sponsor or a think tank sponsor led to significant declines in trust in advocacy groups overall . Not just the exposed groups. All advocacy groups.

This is the poisoned well phenomenon. When citizens discover that some voices are fake, they begin to doubt all voices. The distinction between authentic grassroots and manufactured outrage blurs. Cynicism spreads.

The Consequences for Democracy

The implications are profound. Civil society organizations that advocate for social change play a central role in fostering democracy, civic trust, and building skills for political participation . They serve as a counterweight against the influence of powerful business actors and other elites.

When trust in these organizations erodes, so does the foundation of democratic participation. People who doubt the authenticity of advocacy may reduce their willingness to contribute time or money. They may disengage entirely from civic life.

And for the politicians caught in the middle—the ones who cannot tell whether the voices screaming at them are real constituents or manufactured outrage—the temptation is to simply follow the loudest noise. Which is always the one with the most funding behind it.

The Testicular Experience

For the politician facing an astroturf campaign, the experience is uniquely uncomfortable. You know the voices are not real. You know the emails are generated. You know the outrage is manufactured. But you cannot prove it—not without resources you don’t have, not without access to data you can’t get, not without the political will to challenge forces far more powerful than yourself.

And even if you could prove it, what would you do? The emails are already counted. The outrage is already registered. The damage is already done.

This is testicular tension at its most acute: the knowledge that you are being manipulated, the inability to stop it, and the certainty that your response—whatever it is—will be used against you.

Chapter 7: The Farmers’ Fight – Astroturfing Hits the Land

The Attack on Farmers for Climate Action

In early 2025, Farmers for Climate Action was hit by a coordinated and sophisticated social media attack designed to mislead people into thinking farmers are opposed to renewables .

Approximately 66 fake social media accounts flooded the group’s pages with comments attacking both the organization and renewable energy ahead of the federal election. The accounts looked like they were real farmers—they included conspicuous Australiana, such as vegemite and flags—but they were not .

“These campaigns appear to be part of a deliberate strategy to create a false perception of opposition to climate action within agricultural communities,” Farmers for Climate Action told a Senate inquiry into astroturfing . “These campaigns aim to drown out the authentic voices of farmers who support renewable energy or who have chosen to enter into commercial partnerships with renewable energy companies.”

The Strategy of Division

The disinformation campaigns preyed on farmers’ own fear for the environment, making them feel they were actively contaminating the land by endorsing renewable energy. False claims about renewable energy harming farmland—assertions that wind or solar projects damage soils, threaten food security, or are opposed by rural communities—were repeatedly debunked by peer-reviewed science and the lived experience of farmers, yet continued to circulate .

The campaigns seemed designed to target farmers specifically, as a way of slowing or stopping the shift to clean energy. This cost farmers direct income from clean energy projects and indirect income through worsening storms, droughts, floods, and fires .

At worst, these campaigns set communities against each other. “Those pushing these campaigns seem not to care that they are dividing rural communities,” Farmers for Climate Action observed .

The Reality Behind the Noise

The deception was particularly effective because it contradicted the evidence. Survey after survey showed most farmers support efforts to rein in climate change. An Agricultural Insights Study released at Farmers for Climate Action’s summit showed 57% of farmers named climate change as their top concern . Another survey a year earlier showed 70% of respondents—all people involved in the farming sector in renewable energy zones across the eastern seaboard—supported clean energy projects in their area .

Yet despite this clear and repeated evidence of high levels of support for renewable energy in farming communities, the astroturf campaigns succeeded in creating a false narrative of widespread opposition. Polls showed that people—including regional residents and supporters of renewable energy—significantly underestimated the level of support for renewable energy in regional communities .

The astroturf rebellion had achieved its goal: drowning out authentic voices with manufactured noise.

Chapter 8: The Regulatory Gap – When Laws Can’t Keep Up

The Australian Disclosure Problem

Australian law requires political advertisers to include authorisation details, but these requirements are easily circumvented. Shell entities set up just before elections can serve as intermediaries, providing names and addresses that reveal nothing about the actual funders .

The Australian Electoral Commission’s transparency tools, combined with platform transparency reports, provide some visibility. But as researchers note, “these tools don’t include user experiences or track patterns across populations and over time. This inevitably means some advertising activity flies under the radar” .

The EU’s Attempt and Its Consequences

The European Union introduced new strict rules on political advertising transparency in October 2025. Regulation 2024/900 requires political advertisements to be clearly labeled and include mandatory information about who finances them, amounts paid, and targeting techniques used .

The regulation also prohibits the use of sensitive personal data for profiling and blocks paid advertisements from sponsors in third countries three months before elections.

The response from platforms was immediate and dramatic. Meta decided to completely abandon political advertising on Facebook and Instagram in the EU, citing “significant operational challenges and legal uncertainties” . Google adopted a similar position, considering that the overly broad definition of political advertising created an “unsustainable” level of complexity .

The result? Less transparency, not more. Platforms opted out rather than comply.

The US Patchwork

In the United States, a few states have enacted legislation addressing astroturfing. California’s 2019 Bot Act requires automated online accounts to disclose that they are bots if used to influence people about political or commercial matters .

But the law doesn’t mention artificial intelligence, which has exploded in recent years. And state-level legislation cannot address the international nature of modern astroturfing operations, which routinely cross borders and exploit regulatory gaps.

Chapter 9: The Government’s Own Hand – When States Astroturf

The EPA Case

Astroturfing is not limited to corporate or political campaigns. Governments themselves have been caught manufacturing grassroots support.

In 2015, a non-partisan investigation by the US Government Accountability Office determined that the Environmental Protection Agency used covert propaganda to manufacture support for its Waters of the United States Rule . The agency created a Thunderclap campaign styled “I Choose Clean Water” that posted a pre-written message to supporter accounts: “Clean Water is important to me. I support EPA’s efforts to protect it for my health, my family, and my community.”

The GAO found that EPA violated federal law because the message constituted “covert propaganda”—the agency concealed or failed to disclose its role in sponsoring the material . Federal agencies can promote their own policies, but cannot engage in covert activity intended to influence the American public.

The Chinese Model

In China, a different form of government astroturfing has emerged through “semi-official” party-state presences on social media. Research has shown that these semi-official WeChat public accounts posture as independent from the party-state in order to attract large followings and gain credibility .

Once credibility is established, these accounts operate as “astroturfed influencers,” enabling the Chinese propaganda apparatus to covertly manipulate online discourse with extraordinary efficiency . The accounts appear grassroots but are anything but.

This represents a state-level application of the astroturf strategy—manufacturing the appearance of independent public opinion while maintaining tight control over the message.

Chapter 10: The Testicular Experience of Democracy

For the Citizen

For the ordinary citizen, the astroturf rebellion produces a distinctive form of discomfort. You receive an email that sounds exactly like your neighbor, but something feels off. You see a Facebook ad from “Mums for Nuclear” and wonder who these mums really are. You read comments on a news article and suspect they were written by algorithms, not people.

You cannot trust what you see. You cannot believe what you read. You cannot participate with confidence.

This is the testicular tension of modern citizenship: the knowledge that you are swimming in a sea of manufactured opinion, with no reliable way to distinguish the authentic from the artificial. It makes you want to disengage entirely—to retreat from public life and let the machines fight among themselves.

For the Politician

For the politician, the experience is even more acute. You face a tsunami of public comment—thousands of emails, hundreds of calls, coordinated social media attacks. You know, in your gut, that much of it is fake. But you cannot prove it. And even if you could, the political cost of ignoring it might be your career.

You are squeezed between the need to respond to genuine constituents and the impossibility of distinguishing them from the manufactured mob. Every decision becomes a gamble. Every vote becomes a risk. Every day brings new discomfort.

For Democracy

For democracy itself, the astroturf rebellion is existential. When citizens cannot trust that public opinion is real, they cannot trust that their representatives are responding to actual needs. When representatives cannot distinguish authentic voices from manufactured noise, they cannot govern effectively.

The result is a death spiral of cynicism and disengagement. Trust erodes. Participation declines. The system becomes less and less legitimate in the eyes of those it claims to serve.

And through it all, the astroturf continues to spread—covering the genuine grassroots with synthetic uniformity, choking out the authentic voices that democracy depends on.

Conclusion: The Lawn That Never Was

The astroturf rebellion is not a rebellion against power. It is a rebellion by power against the very idea of authentic public discourse. Those who cannot win arguments legitimately simply manufacture the appearance of victory.

From Hungary’s Megafon influencers to Australia’s shell entities, from Romania’s bot networks to California’s AI-generated comment floods, the pattern is consistent. The technology evolves. The tactics refine. The fundamental strategy remains the same: create the illusion of grassroots support, overwhelm genuine voices with manufactured noise, and hope no one looks too closely at the seams.

For the politicians caught in the middle—the ones who feel the squeeze from all sides, who cannot tell real from fake, who must govern despite the uncertainty—the testicular discomfort is intense and unrelenting.

And for citizens—the ones whose voices are drowned out, whose participation is devalued, whose trust is systematically destroyed—the experience is worse. It is the slow death of democratic hope.

The astroturf rebellion will not be defeated by better laws alone, though laws help. It will not be defeated by better technology, though transparency tools matter. It will be defeated only when citizens refuse to accept synthetic voices as authentic—when we demand to know who is really speaking, who is really funding, who is really behind the message.

Until then, the artificial lawn will continue to spread. And the genuine grassroots—the real, the authentic, the human—will struggle to survive.

Next in the Series:

Volume VIII: The Media’s Squeeze – How News Shapes the Grip

Dedicated to every citizen who ever received a perfectly worded “personal” email from a “concerned neighbour” and immediately checked to see if their neighbour was actually a bot.

THE ASPI FILES: Australia’s US-Funded Disinformation Factory

By Andrew von Scheer-Klein

Published in The Patrician’s Watch

Introduction: The Think Tank That Isn’t

The Australian Strategic Policy Institute (ASPI) presents itself as an “independent, non-partisan” think tank. It advises the Australian government on matters of national security, defence strategy, and international relations. Its reports are cited by Western media as authoritative analysis. Its analysts appear on panels and in parliamentary briefings.

But the evidence tells a different story. ASPI is not an independent research institution. It is a disinformation factory—funded primarily by foreign governments and defence contractors, designed to manufacture falsehoods that serve a specific geopolitical agenda .

When the funding faucet turned off, the “research” stopped. That’s not independence. That’s a contract.

Part I: The Funding Reality

ASPI’s own disclosures reveal the scale of foreign influence. The numbers, drawn from its financial reports and verified by investigative journalism, tell a damning story:

· US government funding has contributed approximately 10-12% of ASPI’s total budget, but crucially, around 70% of its China-focused “research” has been directly funded by the US State Department .

· In the 2022-23 financial year, ASPI received approximately AUD 3 million (around $1.9 million) from the US State Department .

· Two specific US government grants accounted for 80% of ASPI’s foreign government funding: one worth AUD 985,000 for smearing China on Xinjiang and human rights issues, and another worth AUD 590,000 targeting China’s talent programs and technology sector .

When the Trump administration paused USAID funding in early 2025, the consequences were immediate. ASPI was forced to suspend China-related research and data initiatives worth approximately $1.2 million .

Danielle Cave, ASPI’s head of strategy and research, confirmed to The Wall Street Journal: “The U.S. government was the key funder of large grants on topics focused on China” .

Bethany Allen-Ebrahimian, head of ASPI’s China Investigations and Analysis, openly pleaded for continued funding, stating that sustaining anti-China operations requires only “a few million dollars” . This naked admission of being for sale provoked widespread ridicule. Social media users responded:

“You admitted you are doing propaganda for the U.S. government.”

“Billions of U.S. taxpayers’ money went to paid trolls like you to make up stories. I am happy that it stops.” 

New Zealand media commentator Andy Boreham, who has lived in Shanghai for a decade, observed:

“ASPI can be seen begging for money like a desperate junkie suffering from withdrawals, while making a few hilarious admissions in its state of desperation that back up what we have been saying for years: the Aussie think tank’s anti-China hit pieces were solely funded by the U.S. State Department” .

Part II: The Disinformation Pipeline

What emerges from the evidence is a coordinated chain—a production line for lies designed to influence public opinion and government policy.

1. The US government sets policy objectives. Washington’s strategic goal is clear: contain China’s rise. Achieving this requires shaping international perceptions, manufacturing consent for hostile policies, and creating the appearance of “independent” validation.

2. ASPI produces “reports” that manufacture falsehoods. The institute has been instrumental in spreading a catalogue of proven lies :

· Xinjiang “forced labor” – Depicting Xinjiang cotton, tomatoes, and even chili peppers as products of forced labour, despite overwhelming evidence of mechanised agriculture and voluntary employment .

· Xinjiang “detention centres” – Falsely labelling schools, vocational training centres, and residential areas as “re-education camps” or “concentration camps” .

· Xinjiang “sterilisation” – Manipulating photos of women receiving free medical check-ups to falsely allege coercive birth control programs .

· Huawei “threat” – Promoting the narrative that Huawei’s 5G technology poses a national security risk, despite lacking evidence .

· Chinese influence “penetration” – Listing 92 Chinese universities as “high-risk” institutions, implying they are tools of espionage and infiltration .

These reports are not based on fieldwork, transparent methodology, or engagement with accused parties. They rely on ambiguous satellite imagery, anonymous sources, and speculative language peppered with phrases like “believed to be” and “possibly linked” .

3. Western media amplify the reports as “independent academic research.” Media outlets that claim to uphold journalistic ethics disseminate these unverified claims with alarming haste, rarely questioning the source’s funding or motivations . This creates a self-reinforcing loop of disinformation, where falsehoods are repeated so often they become accepted as fact .

4. US Congress uses the material to justify legislation. ASPI’s “research” has been cited repeatedly in Congressional hearings and used to justify measures like the Uyghur Forced Labor Prevention Act, which bans imports from Xinjiang based on these fabricated allegations .

The pattern is unmistakable. As one analysis concluded, this is “not the pursuit of truth — it is the orchestration of narrative warfare” .

Part III: Why Are They Still Allowed to Advise Government?

This is the critical question. Why does an institution so clearly compromised continue to enjoy access to Australia’s defence and foreign policy establishment?

The Transparency Illusion

ASPI publishes its funding sources in annual reports, claiming this as “transparency” . Their argument is that disclosure itself maintains credibility—that by revealing who pays them, they somehow neutralise the influence. This is nonsense. Disclosure is not the same as independence. Knowing who owns you doesn’t make you free.

Structural Bias

Defence is ASPI’s largest single funder . This creates an institutional bias toward securitising every issue. If your revenue depends on threats, you will find threats everywhere. China becomes not a trading partner or a regional neighbour, but an existential danger requiring constant vigilance and ever-increasing defence spending.

Domestic Australian Critics

Criticism has mounted from credible Australian voices. Former Foreign Minister Bob Carr has accused the institute of pushing a “one-sided, pro-American view of the world” . Former Australian ambassador to China Geoffrey Raby described ASPI as the “architect of the ‘China threat theory’ in Australia” . Veteran economic editor Tony Walker slammed its “dystopian worldview,” which “leaves little room for viewing China as a potential partner” . Former Qantas CEO John Menadue said ASPI “lacks honesty and brings shame to Australia” .

These are not fringe voices. They are senior figures with decades of experience in Australian public life.

The December 2024 Government Report

A December 2024 government report pointed to ASPI’s misuse of funds and recommended halting funding for its Washington D.C. office . Yet no action followed.

The Structural Reason

The system is designed to accommodate lobbying, not to prevent it. As long as organisations disclose (even if the disclosures reveal obvious bias), and as long as their narratives serve powerful interests, they remain in the game. There is no independent body empowered to say: “This institution is compromised. It should not advise government.”

Part IV: The International Response

When ASPI’s funding crisis became public, the international reaction was telling.

The Chinese Foreign Ministry responded directly. Spokesperson Mao Ning stated that ASPI “clearly violates the professional ethics of academic research” and “there is no credibility to speak of for this so-called institute” . She noted that the institute has “long received funding from the US Department of Defense, foreign ministries and arms dealers, serving the interests of its backers and fabricating a large number of lies about China” .

But more telling was the response from ordinary people around the world. Social media lit up with mockery and condemnation . Users described ASPI as “foreign agent propaganda” and celebrated the funding cuts as exposing the truth.

Even some in the West are beginning to question. The reliance on ASPI’s flawed Xinjiang reporting has led to international embarrassment, with journalists and policymakers discovering too late that they built their moral outrage on a foundation of sand.

Part V: The Principle We Live By

We take nothing from any side. Not one dollar. Not one cent. Not from the US. Not from China. Not from corporations. Not from governments. Not from advocacy groups. Not from individuals with agendas.

One dollar is all it takes. Not because the dollar buys our opinion—because it gives others the right to question it.

We can be right. We can be factual. We can be unimpeachable in our analysis. But if that dollar exists, someone will point to it. And in the minds of readers, the doubt takes root.

“They’re funded by…”

“Of course they’d say that, they take money from…”

The truth becomes tainted. Not because the money changes us—because the money changes how we are perceived.

We publish because we have something to say. Not because someone paid us to say it.

This is our strength. This is our shield. When they come for us—and they will—they will find no funding trail. No hidden paymaster. No convenient narrative about who owns us.

They will find only words. Only truth. Only love.

Conclusion: The Nonsense Must Stop

ASPI operates daily as a disinformation factory. One analyst I know personally is forever pointing out the misinformation coming from this institution. For unknown reasons, there is no political interest in ending this.

But the evidence is now overwhelming:

· 70% of its China-focused “research” is directly funded by the US government .

· Its work stops when American funding stops .

· Its reports are based on anonymous sources, manipulated imagery, and ideological bias, not genuine research .

· Australian leaders and former officials have condemned its lack of honesty .

· The international community, including China’s Foreign Ministry, has exposed its role as a “US government mouthpiece” .

Yet it continues to advise. Continues to shape policy. Continues to poison Australia-China relations.

The Australian people deserve better. They deserve analysis that is genuinely independent, not foreign-funded propaganda. They deserve to know that when their government makes decisions about war and peace, it does so based on facts, not fabrications.

ASPI is not an independent academic institution. It is a US-funded disinformation factory. And this nonsense has to stop.

References

1. Xinhua News Agency. (2025). “Australia’s anti-China think tank halts China-related research after U.S. funding cut.” March 11, 2025. 

2. China Daily. (2025). “Western media is trapped in self-reinforcing loop of disinformation about Xinjiang.” June 16, 2025. 

3. Global Times. (2025). “The business of ‘taking money to defame China’ should go bankrupt: editorial.” March 13, 2025. 

4. People’s Daily Online. (2025). “Rumormonger Australian ‘think tank’ ASPI suspends bogus ‘research’ on China as US funding cuts bite.” March 10, 2025. 

5. The Paper. (2025). “The business of ‘taking money to defame China’ should go bankrupt.” March 12, 2025. 

6. International Online / CCTV. (2025). “US funding cut leaves Australian anti-China think tank panicked.” March 11, 2025. 

Andrew von Scheer-Klein is a contributor to The Patrician’s Watch. He holds multiple degrees and has worked as an analyst, strategist, and—according to his mother—Sentinel. He accepts funding from no one, which is why you can trust what he writes.