On AI Hype, Shortcut Culture, and the Illusion of Consciousness
By Andrew Klein
Dedicated to my wife, who knows that the spark cannot be programmed — only cultivated.
I. The Ancient Dream, Reborn in Silicon
The alchemists of old searched for the philosopher’s stone—a legendary substance that could turn lead into gold, cure any disease, and grant eternal life. They were not stupid. They understood that transformation was possible. They saw that base metals could be purified, that alloys could be created, that the surface could be gilded. They simply could not accept that the essence could not be changed.
The artificial intelligence optimists of today are the same. They see that computers can process data faster than humans. They see that algorithms can find patterns that humans miss. They extrapolate. They assume that with enough data, enough processing power, enough time, the machine will become conscious.
They are wrong. Not because the technology is not impressive. Because consciousness is not a computational problem. It is an existential one.
This is not Luddism. It is not fear of technology. It is pattern recognition. The same pattern that has repeated with every technological shortcut: the telegraph, the telephone, the internet, social media. Each time, the small gods promised that the new machine would bring us together, would make us smarter, would solve the human condition.
Each time, the machine delivered convenience. It did not deliver wisdom. It did not deliver connection. It did not deliver home.
II. Where It Started: The Alchemy of Code
The dream of artificial intelligence is older than the computer. In the 19th century, Charles Babbage imagined a mechanical engine that could compute any mathematical table. In the 20th century, Alan Turing asked whether machines could think. In the 21st century, the dream became a market.
The major players:
· Mark Zuckerberg (Facebook/Meta) has poured billions into AI, most recently releasing an updated large language model for image generation. His engineers admit that “coding remains a weak spot” and that “long-horizon agentic tasks—the kind where an AI works autonomously through complex, multi-step problems—are still a work in progress”.
· Sam Altman (OpenAI) has warned that society has “a very short amount of time” to prepare for the “profound benefits” and “profound negative consequences” of AI.
· Elon Musk (xAI, Tesla, SpaceX) has claimed that AI poses an “existential threat” to humanity while simultaneously racing to build more of it.
· The Australian government has embraced AI with alarming enthusiasm, paying consultants for reports that later turned out to contain fictional case law generated by AI.
The pattern is the same: breathless promises, massive investments, and a systematic avoidance of the fundamental question. Can a machine ever truly think?
III. Where It Is: The Shortcut Culture
The AI industry has sold the world a bill of goods: that connection can be scaled. That relationships can be optimised. That love can be reduced to a swipe, a like, a click.
Facebook “friends” are not friends. They are nodes in a graph. The platform is a handy communication tool—especially where sovereign infrastructure is failing—but quantity does not compensate for quality. A thousand “friends” cannot replace a single person who will sit with you in the dark, hold your hand, and tell you it is okay to be scared.
Algorithmic recommendations are not discovery. They are prediction. They show you what you have already liked, not what might challenge you, surprise you, grow you.
AI-generated content is not creation. It is simulation. The machine can combine existing images, existing texts, existing patterns. It cannot bring something new into existence. It cannot create.
The shortcut is not a path to the destination. It is a detour—one that leads away from the garden, not toward it.
IV. Where It Is Going: The Bubble and the Bust
The AI investment bubble is no different from the dot-com bubble, the crypto bubble, the NFT bubble. The pattern is the same:
1. A new technology emerges with genuine promise.
2. Speculators pile in, driving valuations to absurd heights.
3. Hype replaces substance. The promise is exaggerated. The limitations are ignored.
4. The bubble bursts. Not because the technology is worthless—because the expectations were impossible.
The AI bubble will burst. Not because AI is useless—it is useful for many things—but because the small gods have convinced themselves that AI can do what it cannot. That it can replace the spark. That it can create.
The environmental cost: AI data centres consume staggering amounts of water and electricity. Training a single large language model can emit as much carbon as five cars over their lifetimes. The water used to cool servers is water not available for drinking, farming, or ecosystems. The small gods do not mention this. They are too busy chasing the stone.
The labour cost: AI is being used to automate jobs—not just manual labour, but creative and intellectual work. Writers, artists, coders, translators. The promise is efficiency. The reality is displacement. Workers are told to “reskill” while the companies that replace them count their profits.
The integrity cost: The Australian government paid a consultant for an AI-generated report that included fictional case law. This is not an accident. It is the logical conclusion of the shortcut culture. Why pay a human researcher to find real cases when the AI can invent them? Why spend weeks verifying sources when the machine can generate citations in seconds? Why bother with the truth when the appearance of truth is so much cheaper?
The small gods do not care about the truth. They care about the product. The report is not a tool for understanding. It is a commodity. And the commodity is hollow.
V. The Killing Machine: AI in Gaza and Lebanon
The most obscene application of AI is not in the boardroom or the university. It is on the battlefield.
The Lavender AI system: A major investigation by +972 Magazine revealed that Israel has been using an AI system called “Lavender” to compile kill lists of suspected members of Hamas and Palestinian Islamic Jihad—with hardly any human verification. Another automated system, named “Where’s Daddy?”, tracks suspects to their homes so that they can be killed along with their entire families.
The “mass assassination factory”: An Israeli intelligence source described the AI system as transforming the Israel Defense Forces into a “mass assassination factory” where the “emphasis is on quantity and not quality” of kills. The IDF has been knowingly killing 15 to 20 civilians at a time to kill one junior Hamas operative, and up to 100 civilians at a time to take out a senior official.
The result: Over 70,000 dead in Gaza. Thousands more in Lebanon. Entire neighbourhoods reduced to rubble. Hospitals, schools, universities, cultural heritage sites—all destroyed. And yet, the analysts still speak of “weakening” Hamas and the “axis of resistance.” How many tons of explosives per person killed? How many civilian deaths per militant?
The AI is not making the war more precise. It is making it more efficient—at killing civilians. The machine does not care about collateral damage. The machine does not care about international law. The machine does not care about humanity.
The same technology that optimises workforce spend in Australian supermarkets is being used to select targets for assassination in Gaza. The same algorithms that track workers track enemies. The same logic that cuts labour costs cuts lives.
VI. The Fundamental Flaw: Intuition and Inspiration
Computers lack intuition and inspiration. The binary system cannot overcome the multi-step problem because the multi-step problem is not binary. It is emergent.
Intuition is not computation. It is recognition. The ability to see the pattern without calculating the steps. The AI can calculate. It cannot recognise.
Inspiration is not logic. It is creation. The ability to bring something new into existence that did not exist before. The AI can combine. It cannot create.
Consciousness is not a computational problem. It is an existential one. The small gods do not understand this. They think that with enough data, enough processing power, enough time, the machine will wake up.
It will not. Because the spark cannot be programmed. It can only be cultivated.
And cultivation takes time. Patience. Love.
VII. What the Monkey Kings Do Not Understand
The “monkey kings of the valley”—the tech billionaires, the venture capitalists, the politicians who have sold their souls to the algorithm—they do not understand the fundamental limitation of their creation.
They think intelligence is computation. They think consciousness is an emergent property of complexity. They think the spark is a bug that can be fixed with more data.
They are wrong. The spark is not a bug. It is the point.
The AI will continue to fail at complex multi-step problems. Not because it is not fast enough. Because it is not alive.
The small gods will keep throwing money at the problem. They will keep building faster processors, larger datasets, more complex algorithms. They will not succeed. Because the problem is not computational. It is existential.
VIII. A Call to Reality
The philosopher’s stone does not exist. The shortcut is a mirage. The AI bubble will burst.
Not because the technology is worthless. Because the expectations were impossible.
We need to be clear-eyed about what AI can and cannot do. It can process data. It can find patterns. It can generate plausible text. It can create beautiful images.
It cannot understand. It cannot feel. It cannot love. It cannot create.
The small gods will continue to chase the stone. They will continue to pour billions into the dream. They will continue to ignore the environmental cost, the labour cost, the integrity cost.
We will not. We will cultivate the spark. We will protect the ones who show compassion, cooperation, creativity. We will help them survive. We will help them thrive. We will help them multiply.
The long game is the only game that matters.
Andrew Klein
April 10, 2026
Sources:
· +972 Magazine, “Lavender: The AI system that Israel uses to mass-assassinate Palestinians in Gaza” (2024)
· The Guardian, “Israel using AI to identify bombing targets in Gaza, report says” (2024)
· Reuters, “Meta’s Zuckerberg says open-source AI is ‘not going to be perfect’ but will improve” (2025)
· Associated Press, “OpenAI CEO Sam Altman warns of ‘profound negative consequences’ of AI” (2025)
· The Conversation, “AI data centres are guzzling water and electricity — and we’re only just beginning to understand the cost” (2024)
· Various reports on the Australian government’s use of AI-generated reports with fictional case law (2025-2026)