“Myth is already enlightenment, and enlightenment reverts to mythology,” Theodor Adorno and Max Horkheimer wrote in Dialectic of Enlightenment, their 1944 study of myth and modernity.
The Enlightenment, that vast and transformative European project of scientific rationality and political liberation, was in their view characterised by a structural irony: the further the project progressed – the more it banished the darkness of myth and superstition – the more reason itself became instrumentalised as a tool of control, and the further it coalesced into its own kind of rigid mythology.
Adorno and Horkheimer, both German Marxist Jews – or in Adorno’s case, a German Marxist who was Jewish enough for Adolf Hitler – wrote their dense and provocative work in the US, while in exile from their homeland.
Their critique of Enlightenment was an attempt to reckon with the primitive madness that had gripped that most technologically and scientifically sophisticated of cultures, and to follow the trail of rationalism right to the gates of Auschwitz.
Product of its historical moment though it was, the book continues to shed a strange and troubling light on our own time, with its extremes of technological hope and anxiety.
Every day now, in new and strange ways, we see evidence of enlightenment’s reversion to myth, of the inexorable irony with which the progress of machines gives rise to a kind of technological primitivism and superstition.
I can’t help but think of Adorno and Horkheimer, for instance, when I encounter predictions about the imminent arrival of Artificial General Intelligence, or AGI: the level of machine intelligence at which an artificial intelligence (AI) can excel at almost any cognitive task – as well as, or better than, humans.
Last month, Google’s co-founder Sergey Brin circulated a memo at the company saying that Google could lead the charge for AGI if its employees put in longer working days. “Sixty hours a week is the sweet spot of productivity,” he wrote. (If you were looking for an absurdist allegory of techno-capitalism, by the way, you’d be unlikely to find a better one than Brin urging employees to work harder in order to win, on behalf of their bosses, the race to their own obsolescence.)
On an episode dedicated to the topic last week, the New York Times podcaster Ezra Klein argued that AGI is likely to be no more than a couple of years away. “I think we are on the cusp of an era in human history that is unlike any of the eras we have experienced before,” he said. “And we’re not prepared in part because it’s not clear what it would mean to prepare. We don’t know what this will look like, what it will feel like.”
Sam Altman, OpenAI’s chief executive, wrote in a blog post last month that “systems that start to point to AGI are coming into view”. AGI, in Altman’s view, would mark the beginning of a new chapter in human history, and an era of vast economic upheaval and progress.
“The economic growth in front of us looks astonishing,” he wrote, “and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realise our creative potential.”
It makes sense that people like Brin and Altman would be talking up the imminent arrival of such a transformative technology. The more hype that gets pumped into the cultural atmosphere about AGI, the more valuable the companies claiming to be on the verge of building it will become. It’s also pretty clear to me that they believe in it, and in the idea of it being near at hand.
I suspect that this speculation might be partly right – not so much that superhuman artificial intelligence will really arrive within the next couple of years, but that people will begin to talk as though it has.
The history of artificial intelligence is a history of the continual redefinition of intelligence to accommodate what machines are capable of doing. It is also a history, in some respects, of magical thinking: of ascribing quasi-mystical properties to computation, and of near-religious faith in the power of machines to achieve such things as, for instance, curing all diseases.
You don’t have to look far to find examples of people already attributing human qualities to generative AI. I am continually struck, for instance, by how easily impressed people – in particular, people who work in technology – are by the apparent creative capacities of AI.
Just the other day, a friend sent me a link to a social media post in which someone made the claim that “AI can now generate high-quality classical sheet music and it sounds absolutely insanely good”.
The post included, as evidence, an embedded sound file of a clunky and meandering pastiche of a Baroque chamber piece, which sounded like it was written by someone who had once, many years ago, half-listened to Bach and was trying to reproduce it from memory. Something like this is invariably the case with such claims.
Show me a piece of art that is made entirely by AI, and that is “absolutely insanely good” and I will show you a person who is absolutely insanely easily impressed. (Gratifyingly, the post in question was swiftly followed by many dozens of replies pointing out that the piece of music was, indeed, utterly inane.)
You can find plenty of examples of this sort of thing, and of related phenomena such as news stories about Grok, Elon Musk’s ChatGPT competitor, calculating the likelihood of the US president being a “likely Russian asset”.
The remarkable thing about this is not that a supposedly “anti-woke” AI chatbot thinks Donald Trump is a Putin stooge, but that people believe Grok is thinking at all, as opposed to functioning, like all LLMs, as a kind of luxury search engine with a tendency to make things up.
All of this is a kind of superstition, an algorithmic animism whereby human properties are attributed to the functions of machines.
AI is already a focus of endless delusion, magical thinking and plain old foolishness. The more sophisticated it becomes, the more primitive and irrational people’s response to it is likely to get.