Elon Musk sees humanity’s purpose as a facilitator of superintelligent AI. That should worry us

It’s hard to think of a more damning indictment of our time than the fact that a man who holds humanity itself in such contempt has been granted such unprecedented wealth, power and cultural influence

Elon Musk is not an intellectual but his ideas have a lot in common with Nick Land, an English philosopher who is seen as an intellectual progenitor of accelerationism – the notion that technology should be used to maximise the growth of capitalism. Photograph: Haiyun Jiang/The New York Times

During an interview published earlier this month on the website Vox, the American technologist and writer Jaron Lanier came out with one of the more unsettling remarks I’ve encountered recently. Lanier is one of the more interesting of Silicon Valley’s in-house intellectuals. He was an early developer of virtual reality technologies in the 1980s, and is often credited (and credits himself) with coining the term to describe that technology. He’s a genuine believer in the human possibilities and social benefits offered by information technology, but also a trenchant and vehement critic of the anti-human tendencies within the culture of Silicon Valley.

Lanier’s value as a public intellectual has always struck me as being somewhat limited by his tendency, common among Silicon Valley thinkers, to see social and political issues as primarily engineering problems. But he is generally worth reading and listening to as a tech industry insider whose critiques are motivated by both a love of technology and a deep liberal humanism – a love, that is, for technology as a human art form that is inseparable from a love of humanity itself.

The remark in the Vox interview that I found especially unsettling was one in which Lanier addressed the ascendancy of precisely the opposite tendency within the elite of Silicon Valley. He is, he says, constantly talking to people who believe that we need to put everything into developing superhuman artificial intelligence, recognise its status as a higher form of intelligence and being, and simply get out of its way.

“Just the other day,” he says, “I was at a lunch in Palo Alto and there were some young AI scientists there who were saying that they would never have a ‘bio baby’ because as soon as you have a ‘bio baby,’ you get the ‘mind virus’ of the [biological] world. And when you have the mind virus, you become committed to your human baby. But it’s much more important to be committed to the AI of the future. And so to have human babies is fundamentally unethical.”


In one sense, I should not find this anecdote especially unsettling. I have written fairly extensively about this particular constellation of ideas, and the tech industry milieu within which it exists. In writing my first book, which explored this anti-human veneration of machines, I met many people who advanced some or other variation of the ideas Lanier sketches here. But artificial intelligence, though it was certainly an article of faith among the people I spoke to, was at that point largely an abstract concept for society at large.

There was no real sense, in 2014 and 2015, that we would so quickly come to live in a world so defined and deformed by this technology. The ideas Lanier speaks about – that humanity is in its last decadent days, and that the role of technologists is to bring about the advent of the superintelligent machines that will succeed us – were at the time niche, considered fairly eccentric even within the context of radical Silicon Valley techno-optimism. As Lanier points out, this attitude is now a very common one in the circles in which he moves.

The idea, or perhaps more accurately ideology, is known as “effective accelerationism”; its adherents advocate for the rapid advancement of machine learning technology, unfettered by any kind of regulation or guardrails, in an effort to hasten the advent of AI superintelligence. The more utopian versions of this idea see such a superintelligence as a means toward the solution of all human problems – economic abundance for all, the curing of every known disease, solutions to climate change, and so on. In its darker inflections, it envisions such an intelligence as vastly superior to humanity, and destined to overthrow and replace us entirely.


As an idea, it’s most associated with the English philosopher Nick Land. As a professor at the University of Warwick in the 1990s, Land was a central figure of an influential circle of rogue academics who embraced the utopian ideas around early internet culture; he went Awol around the turn of the century and resurfaced, in Shanghai, as an enigmatic and fugitive thinker whose increasingly anti-human and anti-democratic writing advocated a kind of techno-fascism in which all human life would be subordinate to, and ultimately obliterated by, the supremacy of AI. “Nothing human,” as he memorably and chillingly put it, “makes it out of the near future.”

In a sense there is nothing very new about this kind of thinking: the fascists of the interwar years combined a great enthusiasm for new technology with a contempt for Enlightenment values of liberal democracy and humanism. The Italian Futurists fetishised machinery, speed and violence, glorifying war as “the only hygiene of the world”.

Land remains a niche figure, but he has his constituency, and his ideas have been influential in Silicon Valley. The disturbing ideology Lanier encountered at that Palo Alto lunch is clearly derived from his writing. And then there’s Elon Musk, who is himself unquestionably among the most influential people on the planet. Just a few weeks ago, Musk made the following statement on X, the social network that he owns and on which he is the most followed account: “It increasingly appears that humanity is a biological bootloader for digital superintelligence.”


A bootloader, to be clear, is a piece of code that initiates the start-up of a computer’s operating system when it’s powered on. In other words, humanity, in the view of the world’s richest and arguably most influential man, is important only as a necessary facilitator of superintelligent AI. I wouldn’t want to bet that Musk has read Land – my sense of it is that the extent of his reading is the cry-laughing emojis posted by sycophants under his bad jokes on X – but his invocation of the biological bootloader notion suggests the extent to which Land’s ideas have filtered through. And it’s hard to think of a more damning indictment of our time than the fact that people with such an openly anti-human worldview, who hold humanity itself in such contempt, have been granted such unprecedented wealth, power and cultural influence.