Fewer than three years ago, almost nobody outside of Silicon Valley, excepting perhaps science fiction enthusiasts, was talking about artificial intelligence or throwing the snappy short form, AI, into household conversations.
But then came ChatGPT, a chatbot quietly released for public online access by the San Francisco AI research company OpenAI in late November 2022. ChatGPT – GPT stands for Generative Pre-trained Transformer, the underlying architecture for the chatbot – was to be made available as a “low-key research preview”, and employees took bets on how many people might try it out in the coming days – maybe thousands? Possibly even tens of thousands?
They figured that, like OpenAI’s previous release in 2021, the visual art-generating AI called Dall-E (a play on the names of the surrealist artist Dalí and the Pixar film’s eponymous robot, Wall-E), it would get a swift blast of attention, then interest would wane.
To prepare, OpenAI’s infrastructure team decided that configuring the company’s servers to handle 100,000 simultaneous users would be more than sufficient, even on the most optimistic projections. Instead, the servers started to crash as waves of users spiked in country after country. People woke up, read about ChatGPT in their news feeds and rushed to try it out. Within just five days, ChatGPT had a million users; within two months, that number had swelled to 100 million.
No one in OpenAI “truly fathomed the societal phase shift they were about to unleash”, says Karen Hao in Empire of AI, her meticulously detailed profile of the company and its controversial leader Sam Altman. Hao, an accomplished journalist long on the AI beat, says that even now, company engineers are baffled at ChatGPT’s snap ascendancy.
But why should it be so inexplicable? While Dall-E also amazed, it was fundamentally a tool for making art. Although it could construct bizarre and beautiful things (while exploiting the work of actual artists it was trained on), it wasn’t chatty. ChatGPT, in thrilling contrast, hovered on the edge of embodying what people largely think a futuristic computer should be. You could converse with it, have it write an essay or code a piece of software, ask for advice, even joke with it, and it responded in an amiably conversational and, most of the time, usefully productive way.
Dall-E felt like a computer programme. ChatGPT teased the possibility of the kind of sentient, thoughtful artificial intelligence that we easily recognise, given that this presentation has been honed over decades of films, TV series and science fiction novels. We’ve been trained to expect it – and to create it. While ChatGPT is definitely not sentient, it astonished because it seemed as if it might be, and OpenAI has continued to ramp up the expectation that an AI model might soon be, if not fully sentient, then smarter than human. No surprise, really, that Hao writes that “ChatGPT catapulted OpenAI from a hot start-up well known within the tech industry into a household name overnight”.
As big as that moment was, there’s so much significant backstory for the “hot start-up” that the tale of the game-changing release of ChatGPT doesn’t materialise until a third of the way into Empire of AI.
With precision and insight, Hao documents the challenges and decisions faced and resolved – or often more crucially, not resolved – in the years before ChatGPT turned OpenAI into one of the most disturbingly powerful companies in the world. Then, she takes us up to the end of 2024, as valid concerns have further ballooned over OpenAI and Altman’s bossy and ruthless championing of a costly, risky, environmentally devastating and billionaire-enriching version of AI.
In this convincing telling, AI is evolving under the design and control of an exclusive and dangerous club to which very few belong, but for which many – especially the world’s poorest and most vulnerable – are materially exploited and capitalised upon. Hence, truly, the “empire” of AI.
OpenAI, which leads in this space, was founded in 2015 by Altman – who then ran the storied Valley start-up incubator Y Combinator – and by Elon Musk. Both (apparently) shared a deep concern that AI could prove an existential risk, but recognised it could also be a transformative, world-changing breakthrough for humanity (take your pick), and therefore should be developed cautiously and ethically within the framework of a non-profit company with a strong board. (This split between “doomers”, who see AI as an existential risk, and “boomers”, who think it so beneficial we should let development rip, still divides the AI community.)
Now that the world knows Altman and Musk quite a bit better, their heart-warming regard for humanity seems improbable, and so it’s turned out to be. Hao says that fissures appeared from the start between those in OpenAI prioritising safety and caution and those eager to develop and, eventually, commercialise products so powerful they perhaps heralded the pending arrival of AI that will outthink and outperform humans, called AGI or artificial general intelligence.
Altman increasingly chose the “move fast, break things” approach even as he withdrew OpenAI from outside scrutiny. Interestingly, several of OpenAI’s earliest and most problematic top-level hires were former employees of Stripe, the fintech firm founded by Ireland’s Collison brothers. Despite having such top industry people, OpenAI “struggled to find a coherent strategy” and “had no idea what it was doing”.
What it did decide to do was to travel down a particular AI development path that emphasised scale, using breathtakingly expensive chips and computing power and requiring huge water-cooled data centres. Costs soared, and OpenAI needed to raise billions in funding, a serious problem for a non-profit since investors want a commercial return.
Cue the restructuring of the company in 2019 into a bizarre two-part vehicle – a largely meaningless “capped-profit” arm alongside a non-profit side – and the need for a CEO, a job that went to Altman and not Musk. Microsoft came on board as a major partner too; Bill Gates was wowed by OpenAI’s latest AI model months before the release of ChatGPT.
As dramatic as the ChatGPT launch turned out to be, Hao makes the strategic choice to open the book with a zoom-in on OpenAI’s other big drama, the sudden firing in November 2023 of Altman by its tiny board of directors. The board said Altman had lied to them at times and was untrustworthy. After a number of twists and turns, Altman returned, the board departed, and OpenAI has since become increasingly defined as a profit-focused behemoth that has stumbled into numerous controversies while tirelessly pushing a version of AI development that maintains its staggeringly pricey leadership position.
This, then, is Hao’s framing device for looking at a company headed by an undoubtedly charismatic and gifted individual but one who has trailed controversy and whose documented non-transparency raises serious concerns. In tracing the company’s early history, Hao sets out its many conflicts and problems, and Altman’s willingness to drive development and growth in ways that veer far from its original ethical founding.
For example, at first OpenAI adhered to a principle of using only clean data for training its models – that is, vast data sets that exclude the viler pits of internet discussion, racism, conspiracy rabbit holes, pornography or child sexual abuse material (CSAM). But as OpenAI scaled up its models, it needed ever more data, any data, and rowed back, using what noted Irish-based cognitive scientist Abeba Birhane – referenced several times in the book – has exposed as “data swamps”. That’s even before you consider AI’s inaccuracies, “hallucinations” of made-up certainty, and data privacy and protection encroachments.
For a time, Hao veers away from a strict OpenAI pathway to draw on her strong past travel research and reporting to reveal how AI is built off appallingly cheap labour drawn from some of the poorest parts of the world, because AI isn’t all digital wizardry. It’s people being paid pennies in Kenya to identify objects in video or perform gruelling content moderation to remove CSAM. It’s gigantic, water use-intensive data centres built in poorer communities despite years-long droughts, and environmentally damaging mining and construction. It’s cultural loss, as data training sets valorise dominant languages and experiences.
In the face of these data colonialism realities, using an AI chatbot to answer a frivolous question – requiring 10 times the computing energy and resources of an old-style search – is increasingly grotesque.
Unfortunately, the book went to print before Hao could consider the groundbreaking impact of DeepSeek, the new Chinese AI model. Its lower cost, and its challenge to OpenAI and the massive-scale mantra, have rocked AI, its largely Valley-based development culture and global politics. It would have been fascinating to get her take. But never mind. Hao knits all her threads here into a persuasive argument that AI doesn’t have to be the Valley version of AI, and OpenAI’s way shouldn’t be the AI default, or perhaps, pursued at all.
The truth is, no one understands how AI works, or why, or what it might do, especially if it does reach AGI. Humanity has major decisions to make, and Empire of AI is convincing on why we should not allow companies such as OpenAI and Microsoft, or people such as Altman or Musk, to make those decisions for us, or without us.
Further reading
Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass by Mary L Gray and Siddharth Suri (Harper Business, 2019). What looks like technology – AI, web services – often only works due to the task-based, uncredited labour of an invisible, poorly paid, easily exploited global “ghost” workforce.
Supremacy: AI, ChatGPT and the Race that Changed the World by Parmy Olson (Macmillan Business, 2024). A different angle on the startling debut of OpenAI’s ChatGPT, with the focus here on the emerging race between Microsoft and Google to capitalise on generative AI and dominate the market.
The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil (Duckworth reissue, 2024). The hugely influential 2005 classic that predicts a coming “singularity” when humans will be powerfully enhanced by AI. Kurzweil also published a follow-up last year, The Singularity Is Nearer: When We Merge with AI.