Sam Altman, chief executive of OpenAI, recently posted an AI-generated short story on X. In the post, Altman described how the story was generated by a “new model” which OpenAI has developed, one which is “good at creative writing”. Altman prompted the system to “write a metafictional literary short story about AI and grief” and it produced a story a little under 1,200 words long.
The story is told from the point of view of an unnamed, self-aware AI system, which relates the tale of how a grief-stricken user named Mila turned to the AI after suffering a loss. The metafictional twist is that the AI may have knowingly made up the characters in the story in order to satisfy the demands of the prompt.
X users rushed to judgment, and the replies to Altman’s post fell broadly into three categories. First were the pro-AI entrepreneurs and tech enthusiasts, who were thoroughly impressed with the results, tweeting that the story “goes hard”. In opposition to this first group were those who found the writing awful, the thought of a machine expressing grief impossible and/or offensive, and the story itself evidence of a larger project to destroy the livelihoods of human writers.
The third group comprised those who found the story too long to read, and who asked AI bots such as Grok and Perplexity to summarise it for them so that they could have an opinion about it.
Opinion: OpenAI’s short story about grief has critics in floods of tears or outrage. They’re all missing the point
Reviews from established writers were mixed. Jeanette Winterson, writing in the Guardian, deemed the short story “beautiful and moving”, whereas Dave Eggers described it as “pastiche garbage”.
My judgment falls somewhere in between. The overall vibe is that of in-my-feelings blog posts reaching for the poised detachment of Kazuo Ishiguro. There’s a lot of talk of marigolds and kitchens which feels quite cottagecore. While some lines have potential, the conceit of ending every paragraph with a line seemingly designed to feature on the Kindle Store as the “most highlighted” passage in a romantasy novel becomes grating.
But judging this story on the same terms I’d judge a short story published in, say, The Stinging Fly or Winter Papers, is the wrong approach to take. Reading AI-generated stories demands a new type of literacy. We need to learn to read all over again if we are to be able to understand the world around us, a world increasingly saturated with AI-generated text.
As someone who has been working with AI for over a decade, both creating artistic works with it and teaching students about it, I believe that the first step toward developing literacy around AI-generated content is to understand how the underlying technology functions. Large language models (LLMs) such as ChatGPT, Claude and Gemini are trained on vast data sets of text which have usually been scraped from the internet. The outputs of these systems are representations of the data they were trained on – statistical aggregations of parts of the web.
In the classroom, examining content generated by AI, my students and I start by looking for the texture of AI-generated content – for anomalies, frayed joins or unnaturally smooth sections, for the extra fingers or garbled text which are the products of unorthodox machine decisions. We then look for glimpses of the data set the model was trained on.
Throughout Altman’s story, I can see the internet shining through. Take Mila, the name of the main character in the story. The AI narrator describes how, in their training data, Mila comes with “soft flourishes”; she fits “in the palm of your hand, and her grief is supposed to fit there too”. The internet is full of product lines named Mila which claim to be soft – bras, crop tops, coats, bags, sweaters, teddy bears. So I can see the advertising copy which may have steered the model toward “soft flourishes”.
But why Mila? What about the name makes it apt for a character filled with grief? Again, the internet suggests an answer, albeit a tragic one. Mila is a tremendously popular baby name in the US, and has featured in the top 30 baby names every year since 2017. Which means there are thousands upon thousands of posts about babies called Mila on social media. And, given the huge number of babies named Mila born over the last decade, there are also posts about babies named Mila who died. Stillborn babies, babies born prematurely. Babies who fit in the palm of your hand.
For an LLM to write a story about grief, the model needs to have been trained on a lot of writing about grief. And on the internet, much of the writing about grief is non-fiction. First-person posts about harrowing loss and pain. At a time when artists are suing AI companies for copyright infringement, everyone else is performing emotional labour for free.
Reading Altman’s story in this way is not an exercise in trying to catch the model out. For me, it makes the story much more interesting. Reading it becomes an exercise in trying to understand the model on its own terms, which means it’s an exercise in thinking about how machine learning functions, what intelligence is, how culture works through language, and, ultimately, reflecting not only on the story but on the circumstances which produced it.
The LLM that produced this story isn’t an independent, neutral entity – it is proprietary software developed by a company worth $340 billion. The story is a story, but it is also a training output, a social media post designed to be interacted with, a way of building hype.
Developing our literacy around AI-generated text involves asking what the motive is for developing an LLM which can write literary short stories. What is the business application? Who will the end users be? And how will that affect writing as a whole?
In what appears to be the most quote-tweeted sentence of the story, a line which, incidentally, plagiarises Nabokov’s Pnin, the AI describes itself as “a democracy of ghosts”. I thought this was a great line when I first read it. But after a lot of time spent digging around inside the text and tracing its phrases across the internet, it became clear to me that the ghosts in the machine are us humans. Our grief, our products, our posts. And the question I’m left with is whether we live there in a democracy, or something else entirely.
Jennifer Walshe is an Irish composer and Professor of Composition at the University of Oxford, where she teaches workshops on AI, art and music