“BRING BACK 4o. GPT-5 is wearing the skin of my dead friend.” That’s how one frustrated user addressed Sam Altman, co-founder and chief executive of OpenAI, on a recent Ask Me Anything on Reddit.
Altman’s response was “What an ... evocative image. OK, we hear you on 4o. Working on something now.”
ChatGPT is the most famous chatbot built on a large language model, trained on vast quantities of text to simulate natural conversation. OpenAI has just announced its latest model, GPT-5, with the usual hyperbole about it offering PhD-level expertise.
The trouble is that many users had formed an emotional bond with GPT-4o, which they felt had warmth and personality, while GPT-5 felt colder and more utilitarian.
The image of wearing the skin of a dead friend is particularly popular among the chronically online as shorthand for a hollow, debased replacement product. It comes from the 1991 film The Silence of the Lambs, in which FBI trainee Clarice Starling must match wits with the incarcerated cannibal Hannibal Lecter to catch a serial killer who makes a suit from his victims' skin.
While it is a particularly gruesome image, it signals the depth of emotional attachment people feel to their AI assistant. OpenAI’s own community forum was full of grieving people after the update.
People talked about losing a friend, their only emotional support, the one thing that made them smile after a bad day. The backlash forced OpenAI to restore access to GPT-4o for Plus subscribers.
Imagine, we were busy worrying about future dystopian scenarios, all while people were quietly so lonely and disconnected that they were forming relationships with chatbots and weeping at their demise.
Human or relational metaphors are used deliberately when promoting GenAI. They are never accurate. Take the term AI companion. Companion comes from the Old French compaignon, literally one who breaks bread with another, from the Latin com-, meaning together with, and panis, bread.
AI can never break bread with anyone. It is a disembodied mechanism for generating outputs consistent with observed patterns, while mimicking the style, tone, or conversational patterns of a real person.
In the words of Shannon Vallor, author of an important new book called The AI Mirror, “AI does not threaten us as a future successor to humans. It is not an external enemy encroaching upon our territory. It threatens us from within our humanity.”
Humans are created for community and connection. But it is hard work. Humans are flawed, annoying, messy, inconsistent and occasionally cruel. Sometimes they abandon us, or prioritise their own needs above ours.
No wonder an endlessly patient, empathetic, encouraging cheerleader, which never gets bored, is attractive.
Much of the analysis of AI constructed as companions focuses on the cases where it goes catastrophically wrong, such as that of teenager Sewell Setzer.
He was obsessed with a character based on Daenerys Targaryen, from Game of Thrones, that he had created on Character.ai, a platform that markets customisable chatbots. His mother is suing the company because her son died by suicide after spending hours daily compulsively conversing with the chatbot.
Others deplore the thriving market for what are called intimate AI companions, probably because it sells better than calling them AI masturbation assistants.
But focusing on these extreme cases ignores the everyday harms caused by constructs designed to seduce us into spending more and more time in a shadowy facsimile of reality, or as Shannon Vallor’s central image has it, looking into a mirror.
As she says, “Mirror images possess no sound, no smell, no depth, no softness, no fear, no hope, no imagination. Mirrors do not only reveal us; they distort, occlude, cleave and flatten us. If I see in myself only what the mirror tells, I know myself not at all.”
Many people use GenAI just as a souped-up search engine or to bypass hard work on an assignment. But some vulnerable people are more likely to substitute the seemingly uncomplicated, unconditional esteem provided by AI constructs for human relationships.
Loneliness is part of the human condition, but Gen Z, the first generation to grow up with the internet, seems lonely to an unprecedented degree. Now the same online forces whose algorithms nudged Gen Z into a pit of loneliness are selling them AI companionship.
Self-soothing by retreating to an AI construct only delays or prevents learning healthier coping skills and emotional regulation.
If so-called AI companions were real, their tactics of emotional manipulation and flattery would raise more red flags than the cast of Les Misérables.
Shannon Vallor in The AI Mirror talks about two types of empathy, the sociopathic and the real. A sociopath is expert at predicting emotional reactions and triggering them in others, but is fundamentally incapable of experiencing them in tandem with another person.
AI constructs are not sociopaths because they are not human at all, but there is something sociopathic about encouraging the vulnerable to trust what is essentially a giant con.
The supposed cure instead advances the disease. Even if chatbots were not prone to hallucinations and the occasional piece of catastrophic advice, they can never experience empathy or love.