The other day, I listened to a short and very interesting podcast. Its hosts, a chipper American male/female duo whose names I didn’t catch, discussed a recent newspaper column that used the idea of the so-called “simulation hypothesis” to consider a sense of alienation and absurdity in our increasingly mediated world. It was a pretty heady topic, but the hosts approached it with a jocular enthusiasm that made it listenable, if a little shallow. The podcast covered a lot of ground in its 16-minute run time: a recent Trump campaign event that devolved into a kind of listening party, with Trump standing on stage, and occasionally dancing, through a playlist of songs; the philosophy of Plato and Descartes; the columnist’s diagnosis of a “crisis of seriousness” in contemporary politics, exemplified by the Trump listening party, whereby the dire problems the world currently faces are being essentially ignored by political leaders.
The hosts seemed to me to have a pretty good rapport going – to the point where they frequently finished each other’s sentences:
Woman: To be fair to the author, you know, they’re not necessarily saying ...
Man: That they believe that we’re literally living in a simulation, created by, you know ...
Woman: ... superintelligent beings.
Man: They’re using the simulation hypothesis more as a metaphor.
Woman: Right, as a way to capture this pervasive sense of, uh, detachment and absurdity ...
If you’re a regular reader of this column – and I’ll extend you the courtesy of assuming you are – you will likely already have realised that the topic under discussion in this podcast is, in fact, my column in this paper last week. You may also have realised that the podcast is not really a podcast at all; that it is rather a simulacrum of one, produced by uploading the text of the column to Google’s new NotebookLM application, whose “Deep Dive” feature converts texts into plausibly snappy podcast discussions. You could, presumably, upload a pdf of The Irish Times to the thing, and have the whole shooting gallery – Martyn Turner cartoons, reports of murders and planning disputes, Leinster Schools Senior Cup rugby coverage, several servings of why Sinn Féin aren’t fit to govern, TV listings – delivered to you in the form of an interminable back-and-forth between two people who, despite not existing, are pretty intrigued by it all.
I’m often quite pessimistic about the tech world in general, and about artificial intelligence (AI) in particular, but I find this bit of software wizardry genuinely impressive. These are not Siri-style voices, reading without tonal variation from an obviously generated script. The “hosts” here talk in the manner of normal people, albeit American ones: they hesitate, stumble and cut across each other; their speech is liberally interspersed with phatic expressions, and with filler words such as “like”, “uh” and “so”.
I like to think I have a fairly highly developed instinct for when I’m looking at or reading a thing – an image, a text, a video – created by AI, but if I hadn’t generated this “podcast” myself, I suspect it would have taken me a few minutes of moderately attentive listening before I realised something – in this case, the absence of actual human beings – was clearly amiss. And one of the interesting and unsettling aspects of the current phase of generative AI is that this uncanny horizon is being continually pushed further and further out.
The idea here is that the app makes complicated information easier to take in. You upload a pdf of a dense academic paper, say, and a few minutes later you get a recording of two people breezily going through its finer points with each other. In principle, it’s fairly sound. But because it’s built on the same kind of large language model technology as ChatGPT, it’s similarly prone to errors, strange leaps in logic, and just plain bullshitting.
Earlier today – out of curiosity, and I suppose a certain amount of perversity – I uploaded the first 20 pages of In Search of Lost Time and clicked on the “Generate Deep Dive Conversation” button. Within minutes, I was listening to our friends – I’m going to call them Bill and Barbara here, for convenience – discussing Proust as though he were the writer of a pop-science book about memory and identity.
Bill: He’s basically saying we should all be a little more, well, like him, right? Observing our minds, trying to understand how they work.
Barbara: Yeah, I think so. And, uh, by doing that, we can really start to appreciate the power of memory, how it shapes our identity, our relationships, our understanding of the world. It’s like Proust is saying, hey, it’s okay to be a little introspective ...
Bill: ... a little more curious about what’s going on in our own heads.
Barbara: Right? And to appreciate how beautiful and how complex those internal landscapes can be. It’s about slowing down, paying attention to those sensory experiences ...
Etc, etc. It’s not good, exactly. But if you wanted, for some reason, to boil Proust’s writing down to the most banal possible insights about mindfulness, without having to read any of it yourself, you can just about see how it might be useful. (I’m willing to admit that, had this technology existed when I was an undergraduate, I might not have been above running a pdf of Beowulf through the thing and firing up a podcast about it en route to a seminar for which I hadn’t done the required reading.)
But usefulness is not really the point of this gadget, any more than it is of most generative AI tools. It works, largely, as entertainment, as a sort of automated parlour trick. As the tech journalist Max Read put it in a recent edition of his excellent newsletter Read Max, one of the common qualities of generative AI apps is that “their popular success is as much (and more often) about novelty and entertainment value than actual utility”. Another is that they are almost immediately used to create “slop” – the sort of low-quality content, made by nobody for nobody, with which online platforms have become increasingly overrun since the recent mass availability of generative AI. Earlier this month, Read pointed out that OpenAI co-founder Andrej Karpathy had generated a 10-episode “podcast”, called Histories of Mysteries, out of Wikipedia articles on historical mysteries, and uploaded it to Spotify.
And if you’re wondering, the answer is yes, of course I uploaded the column you’re currently reading to NotebookLM, to see what old Bill and Barbara made of it. Over to your hosts:
Barbara: Welcome back everybody for another Deep Dive. This time, we’re looking at, uh, AI ...
Bill: Oh yeah!
Barbara: But through a pretty unique lens.
Bill: Definitely.
Barbara: We’re going to be diving into a newspaper column ...
Bill: Okay ...
Barbara: Which discusses a podcast discussing the column itself.
Bill: Woah. Very meta!
Barbara: It is very meta, yeah. It’s fascinating.
To be honest, I don’t hate it. I am, I’ll admit, a little flattered. I’ll take my praise where I find it, even if I have to generate it myself.