
Irish composer Jennifer Walshe on AI music: ‘If you came up with the idea for I Glued My Balls to My Butthole Again, is that art?’

The Oxford professor of composition’s long use of artificial intelligence in her work makes her a good judge of its merits and its dangers


Long before the current media obsession with AI, and the free availability of song-generating programs such as Udio and Suno, Jennifer Walshe was experimenting with artificial intelligence. In 2018 the acclaimed composer improvised with an AI-generated version of herself in a project called Ultrachunk. In 2020 she had an AI system reimagine the early history of western music in A Late Anthology of Early Music. The same year she generated an AI Enya in a project called Ireland: A Dataset. Walshe, who is also professor of composition at the University of Oxford, recently wrote an excellent 8,000-word essay, 13 Ways of Looking at AI, for the Polish Unsound festival.

“I’ve been working in this space for about 10 years,” she says. “I had read so many of what I considered to be quite ill-considered, not-very-well-researched hot takes. I felt there was a lot more nuance in here than people are realising.” She laughs. Newspapers “don’t want people to have cognitive dissonance at the end of the op-ed”.

AI platforms need large data sets from which to generate their responses. Newer iterations, such as Suno, Udio and GPT-4o – which powers the ChatGPT chatbot developed by the OpenAI research organisation – get much of that material from the internet.

Walshe’s early AI projects worked from data sets she created herself. One of her first projects was called the Text Score Dataset. Text scores are a type of avant-garde musical score, first popularised by composers such as John Cage and Cornelius Cardew in the 1960s, that give instructions to performers in often poetic and ambiguous language rather than in musical notation. Walshe was working with earlier versions of the AI platforms that have become well known today.


Microsoft “let me have access to Microsoft Azure, as it was [called] at the time,” says Walshe. “I also contacted OpenAI, and I was on an early version of one of the GPTs ... I have an assistant called Ragnar [Árni Ólafsson], and I think his brain has been completely rewired through having to do all the data entry for that. Ragnar and I, when we were walking along the street, we’d see a text instruction [on a sign] and we’d go, ‘Oh, that should go on the data set.’ That project was launched in 2021 ... There would just be no point now, because we could just dump all the stuff into ChatGPT or Claude.”

Walshe comes from the Cagean tradition of embracing uncertainty in music, so AI was a natural fit. “When somebody like John Cage is using chance, he’s using it to try to jog himself out of his own stylistic and artistic ruts, to give himself some sort of fresh inspiration. And so I think myself and a lot of the artists that got interested in this early, we thought of this as a way to experiment and to try new things.”

What does she think of the publicly accessible platforms that produce pieces of music in any genre to order? “There would be all these hysterical articles where people are, like, ‘Oh my God, it’s over.’ People love saying ‘It’s all over.’ It’s almost like they hate artists ... Udio was the first one where I thought, ‘Oh, we’re fucked.’” She laughs.

Walshe still enjoys experimenting with them. She recently spent a bit of time creating different versions of Kurt Schwitters’s sound poem Ursonate with Udio.

In her essay she writes about how much of the publicly available AI work is enjoyable pastiche. “There’s all this Frank Sinatra AI, where a human has done a really clever jazz arrangement of Gangsta’s Paradise [by Coolio] and a human who has a really good voice has performed that vocal trying to approximate Frank Sinatra,” she says.

“And then they’ve taken a voice model that they’ve trained on Frank Sinatra’s voice and run that human performance through that voice model. It’s not like they just typed in, ‘Frank Sinatra singing Gangsta’s Paradise,’ and it just popped that out. There are levels of human ingenuity and skill involved.”

Walshe categorises this work as a type of fan fiction. “When somebody takes a Kendrick Lamar diss track and reworks it in the style of a 1970s funk track and puts it on YouTube, that’s a type of creativity, because it’s working with the fandom, making clever, fun decisions ... Gen X understand it because of remix culture and sampling and mash-ups ... It’s not something radically new. It’s one thing that people are doing with [the technology] that allows people to sort of reach out and connect with one another.”

She also enjoys a YouTube channel that presents AI-generated songs as historical artefacts. “There’s one called I Glued My Balls to My Butthole Again, and it’s done like a 1950s skiffle ... Then there’s a 1980s song that’s called It’s Time You Took a Shit on the Company’s Dime. That’s something that you can do very easily with Suno or Udio ... If you came up with the idea for I Glued My Balls to My Butthole Again, and you saw that through and played around with it, it probably [takes] 20 minutes to make ... Is that art? People will say that the art is having the idea.”

For some fans of AI music, the skilled elements of music that people learn over time as a craft are intrinsically elitist. “They’re trying to say there’s been structural inequality that has prevented people from making music and now, finally, these platforms are going to allow you to make a song called I Glued My Balls to My Butthole Again and that will be a true picture of human creativity.”

Walshe is unconvinced. The poorest communities have made music for millenniums, she says, because even though playing music requires work, the process brings joy. “If you really wanted to unleash the creativity in every human, you could take the [billions in] capital funding and simply buy children instruments. My grandparents were working-class people. They had a piano, which is a very sophisticated piece of musical technology. There wasn’t this feeling that music was inaccessible.” If somebody is “able to describe a track that they think they’d like to have churned out, I don’t think that that track being delivered to them is actually ‘unleashing their creativity’.”

She also notes that the prosaic “prompts” that users need to employ to order music from AI algorithms are very different from the way musicians or listeners have traditionally thought, spoken, felt and written about music. “It’s completely reductive ... You’re not describing how it felt to listen to it; you’re not describing what the music did to your body or what it means culturally.”

The AI version of creativity is also based on the notion that artists start with a fixed outcome in mind. This is at odds with how most artists work. Walshe prefers it when the system she is using breaks down and produces unexpected results. She thinks that as these AI systems “improve”, and so become more predictable, they grow less interesting artistically. “I’m a free improviser. I’m used to the idea that there’s a bunch of noodling for five minutes, and then something starts to emerge, and then you pick those threads apart. I have very different interests to a kid who’s just trying to make a cool techno track.”

In her project Ultrachunk, a collaboration with the artist Memo Akten, Walshe performed with a painstakingly created AI version of herself. The public could also interact with her avatar as part of an installation, which was an uncanny experience for her. “There was something really weird about watching strangers sock-puppet you,” she says. “And it’s you on a bunch of days where you didn’t wash your hair, or you were backstage just before [a] show ... and you have extra teeth and extra eyes and stuff.”

Walshe is very aware of the more negative uses people might find for such technology. “If you’re a kid that’s growing up now, and starting to leave a data trail about yourself, you’re going to have to contend with the fact that your schoolmates could decide to bully you by making fakes ... And there have already been cases of sexualised-image abuse in schools.”

The human image and voice feel intimately linked to identity. Walshe recalls when the singer Tom Waits sued the snack-food company Frito-Lay for using a soundalike in an advertisement. Most recently, Scarlett Johansson took issue with OpenAI’s use of a voice that sounded very like hers for its new ChatGPT assistant. The organisation’s chief executive, Sam Altman, even tweeted the word “her”, in an apparent reference to the film starring Johansson as the voice of an Alexa-like digital assistant. (Her, which Spike Jonze made in 2013, is a dystopian tale about the downside of artificial intelligence; referencing it to launch an AI product implies poor comprehension on Altman’s part.)

“When it’s Frank Sinatra singing rap songs, it’s high/low culture and dead/live culture, and African-American/white culture, and all those things that make your brain explode,” says Walshe. “That’s fantastic. When it’s somebody modelling Joe Biden’s voice to robocall elderly people telling them not to vote, that’s terrifying.”

The ease with which these new systems can create content also leads to a surfeit of mediocre and weird material online. Walshe calls this gunk. The internet is already flooded, she says. “There’s a long-form essay that 404 Media published about what they call the zombie internet ... They did this deep-dive into all these bizarre pages on Facebook where it’s AI-generated pictures of Jesus and AI-generated pictures of children from the global south who have built cars out of water bottles. Really weird, niche stuff.”

Much of the “gunk” is sexual in nature. “My collaborator Jon Leidecker says ‘All information wants to be porn,’” says Walshe. She cites a crowdfunding campaign by a tech company called Unstable Diffusion, which wanted to create images that were “30 per cent naked ladies, 30 per cent porn and 30 per cent anime. That isn’t even a real female body. It’s like a weird version of the female body.” She mentions another generative-AI platform where people were making “Nazi anime pornography of women on all fours as pigs ... There’s an entire generation of boys whose minds are just going to be very melted”.

The flood of new material further pollutes the public data sets from which these AI platforms generate their results. Public data sets are already problematic. “Trevor Paglen did this fantastic ImageNet Roulette project with Kate Crawford, who wrote the Atlas of AI,” says Walshe. “It was a deeply political project about [how] the image data set on which loads of the AI networks are built is deeply racist ... And in response to that artwork the data set was pulled, and they said, ‘We need to clean it up.’ A lot of the time the artists are doing the Lord’s work. They’re engaging with it politically. They’re trying to interrogate what’s happening.”

In some ways, says Walshe, critics of AI are focusing on the wrong things. It’s a human problem, not a technological one, she says. It’s about the humans who want to use the tech, the humans who should be regulating it and the humans who profit from it. “Every step of the way, these decisions are made by humans.”

She thinks there are some areas where AI music will flourish. For friends working in commercial music, she says, their commissions already sound like AI prompts. “The directors say, ‘I really liked the Stereolab track. Can you make it sound like that?’ I can see AI being used in ways like that. It’s the new muzak in many ways, [designed] to be junk in the background. There’s lots of music we hear every day we don’t listen to in detail. We just let it wash over us.”

Walshe doesn’t believe humans will ever lose the desire to make music with their own voices or with physical instruments, because the things that are difficult about making music are the things that make it meaningful and fun. “In London there’s an amateur choir called Musarc. I’m doing a project with them at the moment, and they just all like being in the same room together, singing.”

The musician and musicologist Christopher Small “calls it ‘musicking’. It’s an activity that we do together. I don’t think there’s ever any danger that that’s going to disappear. I don’t think there’s ever any danger that people aren’t going to want to write music from scratch. There are people for whom that’s a really interesting problem to solve. We could all just go to restaurants every single night, and we could buy prepackaged food every single night, but people still love to cook.”

Walshe is unsure what these AI music platforms will ultimately be used for. She doesn’t think that all their creators even know. “They’re just dumping it into the public domain and seeing what happens,” she says. “There’s probably going to be uses for this tech that we can’t quite foresee yet. One of the exercises I do when I’m doing workshops about AI is I say to people, ‘Write down what would help you and what would actually be useful. What would you be willing to pay a €10-a-month subscription for?’ And generally what people are willing to pay for is not what’s available right now. I think maybe there isn’t enough dialogue with artists. They seem to be solving a problem that doesn’t need to be solved.”