Will AI take over art? ‘No amount of sentimentality is going to stop it from happening’

This lifting of the barrier for AI in the art and entertainment industries might sound exciting for some, but for others the developments are setting off alarm bells


Zack London is reluctant to describe himself as an artist. At least for the work he’s most known for. The 33-year-old American is by many accounts a talented illustrator, but to almost 500,000 followers online he’s better known as the man behind Gossip Goblin.

Under this Instagram alias London shares videos starring Fredvog, a fictional adventurer of his creation who looks a little like Gandalf in a red gnome’s hat. These popular shorts see the protagonist encounter all sorts of absurd and Tolkienian creatures: goblins, trolls, tiny civilisations living atop wild truffles, ominous beings from the underworld, a holy cat empire and their age-old adversaries, the mole men.

It’s all a bit surreal, but there’s something compelling about the narratives London creates. What’s unique about his work is that, from the soothing voiceover to the photorealistic visuals, it’s all created using artificial intelligence (AI).

“It’s radically different than the stuff I’ve been doing my whole life,” London says from his home in Stockholm. As a UX – or user experience – designer he has kept a keen eye on developments in the tech world; two years ago he realised that a computer could produce artwork of a similar quality to a piece he might spend more than 100 hours on. So London began experimenting with generative-AI tools.

Generative AI is a type of artificial intelligence that creates content – such as text, music or images – by learning patterns from existing data sets and then generating similar content from user prompts. ChatGPT, the world’s most popular generative AI, receives more than 1.8 billion site visits a month. That’s more than Netflix.

London uses a mixture of affordable software from Midjourney and Runway to create images and animate them for his videos, then pairs them with his own script read by a computer-generated voice. He doesn’t monetise his content but has sold commissions from his website. Similar works combining familiar pop-culture franchises with wacky twists have flooded the internet in the past year or so. One video depicting Harry Potter characters in a Balenciaga-themed montage has racked up 12 million views. Or how about four million views for Lord of the Rings by Wes Anderson? It’s rudimentary and fun, but its potential for disruption in the art and entertainment world is difficult to overstate.

“When you see art in the world,” says London, “there are some credentials behind the person putting it out. There’s bad art, there’s good art – whatever – but we can see that people put time into it. AI strips all those barriers to entry away. Some ‘AI artists’ will claim that there is a mastery in prompt engineering. But as a fairly popular AI artist myself, I can say that’s bullshit. There is zero skill involved in generating AI images.”

Yining Shi, who is senior engineering manager and principal research scientist at Runway, says it is hard to predict the future of AI given how fast tools are developing. But such advancements “will make creating professional-grade content accessible to everyone, enabling users to generate complex media with minimal input and greater control”. This “democratisation” of image and video creation, as she puts it, will “foster a surge in creativity and expression”. The company’s goal is to free up time and money for film-makers and artists, focusing on “human augmentation” rather than the outright replacement that has worried many in the industry.

This lifting of the barrier might sound exciting for some, but for others – animators, for example, whose technical expertise looks to be outstripped by a computer’s generative ability – the developments are setting off alarm bells.

Jeffrey Katzenberg, one of the founders of DreamWorks Animation, the studio behind movie franchises such as Shrek, Madagascar and Kung Fu Panda, suggested late last year that as many as 90 per cent of animation artists will soon be replaced by AI. In “the good old days when I made an animated movie”, he told a Bloomberg forum, “it took 500 artists five years to make a world-class animated movie. I think it won’t take 10 per cent of that three years out from now”. He added that those remaining in the industry will still need “individual creativity” to “prompt” software, and predicted that prompting will become a creative commodity.

This tidal wave is already manifesting. OpenAI, the creator of ChatGPT, recently teased its new video-creation tool. Sora, as it is known, uses basic text prompts to generate photorealistic videos that can dupe even the keenest eye at times. It’s not available for public use just yet, but its unveiling has been met with both excitement and worry.

Barry O’Sullivan, a professor at University College Cork specialising in AI and ethics, plays down “overstatements” of a detrimental impact on the art and entertainment industries. “I don’t believe there’s going to be major job-loss issues around AI,” Prof O’Sullivan says. “Certainly jobs will change – sometimes they will change significantly – but overall there will be new forms of employment.” Referencing Katzenberg’s comments, he suggests the demand for animated content could be enough to spread remaining jobs around multiple smaller projects rather than the blockbuster DreamWorks films of the past.

Sweeping redundancies have been forecast for some time but “just simply [aren’t] turning out to be true”, Prof O’Sullivan says. “Over that period of time there are totally new roles and totally new jobs and totally new industries being created all the time.”

How good is AI art?

AI can already speedily generate content that would have taken artists months to complete in the past. But does it have artistic merit – and could it hang easily alongside conventional art? The idea that the answer to these questions may be yes has caused some controversy.

Last year the artist David Lester Mooney created an AI-generated image of four young women in 19th-century costume. The work, titled Throwback Selfie #Magdalene, made it into the Royal Hibernian Academy’s Annual Exhibition and caused uproar in some corners of the internet. That’s not art, some keyboard critics said, that’s “grotesque”.

In another case, an artist’s Midjourney creation won the top prize in the digital-art category at the Colorado State Fair’s annual art competition. The success of the Renaissance-like space-opera scene, entitled Théâtre d’Opéra Spatial, caused some to proclaim the death of artistry in the face of AI. Almost two years later, such diagnoses, based on the outcome of a minor category at a relatively inconsequential competition, seem overblown.

Zack London is keen to play down praise for those making artistic content with AI. “The ability to create stunning visuals is not something to celebrate,” London says. “I don’t take any credit for it ... For instance, Midjourney has gotten so good that you can type in ‘Beautiful girl. Stunning. Cinematic’, or even just type in ‘girl’, and you will get some Raphael or Renaissance painting with stunning quality. And then people post that [online] as if it’s a reflection of their skill. So when there’s a big backlash against people claiming to be AI artists, I totally understand that, because it has taken the entire element of art that we assume revolves around skill out of the equation.

“I cringe to be associated with this group of people who have donned the title of AI artists like it’s some type of artist. If I type in ‘hot girl’ into Google, that doesn’t make me a software developer.

“Then it comes down to storytelling and creativity and trying to find new angles. If the act of creation itself is so incredibly simple now, then we should raise the bar in what we expect from people when anyone with the most basic understanding of the English language can produce anything.”

Would he describe himself as an AI artist? “I don’t really like the term,” London says. “I was an artist for many years before [AI], so I feel like I’ve proved my credentials. But that sounds kind of elitist.”

London is on the fence about whether AI can produce art at all. He ponders the subjective comparisons between a Rembrandt masterpiece and Levitated Mass, a 2012 installation at the Los Angeles County Museum of Art that comprises a 340-tonne boulder that cost $10 million (€9.3 million) to install. “It’s just apples and oranges. And I kind of feel like that’s where AI sits in the context of everything else. It’s like a total non sequitur, and maybe we shouldn’t call it art. It’s just some weird nebulous space that we don’t really know what to call yet.”

Prof O’Sullivan, who also sits on the Government’s AI advisory council, is more assured. “I don’t think anybody in the AI world considers [generative AI] to be artistic,” Prof O’Sullivan says. “What’s missing here is the human. These AI systems do not have any understanding of the world, so the fact that one has prompted a generative-AI system to produce [content] doesn’t mean [it] has any comprehension whatsoever. At a simple level it’s just matching words and phrases in the prompt with things it knows about in a database ... I think the art is as much about the artist’s perception and understanding and comment on the world.” He suggests we will see more artists use the technology to make “meta statements” about AI.

Mary Cremin, head of programming at the Irish Museum of Modern Art, says certain areas of the art world have welcomed AI. “There are artists who embrace technology and use it to create their work or be integrated as part of the work,” Cremin says. “For example, Jon Rafman’s algorithmically generated paintings or Doug Aitken’s new 360-degree video piece that uses a chorus of AI-generated voices. The Irish artist John Gerrard works with digital simulations ... In terms of what we classify as good art, it is quite subjective, but interest in digital art is growing rapidly, especially with a generation of digital natives.”

Who owns the copyright?

AI remains largely unregulated around the world, but the European Union has taken the lead in policing its development with its AI Act. A long-standing concern for artists is copyright. Generative-AI models crawl through digital data sets for images and text to add to their bank of knowledge of the world, and when prompted will regurgitate content based on this material. But these data sets often contain copyrighted material, which raises the question of who owns the finished product. As Prof O’Sullivan puts it, “All artists are inspired by others. But when does inspiration become violation of intellectual property copyrights?”

“This is actually a very tricky question,” says Barry Scannell, an AI-law expert who is a partner in William Fry and also a member of the Government’s AI advisory council. “There may be cases where the use of copyright works in data sets used to train AI systems could be considered copyright infringement, and there are a number of legal cases under way internationally, such as the well-known New York Times case against OpenAI, where copyright infringement is alleged.”

There is similar ambiguity with the content that AI creates. “Ireland and the UK have copyright laws which state that where there is no human author, the person who made the necessary arrangements for the creation of a computer-generated work can be considered the author,” Scannell says. This would be the person who prompted the generative AI. “But if you compare the Irish position to the general European Union copyright-law acquis” – which is to say accumulated legislation and regulations – “I’m not entirely convinced that it would survive a challenge to the Court of Justice of the European Union as, to my mind, it lies so far outside EU copyright law.

“It’s important to point out that this is very much a developing area in jurisprudence, asking questions which have literally never been asked before, and we just don’t have any definitive answers quite just yet.”

AI’s rapid pace of development can make it seem like the wild west for artificial-intelligence engineers. The case for widespread disruption in the art and entertainment world appears strong, and although Zack London is optimistic for artists, he is among those resigned to the inevitability of AI’s encroachment.

“Photography didn’t put painting out of business. I’m sure it affected portrait artists, but it changed the lens through which we looked at painting, and all of a sudden the ability to capture realism wasn’t important and artistic movements drifted towards abstraction and things the camera couldn’t capture,” London says.

“It reminds me of the Luddite movement. They probably had a point burning down all the looms and primitive factories putting weavers out of business, but in retrospect it was kind of absurd ... Putting a banner on your [social media] profile saying ‘No AI’, it just seems like a naive ant-against-a-boulder thing. Like, yeah, it’s f**ked up. But the world’s f**ked up, and this is the inevitable trajectory of what’s happening. No government is going to completely regulate this. This exists in the world. No amount of sentimentality is going to stop it from happening.”