
AI could be the next industrial revolution – but what risks does it bring?

AI is ‘a hugely powerful tool that we need to spend time studying and mastering – before it masters us,’ says Minister of State Ossian Smyth

The emergence of ChatGPT has sparked debate about the potential impact of AI. Photograph: Alamy/PA

This week, US president Joe Biden told the ranks of politicians and dignitaries in Leinster House that the world is at an “inflection point”.

“The choices we make today are literally going to determine the future or the history of this world for the next four to five decades,” he told the joint Houses of the Oireachtas. To emphasise his point, he referred to artificial intelligence: “It holds enormous promise and enormous concern.”

His underlining of artificial intelligence (AI) comes at a time when the emergence of ChatGPT has sparked a messy, overwhelming and sometimes overwrought debate.

US senator Chris Murphy last month tweeted a claim that ChatGPT had taught itself to do advanced chemistry. “Something is coming,” he warned. “We aren’t ready”.


Consensus about what, in fact, is coming is hard to come by – but many commentators agree that the pace and scale of change brought about by rapid advances in AI, and the promise of more to come, are enormous.

Dr Seán Ó hÉigeartaigh, an Irish academic who directs the AI: Futures and Responsibility programme at the University of Cambridge, says the transformative potential of AI is “of the order of the internet – and I think there’s a possibility in a 10-year time frame, if progress continues, that we’re looking at something of the order of the industrial revolution”.

Ó hÉigeartaigh admits to being on the bullish end of the spectrum, but his view is not dissimilar to that of some senior figures in the Government. One senior source said it was similar to the advent of nuclear power. Another senior figure said the technology was “transforming how we all live without us even talking about it”.

As the AI debate gains currency, more questions are being asked about the impact on people’s digital and physical lives, the security of their work and the social, political and economic consequences of rapid change at a predicted scale and speed that is hard to fathom.

“The real question is what’s happened in the last two months,” says Richard Browne, director of the National Cyber Security Centre (NCSC), the State agency charged with scanning the digital horizon for threats. “Everything about the policy field here is challenged by the speed at which this is happening.”

‘Perfect storm’

Recent advances in AI have been enabled by what Dr Edward McDonnell calls a “perfect storm”. McDonnell, director of CeADAR, Ireland’s national centre for applied AI, says the availability of enormous amounts of data on which new systems can be “trained” has come at the same time as “phenomenal advances in the amount of computer power that’s available”. He says it is “a discontinuity in the process where there is a huge leap forward”.

The potential benefits are massive: optimists sketch out a future where almost all sectors of society benefit from huge productivity gains. “We need to embrace it and not condemn it,” says a senior Government source.

However, warnings are also stacking up. The Financial Times this week published an article by an AI investor arguing that “we must slow down the race to God-like AI”; one paper published by a US AI researcher warned that “natural selection favours AIs over humans”. Some reports have been downright chilling: Belgian newspapers published details of a man who died by suicide after interacting with a chatbot about climate change for weeks; US activists published screenshots of interactions with an AI-powered experimental Snapchat service which offered advice to a 13-year-old girl planning a sexual encounter with a 31-year-old, and to a child seeking to cover up bruises before a visit by protective services (in both instances, adults played the part of children seeking advice).

There is a growing school of thought which argues that the speed of recent developments means optimism should be tempered with a realistic assessment of how life may change, and how this needs to be planned for.

“I’m enthusiastic and troubled,” says Cambridge University’s Ó hÉigeartaigh. “Things are moving very quickly at the moment and we’re nowhere near prepared for the impact of it.” He believes AI could relieve a lot of the “drudgery of modern life” and be massively helpful to professionals. However, it could also make wide swathes of work less, or only marginally, economically viable.

Different sectors need to be included “to make sure that AI isn’t just a thing that happens to them but that they are shaping the governance of it as it affects their sectors”.

“We as a society, including those of us who are thinking about the fundamental impact of these technologies, haven’t had enough time and are struggling to keep up with the pace at which things are happening.”

Guidelines

Earlier this week, Minister of State for cybersecurity Ossian Smyth said he had asked the NCSC to draw up public-facing guidelines for a world laced with new or heightened risks arising from the proliferation of AI technologies. He says “a wave” of disruption may happen, including in fraud, where scams that were previously conducted one-to-one can proliferate at scale, underpinned by sophisticated AI.

Another is politics, where so-called ‘influence networks’ can skew debate through the use of AI-powered bot armies, or through the rapid production and dissemination of deepfakes and other forms of misinformation. “It’s about the scale and volume that can be produced,” says Joseph Stephens, head of engagement with the NCSC. “It’s really just accelerating the threat picture that is already in place.”

Politics, some fear, has a recent track record of vulnerability to new threats. Mark Brakel is director of policy with the Future of Life Institute, a non-profit which aims to mitigate risk from technology; it has historically been part-funded by Elon Musk and has also benefited from significant support from Vitalik Buterin, co-founder of the cryptocurrency Ethereum. He says social media should be thought of as humanity’s first contact at scale with “really simple AI systems” which are becoming much more sophisticated. “We should learn lessons from how badly we’ve done on regulating social media to get ahead of the curve this time,” he says, warning that social media was greeted with huge excitement and regulatory efforts that merely “tinker around the edges – and a few years later we woke up to a broken political system”.

Regulation

There have inevitably been calls for more regulation of AI. In Washington, Senate majority leader Chuck Schumer is taking early steps towards regulating AI, while Brussels is somewhat ahead of the game, having worked on a draft AI Act for two years – it is now scrambling to adapt those efforts to the latest developments.

Ireland has an AI policy dating from 2021, with the Department of Enterprise holding overall responsibility. Other parts of the State apparatus are also involved – the NCSC has been factoring AI into its threat assessments for several years.

It is understood that security officials have been interacting with multinationals based here and are in the process of drawing up more guidance for State agencies, while a midterm review of the current cybersecurity strategy, due within weeks, is expected to place an increased emphasis on AI.

While the threat is vivid, security officials also say it has not yet featured as a primary risk on assessments shared with Ireland by international actors or friendly governments. It is yet to punch through to regularly circulated lists of the most pressing threats faced by the State.

The NCSC expects that the threat, when it materialises, won’t come in the first instance from the technology itself – but from a deployment of it by a bad actor such as a criminal organisation or a rogue state. “The real challenge is those countries and entities who don’t comply with international law,” says NCSC director Richard Browne. The NCSC also argues that AI tools allow it to become more effective, even as they may empower bad actors.

“It’s basically a bit of an arms race between both sides. And, like all new technologies, it’s a double-edged sword,” says Stephens.

The real challenge, Browne says – while emphasising that there is no suggestion this is currently happening – is “what happens if tools and technologies develop at an exponential rate, and that the things used to defend networks – the processes, the tools – become redundant, essentially, and you’re caught within a technological revolution rather than evolution”.

Call for pause

The Future of Life Institute last month published an open letter calling for an immediate pause of at least six months in the training of AI systems more powerful than GPT-4, signed by Musk, Apple co-founder Steve Wozniak and others. Ó hÉigeartaigh was among them – he says it is crucial that a space for debate emerges. While he and many others are sceptical about the imminent threat of an “artificial general intelligence”, equivalent to human-level intelligence, he says the speed of recent advances means “these are questions that now actually need to be taken somewhat seriously”.

“The best shot we have right now is the EU AI Act, regulators being adequately funded and resourced,” says Ireland’s AI Ambassador Patricia Scanlon, who describes herself as an advocate for ethical AI. “I’m really happy this came out when it did because it’s shining a light on the gaps in legislation,” she says. In the face of rapid change, Scanlon is wary of firm predictions: “Anybody who comes out and says they know exactly what’s going to happen in the next five or 10 years, I don’t think they’re being truthful,” she says. But she warns that there is, nonetheless, a window to prepare.

“I will get nervous if I feel people don’t take it seriously, allow it to run rife or bend to lobbying on this,” she says.

“I think we have time, and I really hope we all take it seriously at this point to get it right later.”

Minister of State Dara Calleary, who oversees the policy, said the EU legislation will “put in place the guardrails for the use of AI” and that Ireland was “actively engaged” in Brussels on the development of the policy.

Smyth, the minister for cybersecurity, warns more bluntly: “It’s a hugely powerful tool that we need to spend time studying and mastering – before it masters us.”