Addressing the challenges posed by artificial intelligence (AI) and other transformative technologies is not so much a case of having the right answers as of asking the right questions early enough. That’s one of the central themes to be explored by Dr Jack Stilgoe in this year’s UCD Michael Smurfit Graduate Business School Annual Laurence Crowley Lecture on Wednesday, January 24th.
Dr Stilgoe is a professor of science and technology studies at University College London, where he researches the governance of emerging technologies. “Science and technology are increasingly powerful,” he notes. “That power would ideally be accompanied by responsibility, but we often realise the risks, inequities and missed opportunities of innovation only in hindsight.”
That’s by no means a 21st-century problem. “When faced with technology, people can believe that all of the challenges associated with it are new,” he explains. “That’s not usually the case. There’s a lot we can learn from things which have happened before. When we consider responsible innovation, the problem goes back to Frankenstein, or even further to Archimedes and others.”
Accompanying the power of innovation with responsibility is not solely the job of the innovators, he adds. “It’s not necessarily the responsibility of an Archimedes, a Victor Frankenstein, or a Henry Ford. Society has to govern the innovations.”
However, society has tended to come at it too late and found itself playing catch-up. “The challenge is to better anticipate the risks and opportunities presented by new technologies and to ask questions at an early stage.”
He commends the EU for its efforts to create a governance framework for AI in the form of the AI Act. “There is a huge amount of hype and excitement around AI. A lot of it is justified, but there are still questions to be answered. How do we have discussions at an early enough stage where we can steer the direction of innovation? The EU has been trying to come up with rules for the new technology and is being attacked by both sides to a certain extent. That’s understandable. They should get attacked – that’s probably part of the process of coming up with rules. It clarifies the issues. The attempt to assert public interest is commendable.”
Interestingly, he does not believe it is inevitable that society will always lag behind innovation when it comes to governance. “Often laws already exist to govern it but aren’t enforced,” he points out. “That is a policy choice. Governments choose to leave innovators to their own devices and struggle to catch up after that.”
A case in point is intellectual property (IP) and the laws which exist to protect it. “There are huge questions around IP and its use in AI, for example. AI is being trained on IP which is then being used in ways that have not been authorised by its creators. There are laws there to protect IP and to ensure it is not used in such ways, but the AI creators are saying they don’t apply to them. The law playing catch-up is itself part of the story. The lawmakers being in thrall to the innovations is another part of it. Part of what the EU is trying to do is decide which rights and protections already in place should be applied. It’s not tearing up the rule book and coming up with a new one.”
Opponents of regulation and legal governance usually advance the claim that imposing them would hold back innovation. “They would say that, wouldn’t they?” says Stilgoe. “Very similar arguments were made about GDPR by many who said it presented an existential threat. The story goes that regulation is in opposition to innovation. But innovation is always a product of a particular model of governance. It doesn’t matter whether it’s the US, UK, European or Chinese model. Regulation and innovation needn’t be in opposition. We see all sorts of situations where good regulation has led to innovation that otherwise wouldn’t have happened. Green energy is one example where new technologies and industries have been created as a result of regulations.”
He also notes that many people have forgotten that social media platforms exist today largely because of a law passed in the US, Section 230 of the 1996 Communications Decency Act, which said they were not responsible for the content uploaded to them by users. “Politicians are responsible for this. These companies are a construction of a law which is there because of a societal choice. That law enabled huge innovation but has also had adverse consequences we are now living with.”
Of course, one way to address those consequences would be to change the law to make the platforms responsible for content in the same way as publishers, but it’s not quite that simple now that so many years have passed.
“Millions of people now depend on the social media platforms for their livelihoods, and we are dependent on them in many other ways,” he says. “That dependence brings its own risks, of course.”
Not all innovators are opposed to regulation, however. “Being seen to be responsible and trustworthy is in AI’s long-term interest. Unregulated systems are unlikely to alleviate the concerns of the public. They won’t be seen as useful or reliable. A lot of the industry is in favour of regulation.”
He counsels caution, though, about what the industry may wish for. “There is a risk of regulatory capture if it is not done in a wise way. If you go from zero to hero in a very short period of time, as many of these companies have, you want to protect your current situation. If you listen to the current leaders, you may introduce rules to suit them and shut out competitors who have new and better innovations.”
He also advises care over the terms on which the technology is debated. “AI developers are happy to discuss the risks of AI in quite peculiar, science-fiction ways. They will discuss existential risks. This is either irrelevant or deliberate misdirection. One of the reasons the existential risks get so much attention is that they’re exciting and media-friendly. They are also apolitical, and that allows the politicians to avoid the messy business of inequity.”
The danger is that the argument becomes about the impossible rather than about what is actually happening at the moment.
Despite the challenges presented by the power and rapid pace of development of AI, Stilgoe is not despondent. “The technology is not magic,” he says. “It is being made by people with certain purposes in mind and can therefore be steered in a better direction. We will see researchers, scientists and others ask questions to help us make sense of the technology and mitigate the risks of misuse and so on. I am cautiously hopeful. I don’t presume regulations can come up with the right answers, but I do strongly encourage the regulators, politicians and others to continue asking the questions.”
And the questions shouldn’t be confined solely to risk. “There is this notion that AI will free us from the drudgery of work or open up new opportunities. These things won’t happen naturally or on their own. But we can ask what it will take for them to come about. Rather than just asking if AI will be good or bad, we can ask what it would take, and who needs to do what, to address some of these challenges and opportunities. I am not saying there isn’t some catching up to do, but there is still time to ask the right questions, and that will help us guide the innovation in a responsible direction.”
The 2024 UCD Smurfit School Laurence Crowley Lecture will take place at 6:30pm on Wednesday, January 24th. The event is free and open to the public.