“I’m a realist,” Patricia Scanlon says when asked where she stands on the spectrum of excitement to dread in relation to how artificial intelligence might transform society in the coming years.
It’s a suitably cautious response from the chairwoman of the recently appointed Government advisory council on AI. Pressed further as we sit in the corner of a busy Dublin coffee shop, it turns out that, for all her accumulated expertise and insight, she reckons we simply have no way of knowing for the moment.
Having spent more than a decade developing speech recognition systems intended to help children play and interact in educational settings, the engineer and tech entrepreneur is clearly a believer in the rapidly developing technology’s potential for good. She points to social media, though, as evidence of just how swiftly the brighter, better lives stuff can go so badly wrong.
Just a few months on from selling her successful AI start-up, SoapBox Labs, to US education company Curriculum Associates, she says the European Union AI Act will provide “a framework” of much-needed regulation but that, given the speed of advances in the area, there needs to be an ongoing assessment of developments and adjustments to the “guardrails” to maximise the enormous potential benefits to society while mitigating the substantial risks.
Dr Scanlon has been around the area for quite a while, having worked with Bell Labs before starting SoapBox. She has been Ireland’s Government-appointed ambassador for AI for almost two years.
The intention behind that appointment, she suggests, was that she might help the State have a conversation about AI while her motivation for accepting it included a desire to see that the new technologies would be ethically employed.
Asked about the potential for AI to actually reinforce bias in areas like hiring, insurance and even, in the United States, criminal justice sentencing, she acknowledges the reality of the concerns expressed but apportions blame to the people selecting whatever data has been used to train the various systems.
“An AI system is never going to discriminate,” she says. “It’s just an algorithm trained on data. If the data shows bias – too much or too little representation of particular socioeconomic backgrounds... race, ethnicity, you know, any kind of bias – it’s going to perform less well, producing more errors, whether it’s false positives or false negatives. That goes for every system.
“So when people talk about bias in AI, it’s about carelessly building AI and not paying attention to the data distribution. That builds biased AI.
“But the interesting thing is that you can take that a step further and say that if you thoughtfully build AI systems and ensure a lack of bias in how you build it, you can actually create objective systems... something that might scan a student’s CV but not take into account their address in a socioeconomically deprived area, not take into account their gender. It could actually very objectively look at the pros and cons of a CV. I think people would agree that that would be a good thing.”
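The blind screening Dr Scanlon describes can be sketched in a few lines of code. This is purely illustrative – the field names, the list of sensitive attributes and the scoring weights are all hypothetical, not drawn from any SoapBox Labs or Curriculum Associates system – but it shows the basic idea: strip out attributes that could carry bias, such as address or gender, before any score is computed, so the result depends only on job-relevant fields.

```python
# Illustrative sketch of "blind" CV screening: remove attributes that
# could introduce bias before scoring. All field names and weights here
# are hypothetical, chosen only to demonstrate the principle.

SENSITIVE_FIELDS = {"name", "address", "gender", "age"}

def redact(cv: dict) -> dict:
    """Return a copy of the CV with sensitive attributes removed."""
    return {k: v for k, v in cv.items() if k not in SENSITIVE_FIELDS}

def score(cv: dict) -> float:
    """Toy scorer that only ever sees the redacted, job-relevant fields."""
    blind = redact(cv)
    return (2.0 * blind.get("years_experience", 0)
            + 1.0 * len(blind.get("skills", [])))

cv = {
    "name": "A. Candidate",
    "address": "Anywhere",
    "gender": "F",
    "years_experience": 4,
    "skills": ["python", "sql"],
}
print(score(cv))  # → 10.0, regardless of name, address or gender
```

Because the scorer only ever receives the redacted record, two candidates with identical experience and skills get identical scores whatever their address or gender – the objectivity Scanlon argues a carefully built system can deliver.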
The issue was a very real one at SoapBox Labs where, she suggests, it was crucially important the systems aimed at benefiting kids didn’t end up disadvantaging them in any way.
“We spent a lot of time and effort to make sure it worked. Because we were thinking that if we’re going to bring AI into the classroom, we have to be damn sure it’s not [going to] not work for a certain group because of their economic background or accent or something like that.”
It is just a small part of the perspective she brings to her new role as chairwoman of the new 14-strong advisory committee on AI, established under the auspices of the Department of Enterprise, Trade and Employment. The membership features a wide range of expertise and a lot of different perspectives, with members from the academic, commercial, legal and tech sectors.
Dr Scanlon said the intention is for the committee to address issues it regards as important but also to provide support to Government departments and other agencies in need of expertise relating to AI.
She expects there will be differences of opinion among the membership but that it is clear from their initial engagement there is a considerable desire for the group to make a meaningful contribution to what she regards as a much-needed national engagement on an issue of huge societal importance.
“I think everybody’s acutely aware now after what’s happened in the last couple of years that these things can evolve very quickly. I think people are very aware that the technology is not going to be static and we have to keep up with it. It’s about staying attentive… and flexible. The committee will hopefully help people in Government and others to do that,” she said.
Dr Scanlon says that while the EU’s AI Act, which is nearing the end of its legislative journey, will provide a framework to serve as a starting point, regulation of AI will have to evolve with the technology.
She acknowledges some would prefer to see minimal regulation, or even none at all, but believes the technology is simply too powerful, its potential to harm too great for safeguards not to be put in place.
“We all admit how powerful this technology is and anything that powerful needs to be regulated,” she says.
“I think anybody arguing against that doesn’t really have very strong legs to stand on because the ‘it’s for the benefit of society’ argument and that ‘trust us’ attitude didn’t really work very well with social media. We’ve ended up with a situation where we now know how detrimental that can be to the mental health of teenage girls, for example, and still there are no guardrails.
“And that’s arguably a much less powerful technology than what we’re looking at now.”
The Act, she says, will be a good starting point “and you can build on that when there is more knowledge about how the technology is evolving. It will change. It’s not going to be a static piece of legislation. It’s going to change over time.”
She acknowledges the argument that regulation stifles innovation. “But I personally disagree. It allows for research to proceed and I think if you look at the medical sector, at fintech or the finance sector, they’re heavily regulated but nobody would argue they don’t innovate.
“You wouldn’t throw up a building or a bridge without some kind of compliance or regulation because it’s dangerous. There needs to be a certain amount of caution where something that can harm people is concerned.”
For businesses, she says, as for the rest of society, there are huge potential benefits from this new technology but concerns too. It is simply too early to predict the precise nature of the technology’s impact on the workplace, she says.
“I always think of myself as a realist and my view is that we just don’t know. It would be great if we only got the benefits and we were highly successful at mitigating the risks. But I do believe, if we’re not careful, we’ll end up introducing unintended consequences. That could be anything from propagating existing biases to absolutely disenfranchising groups. So, I think it’s reasonable to put in the guardrails.”
Investment will be needed, meanwhile, to maximise the benefits but work remains to be done in areas like the widespread reskilling that is expected to be necessary as jobs evolve.
“You can argue back and forth about how this is all going to work out, that employers will be able to do more or hire less. I’m not sure I agree with the people who point to previous technological leaps having produced extra jobs and say this one will too. We’ve never had this before so I don’t think we know.
“I think we need to prepare ourselves for any eventuality. We need to make sure that people talk about skilling up enough people. But we have to ask questions like ‘can these people be skilled up? Is it practical? Are we looking at the demographics? Are we looking at the education of people? Is it possible? What are we skilling them to do?’
“We just need to take a little bit of time and do an analysis of this. To look at what’s the realistic prospect and how we can be best prepared. This technology is not something we can stop, but it’s definitely something we can prepare ourselves better for.”
In the meantime, businesses need to be focused on the ways they can be impacted, positively or negatively, by the technological changes happening around them.
“It’s important for every business to recognise that AI can help you but you also have to be careful. If you thought you had a USP [unique selling point] in the market, or that you had a lead, is it likely that AI is now going to help a competitor catch up with you? And if it is, what are you going to do about it? Because you may actually have access to the expert knowledge required to make another leap forward.
“We operate in a global market and a lot of local businesses here do sell globally. If you aren’t looking at AI and what it can do, you can be sure one or more of your competitors are.”