Unthinkable

This philosopher foresaw the biggest problem with AI. Now he’s sounding another warning

Unthinkable: Things happen every day for which no one is held responsible, and artificial intelligence is set to make things worse

Twenty years ago, philosopher Andreas Matthias identified what was then a new ethical question: who is responsible if artificial intelligence (AI) does harm? His warning about a “responsibility gap” – first articulated in 2004 in what University of Galway lecturer John Danaher describes as “an obscure article in an obscure journal” – is today generating lively debate among researchers as ChatGPT and other AI tools gain influence.

Matthias, who worked as a computer programmer before branching into robot ethics, drew attention to the “increasing class of machine actions” over which nobody has sufficient control to assume responsibility. This constitutes the “responsibility gap”, since it would be unjust or unfair to blame humans for actions that weren’t theirs.

Speaking to The Irish Times two decades on, Matthias – who is based at Lingnan University in Hong Kong – says: “Today, I think, we have a much wider, more general responsibility crisis. We still don’t have an accepted solution to the ‘responsibility gap’, but additionally we have introduced more and more AI into our lives that works in ways that we cannot predict.

“The responsibility gap is widening into a responsibility abyss before our eyes, and we are, at the moment, unable to stop or contain it.”


It’s a grim analysis. But here’s a radical thought: is the responsibility gap such a big problem? Danaher argues that “responsibility gaps are not always a bad thing” and “might sometimes be a good thing”.

The Irish researcher highlights a couple of downsides to moral accountability. First, it can give rise to an unhealthy fixation on “blaming and punishment” – think of the negative role shame has played in Irish society.

Second, holding someone responsible seems unfair in the case of “tragic choices” where you have to pick between two morally equal, or near-equal, actions. There are lots of situations where “doing the right thing” is unclear and AI could be seen as an honest broker that avoids finger-wagging or judgmentalism.

Danaher gives the example of “medical triage decisions – such as the decisions on ventilator allocation that confronted many physicians in the early days of the Covid-19 pandemic”. Such tragic choices can leave a moral “taint” or stain on human decision-makers, and AI could mitigate that cost.

What does Matthias think? Would there be a benefit in saying “let the algorithm decide” for some decisions?

“But then we’d have to know what the basis for the AI’s decision will be. As humans, we don’t disagree just because it’s fun to disagree. We disagree because we have fundamentally different values, based on our cultures, our religious beliefs, our political stances. ‘Let the AI decide’ is even worse than ‘let the dice decide’,” he replies.

“In the case of dice, it’s at least a purely random decision. In the case of AI, it’s a decision based on a US or Chinese ideological foundation . . . AI is not impartial or fair. It is only obscure, which sometimes might look like impartiality because we don’t see how its decisions come about.”

While Matthias doesn’t see any upside to the responsibility gap, he does agree it should be seen in a wider context.

“Most of what keeps our societies running is now already far removed from the possibility of responsibility ascription. We blindly trust that somehow things will work.”

He cites the example of Google Maps, which we assume to be infallible until it sends us the wrong way. The last major financial crash, and the failure to hold those responsible to account, also comes to mind as we reach the 15th anniversary of Ireland’s bank guarantee this week.

“Our societies have already sleepwalked into a responsibility gap that is much bigger than that generated by AI,” says Matthias.

An open question is whether the responsibility gap is having a knock-on effect on human behaviour. When Bank of Ireland’s ATM glitch struck last month, for example, there were queues of people seeking to take out money that wasn’t theirs. Is the general lack of accountability in society making us less ethical?

More specifically, could the spread of algorithmic decision-making give rise to a kind of moral cynicism?

“I think that we are past that stage already,” Matthias replies. “We are used to things happening every day for which no one is held responsible.”

He points to the long-running €13 billion Apple tax dispute, which grinds on “while people everywhere are harassed by their governments every year if they fail to declare a few hundred euros in income”.

He namechecks German multinational BASF, among the world’s largest chemical producers, which, along with other industry giants, has successfully stymied regulation of “forever chemicals”. Such firms “can go on producing poisonous chemicals in the EU, while I will be fined if a few drops of oil leak from the bottom of my car . . . We live in a very unequal world, and this is why many people have already given up on it.”

Having AI act with impunity can only make matters worse. But since he first wrote about the responsibility gap, Matthias has become more firmly of the view that “AI is not our main problem”.

“The problem is to straighten our capitalist system in general, to bring social justice back, to re-introduce a sense of responsibility and accountability to society that does not exclude industry lobbies, politicians, multinational companies like BASF, Apple and Google, and the rich who can afford to ignore every rule.”