What if AI makes us move in the wrong direction?

Increasing computational power is double-edged

The cabin of a driverless car. “Machine-learning systems are developed using a pool of examples from which they are expected to learn. Problems arise when the training data doesn’t match the real world”

When an Uber autonomous test car killed pedestrian Elaine Herzberg in Tempe, Arizona, in March 2018, it sent alarm bells around the world of artificial intelligence (AI) and machine learning.

Walking her bicycle, Herzberg had strayed on to the road, resulting in a fatal collision with the vehicle. While there were other contributory factors in the accident, the incident highlighted a key flaw in the algorithm powering the car.

It was not trained to cope with jaywalkers, nor could it recognise whether it was dealing with a bicycle or a pedestrian. Confused, it ultimately failed to default quickly to the safe option of slowing the vehicle, which might have saved Herzberg’s life.

It’s a classic example of where AI confuses the map with the territory, as Brian Christian explains in his book The Alignment Problem.

“Machine-learning systems are developed using a pool of examples from which they are expected to learn. Problems arise when the training data doesn’t match the real world,” he tells The Irish Times.

A model is only as good as the data it learns from. At a very simple level, for example, if you are training a machine to recognise apples and you only show it pictures of red apples, when it comes across a green apple it may label it as a pear.
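To make that failure mode concrete, here is a minimal Python sketch of a toy nearest-centroid classifier whose training pool contains only red apples and green pears. It is my own illustration, not an example from the book; the colour features, labels and numbers are all invented for the purpose.

# Toy classifier trained on red apples and green pears only.
# A green apple is outside the training distribution, so it lands
# closer to the "pear" centroid and gets the wrong label.

def centroid(points):
    """Mean of a list of (r, g, b) colour features."""
    return tuple(sum(p[i] for p in points) / len(points) for i in range(3))

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Training pool (0-1 RGB values, made up for illustration).
training = {
    "apple": [(0.90, 0.10, 0.10), (0.85, 0.20, 0.15), (0.95, 0.15, 0.10)],
    "pear":  [(0.60, 0.80, 0.20), (0.55, 0.75, 0.25), (0.65, 0.85, 0.20)],
}
centroids = {label: centroid(points) for label, points in training.items()}

def classify(colour):
    return min(centroids, key=lambda label: distance(colour, centroids[label]))

green_apple = (0.40, 0.80, 0.20)   # never seen during training
print(classify(green_apple))       # prints "pear": the map isn't the territory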

One of the more widely used public datasets in recent years was the so-called Labelled Faces in the Wild (LFW) dataset, assembled in 2007. It was inherently biased, with 77 per cent of the faces male and 83 per cent white. In terms of classifying faces, AI systems had a 0.3 per cent error rate when classifying light-skinned males, but this rose to 34.7 per cent when classifying dark-skinned females.
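Figures like these come from breaking error rates out by demographic subgroup rather than quoting a single aggregate number. The short Python sketch below uses synthetic counts, chosen only to echo the shape of that disparity rather than the published figures, to show how an overall error rate can hide the gap.

# Disaggregated audit sketch with synthetic counts.
# Each record is (subgroup, prediction_was_correct).
results = (
    [("light-skinned male", True)] * 97 + [("light-skinned male", False)] * 3
    + [("dark-skinned female", True)] * 65 + [("dark-skinned female", False)] * 35
)

def error_rate(group):
    outcomes = [correct for g, correct in results if g == group]
    return sum(1 for correct in outcomes if not correct) / len(outcomes)

overall = sum(1 for _, correct in results if not correct) / len(results)
print(f"overall error rate: {overall:.1%}")        # 19.0% looks tolerable
for group in ("light-skinned male", "dark-skinned female"):
    print(f"{group}: {error_rate(group):.1%}")     # 3.0% versus 35.0%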

In his exhaustively researched and thoughtful book, which involved over 100 formal interviews and several hundred informal conversations with researchers and experts in this field, Christian, a visiting scholar at the University of California, Berkeley, summarises the latest thinking and initiatives in this area.

Getting machines to think like humans may be some way off yet, but developing a more human-like response is an increasing feature of AI. Technologists are now talking to sociologists and psychologists, while the latter are also developing a better understanding of the technologies underpinning AI and how they might be adapted and improved.

Safety

Safety is no longer a taboo subject. One researcher told the author that when he attended one of the field’s biggest conferences in 2016, people looked askance when he said he was working on the issue of safety. A year later, nobody raised an eyebrow.

A sense of humility may be coming into the discipline, he agrees. The latest generation of driverless cars is unlikely to make the mistakes of the Uber example above, while in the field of medical imaging, if the software isn’t sure of a diagnosis, it will defer to human expertise.

“We want the system to give us its best guess but crucially also some notion of its certainty or uncertainty. That’s a crucial element of making these systems safe.”
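In practice that can be as simple as pairing each prediction with a confidence score and routing low-confidence cases to a person. The Python sketch below is my own illustration of that hand-off pattern, assuming a hypothetical predict() stub and an arbitrary 0.9 threshold; a real diagnostic model would supply the probabilities.

# Hand-off pattern: act on confident predictions, defer uncertain ones.
# predict() is a hypothetical stand-in for a real diagnostic model.

CONFIDENCE_THRESHOLD = 0.90   # arbitrary cut-off, for illustration only

def predict(features):
    """Stand-in model: returns (label, estimated probability)."""
    return ("benign", 0.97) if sum(features) < 1.0 else ("malignant", 0.62)

def triage(features):
    label, prob = predict(features)
    if prob >= CONFIDENCE_THRESHOLD:
        return f"automated result: {label} (confidence {prob:.2f})"
    return f"low confidence ({prob:.2f}): refer to a human expert"

print(triage([0.2, 0.3]))   # confident -> automated result
print(triage([0.7, 0.6]))   # uncertain -> deferred to a clinician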

Christian admits to being sanguine about AI overall. “We find ourselves at a fragile moment in history where the power and flexibility of these models have made them irresistibly useful for a large number of commercial and public applications and yet our standards and norms around them are still nascent. It is exactly in this period that we should be most cautious.”

Machine-learning systems, he notes, not only demonstrate bias, but may silently and subtly perpetuate that bias.

Take the judicial system. In the US, algorithmic tools have been used for decades to make decisions on remand and parole, a process that has accelerated rapidly with the use of modern AI tools. There is justifiable concern on the part of many about putting the judicial system on autopilot and the anomalies this creates, he says.

“One of the models makes a prediction about whether you are going to be rearrested or not. If you look at the defendants whom the model gets wrong, you see significant racial disparities. Black defendants are twice as likely as white defendants to be misclassified as higher risk, while the opposite is the case for white defendants.”
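The disparity he describes is the kind of thing an audit finds by comparing error rates across groups, for instance the share of people who were not rearrested but were still flagged as high risk. The sketch below uses a handful of synthetic records, not the real data behind any deployed tool, purely to show the calculation.

# Audit sketch with synthetic records: (group, flagged_high_risk, rearrested).
records = [
    ("black", True,  False), ("black", True,  True),  ("black", False, False),
    ("black", True,  False), ("white", False, False), ("white", True,  True),
    ("white", False, True),  ("white", False, False),
]

def false_positive_rate(group):
    """Share of people in the group who were NOT rearrested but were flagged."""
    not_rearrested = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in not_rearrested if r[1]]
    return len(flagged) / len(not_rearrested)

for group in ("black", "white"):
    print(group, f"{false_positive_rate(group):.0%}")   # 67% vs 0% in this toy data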

Conviction predictor

Another fundamental problem here, he adds, is that all the model knows about are arrests and convictions in reported and solved crime cases, not overall levels of crime and not wrongful arrests and convictions. It is therefore not a recidivism predictor; it is an arrest and conviction predictor.

As he eloquently puts it in the book: “We must take great care not to ignore the things that are not easily quantified or do not easily admit themselves into our models. The danger, paraphrasing Hannah Arendt, is not so much that the models are false, but that they might become true.”

There’s a concern, he says, that when we speak about alignment we’re looking narrowly at the interests of the people who design and develop AI systems. Business will prioritise economic efficiency, but other stakeholders affected by these models, such as governments, regulators and consumers, may have different and sometimes conflicting interests.

There is no doubt that AI systems are becoming more pervasive as the cost of deploying and distributing technology decreases. Increasing computational power is double-edged, he reminds us. You can go more efficiently in the wrong direction.

Nature, he notes, though shaped in many ways by humans, never ceases to find ways to buck the systems we attempt to impose on it. That element of humility again is necessary. “One of the most dangerous things that we can do in machine learning is to find a model that is reasonably good, declare victory and henceforth confuse the map with the territory.”

Medicine is one area where he believes AI and machine learning could make a huge contribution to the world. If you can build a model that emulates the world’s best cancer diagnosticians – who are unevenly distributed around the globe – then you have the possibility of creating a level playing field in access to that level of critical healthcare, he says.

“If someone with a basic camera on their phone can take a picture of a mole on their skin they are worried about, forward it and get an expert response, for example, that’s the kind of thing that feels hopeful.”

Human brain

Artificial General Intelligence (AGI), machine learning that comes close to replicating the nuanced nature of the human brain, has long been seen as the “holy grail” in the development of AI. As its possibility looms into view, a raft of ethical and safety issues has come to the surface. Could the machines ultimately become more intelligent and powerful than their human creators, for example, and will they always be a force for good?

Christian is hopeful, however. Concern about these issues has resulted in a groundswell of activity. “Money is being raised, taboos are being broken, marginal issues are becoming central, institutions are taking root and, most importantly, a thoughtful engaged community is developing and getting to work. The fire alarms have been pulled and the first responders are on the scene.”

The Alignment Problem: How Can Machines Learn Human Values? by Brian Christian is published by Atlantic Books.