A very serious issue lies behind the ‘killer robots’ mockery

Developing autonomous weapons is not just the stuff of Hollywood summer blockbusters

This mock “killer robot” in central London during the launch of a campaign calling for the ban of lethal robot weapons attracted plenty of attention. But does such an image do the cause much good? Photograph: Getty Images

Do you take killer robots seriously?

If not, maybe you should. That’s the argument put forth this week by over 2,000 technology, science and artificial intelligence experts, and some 10,000 other supporters, alarmed at the prospect of, well, killer robots.

Translating that into terms that sound a little more serious, the open letter, published by the Future of Life Institute, argues that for the sake of humanity, the world's governments should agree to ban the development of autonomous weapons.

Signatories include astrophysicist Stephen Hawking, Apple co-founder Steve Wozniak, Tesla founder and chief executive Elon Musk, and Google's research director Peter Norvig. The presence of Musk and Hawking, both already known to be concerned about artificial intelligence, guaranteed the letter got media attention.

Autonomous weapons are defined in the letter as those which “select and engage targets without human intervention”. They would include armed quadcopters that could hunt down and kill people meeting pre-determined criteria, the organisation states, but would not include remotely piloted vehicles such as drones, where humans make the targeting decisions.

Once one nation starts developing them, everyone will be at it, the letter argues, ensuring robotic weapons “become the Kalashnikovs of tomorrow”, likely to be used by rogue states, terrorists and criminals in utterly terrifying ways.

“Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group,” the letter says.

While sending robot combatants into battle instead of people might reduce casualties, at least for the side deploying them, the letter argues that artificial intelligence could instead be used in more positive ways to “make battlefields safer for humans, especially civilians, without creating new tools for killing people”.

While it doesn’t say so outright, the letter’s message is definitely “don’t laugh; we’re serious”. Developing autonomous weapons that hunt down and kill is not just the stuff of Hollywood summer blockbusters but “feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms”.

Cartoonish treatment

Yet the temptation is always to pose this threat in a cartoonish way. The media eats this stuff up and spits it back out to readers in CGI-movie terms, with a barely suppressed grin. And readers are drawn to the headlines in pretty much the same spirit.

That’s certainly why a letter signed largely by research and computing geeks got widespread attention and plenty of Facebook and Twitter action.

But the cause isn’t helped by the fact that the academics themselves sometimes buy into this. There’s a global organisation called the Campaign to Stop Killer Robots with the same laudable goals as those espoused in the open letter, and many of its members are signatories to it.

The Killer Robots campaign comprises numerous non-governmental organisations (the eminent Human Rights Watch is on the steering committee) and is pushing for a global treaty banning autonomous weapons. But while using the term “killer robots” in your organisation’s name will certainly get more attention than “autonomous weapons”, I’m not sure it does the overall cause much good.

Seems to me it just locks a very serious issue into the public mind as a kind of jovial silly-season topic (at best) or – worse – creates actual enthusiasm in some people for a future that would seem to bring their favourite movies and games to life.

The temptation for headline writers is to go the Hollywood route. Hence CNET opted for “Terminator: Meet the organisation trying to stop killer robots” when it wrote about the campaign group. And referring to the letter this week, we got “Ban Killer Robots Before They Take Over, Stephen Hawking & Elon Musk Say” as a headline from livescience.com.

There’s that “oh, come on now” undertone to both. And the livescience.com headline isn’t really what the letter is about at all, a reminder that even for a more serious science publication the sensationalist approach is tempting. The letter doesn’t warn that swarms of autonomous robot overlords will subdue humanity and “take over” – it argues that people or nations will use autonomous weapons for questionable ends, which is quite different.

That said, Musk and Hawking have in the past also expressed concerns that artificial intelligence has the potential to become too autonomous, and that entities – robots – could be created that develop beyond human control. I’ve a sinking feeling nations are going to go ahead and research and develop these things on the assumption that if they don’t, others will. Most likely, like chemical and nuclear weapons, they will be developed, sometimes secretly, and stockpiled.

The prospect is, as the letter’s signatories argue, very worrying. The topic deserves serious discussion that moves beyond Terminator and Matrix references, but the challenge will be to have the issue taken seriously in an age of ubiquitous CGI, gaming consoles and big-screen science fiction.