Ahead of his time, Turing's pardon is now long overdue

THE GREAT British mathematician, Second World War codebreaker and computer scientist Alan Turing has become one of the heroes of modern computing for his challenging intellect, his intriguing insights into computing problems and artificial intelligence, and of course, his critical role at Britain’s Bletchley Park in cracking the devilishly complicated code of the Enigma machine, which the Nazis used to encrypt messages during the war.

Ultimately, he was treated atrociously by the very government he served so well – despite the fact that the cracking of the Enigma code in particular is credited with shortening the war and saving many millions of lives.

He was prosecuted in 1952 under Britain’s old anti-homosexuality laws, and given the choice of prison or the humiliation of chemical castration by female hormone injections, choosing – if this could even be viewed as a “choice” – the latter. Because of his sexuality, he was stripped of his high-security clearances and could no longer do the high-level intelligence and mathematical work he loved.

He died in 1954 of cyanide poisoning – officially ruled a suicide, although some, including his mother, believed his death was accidental.

This year marks the centenary of his birth and has seen many global initiatives in his honour. One in Ireland, which provided a multilayered look at Turing’s legacy, was a well-attended session at the Euroscience Open Forum last week, at which four speakers weighed his ideas and influence.

UCD philosophy professor Dermot Moran considered whether his famous Turing Test of artificial intelligence could be accepted as a mark of true intelligence; Oxford mathematician (and frequent BBC science presenter) Prof Marcus du Sautoy looked at some mathematical influences behind Turing’s view of computation; UCD cognitive science professor Mark Keane examined how human activities might be viewed as different forms of computation; and IBM researcher Freddy Lecue delved into the development of artificial intelligence since Turing.

It was an exhilarating afternoon. I found it particularly interesting to hear a philosopher’s perspective on Turing’s ideas around artificial intelligence and the Turing Test, which Turing envisioned as a gauge of machine intelligence.

In the test, proposed in his 1950 paper Computing Machinery and Intelligence, a person holds a conversation simultaneously with a human and a computer. Unable to see either, the person does not know which is which. If the interrogator cannot reliably tell the human from the machine, Turing proposed, the machine could be said to think.
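For readers curious about the mechanics, the protocol can be sketched in a few lines of code. The following is a minimal, purely illustrative simulation (the respondent functions and the question are invented): a judge receives answers from two unseen respondents in a hidden order and must guess which one is the machine. When the machine imitates the human perfectly, the judge can do no better than chance.

```python
import random

# Hypothetical respondents: the judge sees only their text, never their identity.
def human_respondent(question):
    return "I'd have to think about that for a moment."

def machine_respondent(question):
    # A perfect imitator gives answers indistinguishable from the human's.
    return "I'd have to think about that for a moment."

def imitation_game(questions):
    """One round of the imitation game: hidden order, judgment from text alone."""
    respondents = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(respondents)  # the judge cannot see which is which

    # Collect a transcript of question/answer pairs from each respondent.
    transcripts = [(label, [(q, respond(q)) for q in questions])
                   for label, respond in respondents]

    # Facing indistinguishable transcripts, the judge can only guess.
    guess = random.choice([0, 1])
    return transcripts[guess][0] == "machine"

# Over many rounds, a judge facing a perfect imitator is right about half the time.
rounds = 10_000
accuracy = sum(imitation_game(["Do machines think?"]) for _ in range(rounds)) / rounds
print(f"judge accuracy: {accuracy:.2f}")
```

The point of the sketch is only the structure: identities are hidden and the verdict rests solely on the conversation, so a judge facing perfect imitation falls to 50 per cent accuracy – which is precisely the threshold Turing proposed.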

Moran disagreed with the basic assumption of the test, that “imitation of intelligence is intelligence”. But he felt that Turing’s ideas were extremely useful in prompting consideration of questions about intelligence and consciousness.

Keane posited that we live in a world where many human activities can be cast as various forms of computation, giving us a metaphor that helps us to understand other areas of human endeavour. Turing has helped us build a conceptual framework in which we can consider human activity and intelligence.

Lecue brought the audience through an interesting historical survey of the development of artificial intelligence. Turing himself conceived a chess-playing program, but it couldn’t be tested on an actual machine because, at the time, none existed with the ability to do that kind of deep analysis.

By 1979, a computer could play, and win, at backgammon. By 1997, IBM’s Deep Blue beat a world champion at chess, and, in 2007, the Polaris computer had the frustrating experience of winning, losing and drawing in a set of poker games.

But this, Lecue noted, is “weak AI”. Only with the arrival of Watson, the IBM supercomputer that grabbed headlines for beating two champions on the US quiz show Jeopardy!, have we moved into the era of “strong AI” – but, he emphasised, “not thinking”.

Du Sautoy broadly agreed, taking the audience through some of the maths and logic behind artificial intelligence, and his own encounters with some classic experiments and new machines in an effort to understand where the dividing lines are between human intelligence and consciousness, and machine intelligence.

Even a computing-based service like Google may seem magical – “It feels like there’s little Google gnomes who seem to know what you’re looking for.” But he felt it was a group of diminutive robots that had learned to communicate with each other – developing a fragmentary language along the way that the robots understand and the humans do not – that really begins to move beyond Turing’s premise of thinking, to an area that will no doubt engage the philosophers too.

It’s a tribute to Turing that his paper and its ideas spark debate and excite our interest like this, over half a century later.

The sheer enjoyment of the session was a reminder that, nearly 60 years after his untimely death, Turing surely deserves a long-overdue and important gesture: a posthumous pardon for his conviction for a “crime” that is not even a crime today.

It would in no way repay our great debt to him, but at least it would formally erase an unwarranted blemish on his character that today functions only as a shameful reminder of society’s cruelty.