When thought is a bad idea

Can human intelligence in all its glory and with all its foibles be replicated on a computer? Some scientists say yes and look forward to building a future of limitless possibility, while others are more sceptical.

The proponents of strong Artificial Intelligence (AI) are confident that, one day, the technology will be available to fully replicate human intelligence. They regard the intricate workings of the brain as an algorithm, which consequently can and will be run on a computer.

While such a goal may reside in the realm of fiction for many, those who remain faithful to strong AI are at least cautious about their deadline, accepting that it will more than likely be their successors in the coming centuries who will reap the reward of their belief. This reward, as described in John L. Casti's Paradigms Regained, would be "an artificial brain based in silicon, metal, and plastic to simulate the 'wet' brain we all carry around inside our heads". In practice, this may seem absurd, but in principle it is not.

The modus operandi of strong AI can be "top down" or "bottom up", depending on your scientific convictions. The top-down approach sees human thought processes as the result of rule-based symbol processing in the brain. In other words, the brain actually follows rules to arrive at each of the myriad cognitive states we call thinking.
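
By way of illustration only - the symbols and rules below are invented for this sketch, not drawn from any actual AI program - the idea of "following rules over symbols" can be captured in a few lines of Python:

```python
# A toy "production system": a working memory of symbols plus if-then rules.
# Purely illustrative of rule-based symbol processing, not a real AI program.

working_memory = {"sky is dark", "streetlights are on"}

# Each rule: if all the condition symbols are present, add the conclusion symbol.
rules = [
    ({"sky is dark", "streetlights are on"}, "it is night"),
    ({"it is night"}, "people are asleep"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= working_memory and conclusion not in working_memory:
            working_memory.add(conclusion)   # "thinking" = firing a rule
            changed = True

print(sorted(working_memory))
# ['it is night', 'people are asleep', 'sky is dark', 'streetlights are on']
```

On this view, a cognitive state is simply the contents of the working memory once the rules have run.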

Concentrating on cognitive states whilst disregarding the actual architecture of the brain is seen as a shortcoming by some critics. The approach lends itself to what are termed "automatic formal systems", such as chess or Scrabble, in which symbols are manipulated according to set rules, but its weakness is highlighted when it comes to natural language processing, which is intrinsically bound up with common sense.

This weakness is captured in a simple example given by Casti from an early language translation program, which was asked to render the phrase "the spirit is willing, but the flesh is weak" into Russian. The subsequent translation back into English fell somewhat short: "The vodka was good, but the meat was rotten."

As Casti points out, when we look at an object, what we really see is a function and a context. So, when we look at a book, we don't think of it as bound sheets of paper covered in characters; instead, common sense makes us wonder what the subject-matter is and whether we'd be interested in it.

Nevertheless, the quest to instil common sense in machines has progressed over the years in the top-down camp, and improvements have ensued. For example, a program was tested with the following story:

"John wanted money. He got a gun and walked into a liquor store. He told the owner he wanted money. The owner gave John the money and John left."

The program successfully inferred that the gun was used to threaten the store-owner and that a robbery had taken place.
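
Such programs are often described as matching a story against stored "scripts" for stereotyped situations. The following toy sketch - invented for illustration, and in no way the actual program described above - shows how a robbery script might supply the unstated facts:

```python
# Toy sketch of script-based story understanding (not the actual program).
# The story mentions a gun and a demand for money; a stored "robbery" script
# lets the program fill in what the text never states explicitly.

story_facts = {"has gun", "demands money", "receives money", "leaves"}

robbery_script = {
    "triggers": {"has gun", "demands money"},
    "inferences": [
        "the gun was used to threaten the owner",
        "a robbery took place",
    ],
}

# If the story matches the script's trigger conditions, apply its inferences.
if robbery_script["triggers"] <= story_facts:
    for inference in robbery_script["inferences"]:
        print("Inferred:", inference)
```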

The top-downers have amassed some pretty feathers in their cap. Deep Blue II dethroned the world chess champion, Garry Kasparov; BACON can rediscover laws of physics from observed data; and Cyc is a vast common-sense knowledge base that is moving closer to the ultimate goal of being able to read and learn for itself.

The bottom-up school is more concerned with the architecture of the brain, seeing the best way forward in constructing a computer or system whose physical architecture mimics that of the human brain. The hope is that such a machine will develop intelligence the way a human child does: by observing its surroundings and then using those observations as building blocks for sustained learning.

The bottom-up school falls into two camps, one focusing on software emulation of the brain and the other focusing on hardware mimicry of neural circuits.

While the software group creates software implementations that imitate a rudimentary brain, the hardware group actually constructs physical devices that model the brain's neuronal structure.

Most commonly manifested in neural networks, the work of the bottom-up school has had success in medical diagnosis, stock-market prediction, handwriting recognition and the identification of signals in the presence of noise. In fact, commercially developed neural networks are now used routinely in the financial, scientific and industrial worlds, a testament to their success in problem-solving, albeit in limited areas.
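
For a rough flavour of what sits inside such a network, here is a minimal sketch - the data and learning rate are invented, and a commercial system would wire together many such units - of a single artificial neuron learning a simple decision rule by nudging its weights:

```python
# Minimal sketch of one artificial neuron (a perceptron) learning logical AND.
# Real neural networks connect many such units; this is only a flavour.

inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]            # the neuron should fire only when both inputs are on

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(20):               # a few passes over the examples
    for (x1, x2), target in zip(inputs, targets):
        output = 1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
        error = target - output
        weights[0] += rate * error * x1   # nudge each weight toward the right answer
        weights[1] += rate * error * x2
        bias += rate * error

for x1, x2 in inputs:
    fired = 1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
    print((x1, x2), "->", fired)
```

Handwriting recognition or signal detection works on the same principle, only with thousands of inputs and many layers of such neurons.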

Considering that the electronic circuits in a desktop computer are over a million times faster than the firing of the neurons in a human brain - a processor clock ticks billions of times a second, whereas a neuron fires at most about a thousand times a second - and operate with far superior timing and accuracy, it is difficult not to share the ambitious hopes and dreams of the proponents of strong AI. Notwithstanding the brain's billion-year head start over the computer, the exponential growth in computing power is narrowing the gap between the two.

Building on the notions of automata found in Victorian fiction, intelligent computers and robots regularly starred in films throughout the 20th century. In 1950, these notions began to move from the arena of fantasy into that of fact, with the publication of Alan Turing's 'Computing Machinery and Intelligence' in the philosophical journal Mind. This paper described what is now known as the Turing test, a way of establishing whether a computer could reasonably be said to think.

Briefly, the test involves a computer and a human volunteer, both of whom are hidden from an interrogator. Questions and answers are transmitted in an impersonal fashion - for example, using a keyboard and display screen - and the interrogator has to decide from the answers which responses have been made by the human, which by the machine. If the interrogator is unable to distinguish the computer from the human volunteer, then the computer is deemed to have passed the test.
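
The mechanics of the game are simple enough to sketch in code. In the toy version below, both respondents are invented placeholders hidden behind the same question-and-answer interface, which is the essential point of the test:

```python
import random

# Toy sketch of the imitation game: the interrogator sees only typed answers
# and must guess which hidden respondent is the machine. The respondents here
# are invented placeholders, not real conversational programs.

def human_answer(question):
    return "I'd have to think about that one."

def machine_answer(question):
    return "I'd have to think about that one."   # a machine trying to sound human

respondents = {"A": human_answer, "B": machine_answer}

question = "What does a summer's day feel like?"
for label, respond in respondents.items():
    print(label, "answers:", respond(question))

# With nothing to go on but the text, the interrogator can only guess;
# if the guesses are no better than chance, the machine has passed the test.
print("Interrogator guesses the machine is:", random.choice(list(respondents)))
```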

Turing thought that any machine capable of winning a game like this could be regarded as actually having thoughts. Critics such as the philosopher John Searle insisted otherwise: the mere computational manipulation of symbols does not equate with thinking, because carrying out a successful algorithm does not in itself require understanding.

One of the foibles of human intelligence is free will. But if the top-downers and the bottom-uppers realise their ambitions, where does this leave the divide between animate and - albeit complex - inanimate systems?

Douglas Hofstadter in Metamagical Themas wonders whether we will be able to recognise AI systems deserving of our respect when they arrive. "When does a system or organism have the right to call itself 'I', and to be called 'you' by us?" he asks.

Roger Penrose, in The Emperor's New Mind, adds a moral dimension to the discussion. If indeed, he says, a device is a thinking, feeling, sensitive, understanding and conscious being, then our purchasing and subsequent operating of it to satisfy our needs, regardless of its own sensibilities, would be akin to maltreating a slave. But as Alan J. Perlis of Yale has said: "If you can imagine a society in which the computer-robot is the only menial, you can imagine anything."

Doesn't it seem like reverse psychology or retrograde science that the computer, which was invented by the human mind, should now be expected to become the human mind itself? Fears of superior robotic progeny apart, I believe that strong AI researchers should continue with their quest and, by the time their successors crack the nut, future generations will be more able to cope with the disadvantages while assimilating the benefits into their lives.

Berni Dwan is a freelance technology writer.