Artificial intelligence “still isn’t very smart” so machines won’t be taking over any time soon, according to a leading expert on cognitive robotics.
In spite of major advances, AI still lacks the common sense of a child needed to understand the everyday world, Prof Murray Shanahan told the Schrödinger at 75 conference hosted by Trinity College Dublin.
AI also fails to understand simple abstract concepts, such as a room or a place, and shows poor ability in trying out new actions or ideas – “It is not good at being innovative,” he said.
Prof Shanahan, who is based at Imperial College London, said the big goal of his field was to build "general artificial intelligence" (GAI), a more human-like intelligence that could adapt to different situations and different tasks. His publications span AI, robotics, logic, dynamical systems, computational neuroscience and the philosophy of mind.
Elon Musk
People such as Elon Musk, of Tesla and SpaceX fame, and the late physicist Stephen Hawking have expressed deep concern that the cognitive ability of AI could endanger humanity, he noted.
Why build a GAI then? Current AI systems lacked true understanding of the world, Prof Shanahan said. “We have AI medical diagnosis [systems] that don’t know what a person is. It lacks real understanding of what is going on. That manifests itself in flaws and mistakes,” he said.
“Autonomous vehicles don’t really know what a car is. General intelligence goes hand in hand with true understanding and that is the motive for building general intelligence,” he explained.
For anyone scared by the concept of a machine that is more human-like, Prof Shanahan said a number of conceptual breakthroughs would first be needed. “If you are worried about GAI safety, you don’t need to panic just yet,” he told the audience. “It is impossible to know when GAI might be achievable.”