AI helps chatbots to get better at chat, which helps us all

Next stage of development will be chatbots designed to be proactive, experts say

The Adapt Centre’s conversational robot heads. These are used to research artificial conversational agents using AI for a number of settings including healthcare and customer service. Photograph: Paul Sharp/Sharppix

Anyone who has “talked” to an online chatbot might have ended up feeling frustrated they weren’t listened to or helped. Yet, chatbots have upped their communications game with advances in artificial intelligence (AI) and are now set to improve our lives in many ways.

Conversational AI is the technology that supports chatbots – which are also known as virtual agents. Their main role remains to respond to online queries from FAQ lists and they are designed to mimic human speech to put users at ease.

Benjamin Cowan, co-founder and co-director of HCI@UCD, a research group investigating human-computer interactions, based at University College Dublin, believes it has been a mistake to design chatbots that give users the impression they are human.

“I think that is wrong-headed,” Cowan says. “We know what computers are good at, and we know what humans are good at. We can make human-computer interactions much easier when we remove the desire to make a chatbot seem human in these dialogues.


“They are, to some extent, trying to fool people by trying to present chatbots as human so that people feel more natural about interacting with them. Much of our work indicates that this might be a bad thing to do because it makes people assume that these systems are far more intelligent than they are.”

There is a danger, he says, that people will not want to use chatbots and devices using conversational AI if they feel they are being tricked into believing they are talking to a human.

Take the example of Google Duplex, which is billed as a completely automated system that can make calls for you, but with a natural-sounding human voice, rather than a robotic one. The idea was to help people book appointments and reservations.

“What happened was that someone might use Google Duplex to book a hair appointment, but the person at the other end of the call might not know they were dealing with a computer system,” says Cowan. “To me, it is completely disingenuous to do that because you must let people know what they are interacting with.”

Yet, despite the ethical issues, conversational AI now resides inside many successful products we have become familiar with, such as Amazon’s virtual assistant Alexa, Apple’s Siri and Google Home.

Products currently using conversational AI are, however, all based on what researchers call weak AI. This is where a narrow, specific range of tasks is performed for customers on demand, such as playing a piece of music, switching off a light or running a Google search.

The long-term goal is to harness strong AI in chatbots and conversational AI. This represents a huge challenge: strong AI refers to a system with something like general human intelligence, able to solve a broad range of tasks and problems, much as a human brain does.

The challenge for conversational AI researchers now is to figure out the short and medium-term goals. Cowan says that existing systems are useful in situations such as driving, where users have their hands and eyes busy and can use voice commands to control the systems.

The next step, Cowan says, will be the development of systems that don’t simply respond to commands, but are designed to be proactive and to interrupt users with information and updates as required.

“We are trying to figure out when is the right time to interrupt a busy user, and how should that be done?” says Cowan. “Should it be direct? Or done in a polite fashion? And when precisely should the interruption come? We think that’s where the future of these chatbot agents is going and we are doing major work on this.”

There are moves afoot to make conversational AI systems more intelligent and capable of debating and even arguing with users.

Project Debater

Project Debater, developed by IBM, including researchers in Ireland, is the first AI system that can debate with humans on complex topics. The company says its goal is to help people build persuasive arguments and make better, well-informed decisions.

“The system we are working on has different components,” says Yufang Hou, a researcher at IBM in Dublin, who worked on Project Debater. “One of the components is argument mining, where we look at the meaning of sentences related to a topic,” Hou says. “We get rid of duplication and extract high-quality arguments from a huge corpus, and the system learns as it goes along.”
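Hou’s description of argument mining (filtering a vast corpus for topic-relevant sentences while removing duplication) can be sketched in miniature. The toy Python below is purely illustrative and is not IBM’s pipeline: the function names, the token-overlap similarity measure and the deduplication threshold are all invented for this example.

```python
import re

# Illustrative sketch only: keep sentences that mention the topic,
# and drop sentences that are near-duplicates of ones already kept.
# Project Debater's real components are far more sophisticated.

def tokens(sentence: str) -> set:
    """Lower-case word tokens of a sentence (punctuation stripped)."""
    return set(re.findall(r"[a-z]+", sentence.lower()))

def jaccard(a: set, b: set) -> float:
    """Token-set overlap between two sentences, 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

def mine_arguments(sentences, topic_words, dedup_threshold=0.8):
    """Extract topic-relevant sentences, removing near-duplicates."""
    kept, kept_tokens = [], []
    for sentence in sentences:
        toks = tokens(sentence)
        if not toks & topic_words:
            continue  # not about the topic
        if any(jaccard(toks, t) >= dedup_threshold for t in kept_tokens):
            continue  # near-duplicate of an argument already kept
        kept.append(sentence)
        kept_tokens.append(toks)
    return kept

corpus = [
    "Electric cars cut urban air pollution.",
    "Electric cars cut urban air pollution!",   # near-duplicate
    "Charging infrastructure is still sparse.",
    "The weather was pleasant yesterday.",      # off-topic
]
print(mine_arguments(corpus, {"electric", "cars", "charging"}))
```

At Project Debater’s scale (some 400 million articles, as Hou notes below) the relevance and duplication checks are done with trained language models rather than word overlap, but the shape of the task is the same: reduce a corpus no human could read to a short list of distinct, high-quality arguments.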

“With Project Debater we have gone through a massive corpus of some 400 million articles,” says Hou, who is an expert in natural language processing. “No one can read so many articles about a specific topic. The system can help us to spot things in this massive corpus and to analyse and present the information.

“We believe our system can lead policymakers, for example, to make better, more informed decisions. The humans are still making the decisions, they are just making them with better information.

“Project Debater could, for example, help myself and my husband decide whether we should buy an electric car or not. The system can look through the massive corpus of articles and bring the pros and cons to us, and based on that we can make a decision, or perhaps change our minds.”

Paul Sweeney is co-founder of Webio, a Dublin-based company that uses conversational AI to understand customer intentions. He is also the co-founder of ConverCon, a conference that looks at the latest trends in conversational AI. He believes chatbots are set to bring huge efficiencies to many industries including healthcare.

Microsoft’s purchase in April of Nuance, a company providing conversational AI to the healthcare industry, for $19.7 billion shows, Sweeney believes, where the big tech firms think things are heading. Chatbots are getting better at transcribing voice to text, he says, which is a great help to doctors inputting their patient data.

“If you’ve accurate voice dictation, what you can do is input data three times as quickly,” Sweeney adds. “Your next point is you have to figure out what’s happening in that transcription. That’s where conversational AI comes in: it’s trying to figure out, is this a task? Is this a job? Is this a date? Is this an appointment? Is this a medicine? Is this a file request? But even if it was just simple things, like, run through all the doctor’s conversations, and just take out when’s the next appointment? That’s a burden off of them.”
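The step Sweeney describes, working out what each transcribed utterance actually is, can be illustrated with a toy classifier. Real systems use trained natural-language-understanding models; the keyword rules, labels and example transcript below are invented purely to show the idea.

```python
import re

# Illustrative sketch only: label each transcribed utterance as an
# appointment, a medicine, a task, or other. Rules are checked in order
# and the first match wins; production systems use trained NLU models.
RULES = {
    "appointment": re.compile(r"\b(appointment|see you|come back)\b", re.I),
    "medicine":    re.compile(r"\b(mg|tablet|prescribe|dose)\b", re.I),
    "task":        re.compile(r"\b(order|request|send|book)\b", re.I),
}

def label_utterance(text: str) -> str:
    """Return the first matching label for an utterance, or 'other'."""
    for label, pattern in RULES.items():
        if pattern.search(text):
            return label
    return "other"

transcript = [
    "Prescribe 200 mg ibuprofen twice daily.",
    "Come back for an appointment next Tuesday.",
    "Please send the referral letter to the clinic.",
    "The weather has been terrible lately.",
]
for line in transcript:
    print(label_utterance(line), "-", line)
```

Even this crude labelling hints at the time saving Sweeney describes: once utterances are tagged, pulling “the next appointment” out of a day’s consultations becomes a simple filter rather than a manual read-through.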

Collaborators

This next generation of chatbots is set to be collaborators with their human users, Cowan insists. They will be designed for small groups in specific settings, or even for individuals, he says. For example, systems might be designed to help people with dementia reminisce about the past, or to give updates on a patient’s emotional state.

Chatbots will be used in more and more settings, but there is no danger that human-chatbot interactions will replace human-human conversations any time soon, he says. “Social conversations I think will still be human-orientated for a long time to come.”

The technical challenges of building chatbots that could truly act as friends are huge, Cowan notes, and people may not even want this. “Huge amounts of social conversation data would be required to generate systems that would be anywhere close to a human-to-human social conversation. From our research, it’s not clear that people would even want systems that could replicate human social interactions.”

The goal for conversational AI designers, Cowan says, should be to design technology that people want to use and interact with. This means making things easier for reluctant chatbot users, who should remember that when things go wrong, it’s not their fault.

“It’s not the users’ fault,” he underlines. “It’s the system’s fault: that’s the mantra that we have.”


Seán Duke, a contributor to The Irish Times, is a science journalist