Rapid improvements in artificial intelligence (AI) could mean humans would not need to be “in the loop” to check the accuracy of the technology’s work within two years, a UK academic has said.
Speaking at an event in Dublin, Dr Alastair Moore, a University College London academic who specialises in AI, said the recent pace of developments in the technology had been “nuts”.
AI models, such as the text generator ChatGPT, had improved in performance by 50 to 70 per cent over a six-month period, he said.
“We don’t understand how it will affect the larger economy ... No one understands any of this stuff,” he said.
“At the moment, anything that the model does is going to have to be tightly wedded into a human process, with a human in the loop,” Dr Moore said.
However, he said that, at the rate the technology was improving, this might only be necessary for “maybe 24 months”.
Previously, the assumption had been that society would most likely develop machines with “human-level intelligence” by 2050, not 2023, he said.
Any predictions by experts on where AI would be in the future should be taken with a “pinch of salt”, he said.
“You’re living in a very strange part of human history ... I’m still trying to get my head around it.”
The level of improvement in AI tech in a matter of months had taken “everyone by surprise”, he said.
“You now have machines that are going to enter the economy with an IQ of about 100-120.”
While work produced by AI was not at the level of human experts in a field, it was “mostly right, most of the time”. It would give companies “a whole load of capacity and capabilities” that they would not have had before, said Dr Moore.
Michael Doran, a senior executive with legal consultants Johnson Hana, said work that was currently being done by trainee solicitors or junior members of staff in law firms could soon be done by AI.
Companies using AI to carry out tasks previously done by staff would have to weigh up what was an acceptable level of risk for “a degree of mistakes” to be made as a result, he said.
The tech industry was “one of the least diverse industries in the world”, which meant AI would have inherent biases “built into the models” behind the technology, he said. As a result, it would need a human to “monitor” the work of AI to ensure quality control.
The pair were speaking on a panel at the launch of the Wild Atlantic Law festival, which is set to take place in Ennistymon, Co Clare, in May 2024.