A robot dog walks through a forest, navigating dense undergrowth, fallen logs and ever-changing terrain. It may sound like a cute kids’ toy, but this latest output from Duke University’s research represents a significant advance towards artificial general intelligence (AGI).
The technology fuses vision, vibration and touch to enable robots to sense what surrounds them in complex outdoor environments. It aims to emulate the way we humans employ all our senses to take in and process millions of data points about our surroundings, allowing us to enjoy a relaxing walk in the woods while our brains work hard in the background.
The project, WildFusion, is described as a “novel approach for 3D scene reconstruction in unstructured, in-the-wild environments using multimodal implicit neural representations”.
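For readers curious about the mechanics: an implicit neural representation is, roughly, a network that is given a point in space and predicts what is there; WildFusion’s twist is to condition that prediction on several senses at once. The sketch below is purely illustrative, not Duke’s code, and every module name and dimension in it is invented:

```python
# Illustrative sketch only, not WildFusion's actual code. It shows the
# general shape of a multimodal implicit neural representation: features
# from several sensors are fused into one conditioning vector, and a
# decoder maps a 3D coordinate plus that vector to a scene property
# (here, occupancy). All names and dimensions are hypothetical.
import torch
import torch.nn as nn

class MultimodalImplicitField(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        # One small encoder per modality: vision, vibration, touch.
        self.encoders = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(128, feat_dim), nn.ReLU())
            for m in ("vision", "vibration", "touch")
        })
        # Decoder: 3D point + fused feature -> occupancy in [0, 1].
        self.decoder = nn.Sequential(
            nn.Linear(3 + feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, xyz, sensor_inputs):
        # Fuse the modalities by averaging their encoded features.
        fused = torch.stack(
            [self.encoders[m](x) for m, x in sensor_inputs.items()]
        ).mean(dim=0)
        return self.decoder(torch.cat([xyz, fused], dim=-1))

field = MultimodalImplicitField()
points = torch.rand(8, 3)  # query points in the scene
sensors = {m: torch.rand(8, 128) for m in ("vision", "vibration", "touch")}
occupancy = field(points, sensors)  # shape (8, 1)
```

The point of the design is that no single sense has to be reliable on its own: when foliage blocks the camera, vibration and touch still inform the same underlying map.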

According to Boyuan Chen, who leads the General Robotics Lab at Duke University, it opens a new chapter in robotic navigation and 3D mapping. “It helps robots to operate more confidently in unstructured, unpredictable environments like forests, disaster zones and off-road terrain,” he says. Search and rescue applications spring to mind immediately – such as in the recent devastating Texas floods.
In labs around the world, teams are working to develop AGI that can match or surpass human capabilities at cognitive tasks. In the dystopian scenario, this is the tech fever dream in which we create an autonomous intelligence that renders the human race obsolete.
This can sound far-fetched, and it is hard to see how we would get to that point. A collaborative research project between the University of Surrey and the University of Hamburg is a good, grounded-in-reality example of where we are on that journey. Its use of new robotic simulations instead of early-stage human trials makes the research faster and more scalable.
The method, presented at the International Conference on Robotics and Automation in Atlanta in May, allows researchers to test whether a robot is paying attention to the right things without needing real-time human supervision.
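The trick, broadly speaking, is that a simulation knows at every moment what the robot ought to be attending to, so its attention can be scored automatically rather than by a human observer. A purely illustrative sketch, not the study’s actual method, with all names and values invented:

```python
# Illustrative sketch only: scoring a robot's attention in simulation,
# where the ground-truth target of attention is known at every step.
# The Step fields and the example episode are hypothetical.
from dataclasses import dataclass

@dataclass
class Step:
    attended_object: str   # what the robot actually attended to
    expected_object: str   # what the simulation says it should attend to

def attention_accuracy(episode: list[Step]) -> float:
    """Fraction of steps where the robot attended to the right thing."""
    hits = sum(s.attended_object == s.expected_object for s in episode)
    return hits / len(episode)

episode = [
    Step("speaker_face", "speaker_face"),
    Step("background_tv", "speaker_face"),
    Step("pointed_object", "pointed_object"),
]
print(f"attention accuracy: {attention_accuracy(episode):.2f}")  # 0.67
```

Because the check is automatic, thousands of such episodes can be run overnight, which is what makes the simulation-first approach faster and more scalable than human trials.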
Ultimately, the work in social robotics will make robots “better at understanding and responding to people”, according to Dr Di Fu, co-lead of the study and a lecturer in cognitive neuroscience at the University of Surrey.
It’s just a short leap of the imagination from machines that act without human supervision to a scenario where the machines don’t need us at all. But while the causes for concern range from job displacement to an existential threat to the human race, the potential benefits are equally dramatic: AGI could advance climate modelling, optimise energy usage and deliver medical breakthroughs by generating and testing hypotheses far faster than humans can.
Yet despite all this excitement and anxiety around AGI, there’s still a lot of uncertainty about what it actually is – and how close we are to creating it.
“It depends on my mood,” says Mark Kelly, founder of AI Ireland, when asked how he feels about it. “If you listen to Geoffrey Hinton” – the computer scientist who has been described as the “godfather” of AI – “he’s pretty much saying it’s sentient and it’s ready to go, and it’s already tricking us.” Kelly himself is more positive. “I don’t think it’s sentient. I don’t think it’s ever going to create behaviour similar to ourselves or think like us whatsoever.”
Still, he acknowledges there are moments when it feels eerily close. “The o3 model is as close to AGI as I’ve ever seen in terms of how it talks to you, engages you, the reasoning it’s got,” he says, referencing a series of reasoning models developed by OpenAI. But scale and consistency are key, he says, citing the example of Anthropic’s Project Vend, where the company let its Claude AI run a vending machine in its office – and it went into spectacular meltdown.
The key takeaway: if AI can’t run a vending machine, it’s probably not coming for your job any time soon either.
What we are seeing already, however, is the rise of AI-native workers – those who use AI tools fluently to boost productivity and unlock new ways of working. “People are getting promoted being AI-native and skilled sooner than people who are not,” Kelly notes. “It could be as significant as someone working a three-day week versus someone who’s working a five-day week, because those people have got AI-enabled skills.”
For now, AGI may still be a vision of the future – but its shadows are already shaping the present.