A robot that can perceive its body has self-awareness, say researchers
The team claims to have given a robot awareness of its position in physical space, but others are skeptical
July 13, 2022
A robot can create a model of itself to plan how to move and achieve a goal – something its developers say makes it self-aware, although others disagree.
Robots are typically trained to perform a task, often in simulation. Once shown what to do, they can mimic it. But they do so without thinking, perhaps relying on sensors to reduce the risk of collision rather than on an understanding of why they are performing the task or a genuine awareness of where they are in physical space. This means they often make mistakes – hitting an obstacle with an arm, for example – that humans would avoid by compensating for the changes.
“It’s a very essential ability of humans that we normally take for granted,” says Boyuan Chen at Duke University, North Carolina.
“I’ve been working for some time to try to make machines understand what they are, not by being programmed to assemble a car or a vacuum cleaner, but to think for themselves,” says co-author Hod Lipson at Columbia University, New York.
Lipson, Chen and their colleagues attempted to do this by placing a robot arm in a lab, surrounded by four cameras at ground level and one camera above. The cameras sent video images back to a deep neural network, a form of AI, connected to the robot, which monitored the arm’s movements in space.
For 3 hours, the robot squirmed randomly while the neural network received information about the mechanical movement of the arm and observed, through the cameras, where it actually moved in space. This generated 7888 data points, and the team generated another 10,000 through a simulation of the robot in a virtual version of its environment. To test how well the AI had learned to predict the location of the robot arm in space, it generated a cloud-like graph showing where it “thought” the arm should be found as it moved. It was accurate to within 1 per cent, meaning that if the workspace was 1 meter wide, the system correctly estimated the arm’s position to within 1 centimeter.
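The idea of learning a self-model from random movement can be sketched in miniature. The toy example below uses a two-link planar arm and fits a linear regression on trigonometric joint features, standing in for the deep network and camera images used in the actual study; the arm geometry, feature choice and data sizes are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-link planar arm (link lengths in meters) -- a toy stand-in for the
# robot arm in the study, which was observed by five cameras instead.
L1, L2 = 0.5, 0.5

def forward_kinematics(q):
    """Ground-truth end-effector position for joint angles q, shape (N, 2)."""
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

# "Motor babbling": random joint configurations, loosely mirroring the
# robot's 3 hours of random squirming described above.
q_train = rng.uniform(-np.pi, np.pi, size=(8000, 2))
p_train = forward_kinematics(q_train)

# Self-model: linear regression on trigonometric features of the joint
# angles (the study used a deep neural network on camera images instead).
def features(q):
    return np.column_stack([np.cos(q[:, 0]), np.sin(q[:, 0]),
                            np.cos(q.sum(axis=1)), np.sin(q.sum(axis=1))])

W, *_ = np.linalg.lstsq(features(q_train), p_train, rcond=None)

# Evaluate: mean prediction error as a fraction of the workspace width,
# the same kind of figure as the paper's "accurate to within 1 per cent".
q_test = rng.uniform(-np.pi, np.pi, size=(2000, 2))
err = np.linalg.norm(features(q_test) @ W - forward_kinematics(q_test), axis=1)
workspace_width = 2 * (L1 + L2)  # the arm reaches 1 m in any direction
print(f"mean error: {err.mean() / workspace_width:.2%} of workspace width")
```

Because the toy features exactly span the arm’s kinematics, this model is near-perfect; the point is only to show the loop of random movement, observation and position prediction.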
If the neural network is considered part of the robot itself, it suggests that the robot has the ability to determine where it is physically at all times.
“For me, this is the first time in the history of robotics that a robot has been able to create a mental model of itself,” Lipson says. “It’s a small step, but it’s a sign of things to come.”
In their research paper, the researchers describe their robotic system as being “3D self-aware” when it comes to planning an action. Lipson thinks a self-aware robot in a more general, human sense is 20 to 30 years away. Chen says that fuller self-awareness will take scientists a long time to achieve. “I wouldn’t say that the robot is already [fully] self-aware,” he says.
Others are more cautious — and potentially skeptical — of the paper’s claims about even 3D self-awareness. “It is possible that further research will lead to useful applications based on this method, but not self-awareness,” says Andrew Hundt at the Georgia Institute of Technology. “The computer simply associates form and motion patterns that come in the form of a moving robot arm.”
David Cameron at the University of Sheffield, UK, points out that robots can already follow a specified path to reach a goal without any self-perception. “The robot modeling its path to the goal is a key first step in creating something resembling self-perception,” he says.
However, it is unclear from the information published so far by Lipson, Chen and their colleagues whether this self-perception would persist if the neural network-equipped robot were moved to new locations and had to constantly “learn” to adjust its movements to compensate for new obstacles. “A robot continuously modeling itself, along with movement, would be the next big step towards a robot with self-perception,” he says.
Journal reference: Science Robotics, DOI: 10.1126/scirobotics.abn1944