Robot Toddler Learns to Stand by “Imagining” How to Do It
Instead of being programmed, a robot uses brain-inspired algorithms to “imagine” doing tasks before trying them in the real world.
Darwin, the toddler-size robot, lives in the lab of Pieter Abbeel, an associate professor at the University of California, Berkeley. When I saw the robot a few weeks ago, it was suspended from a camera tripod by a piece of rope, looking a bit tragic. A little while earlier, Darwin had been wriggling around on the end of the rope, trying to work out how best to move its limbs in order to stand up without falling over.
Darwin’s motions are controlled by several simulated neural networks—algorithms that mimic the way learning happens in a biological brain as the connections between neurons strengthen and weaken over time in response to input. The approach relies on especially complex neural networks, known as deep-learning networks, which have many layers of simulated neurons.
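To make that concrete, here is a minimal sketch of such a network, written in Python with the PyTorch library purely for illustration. The layer sizes, and the framing of inputs as joint sensor readings and outputs as motor commands, are assumptions for the sake of the example, not details taken from the Berkeley work.

import torch
import torch.nn as nn

# A deep-learning network is several layers of simulated neurons stacked
# together; training strengthens or weakens the connection weights between
# them. All sizes below are illustrative assumptions.
policy = nn.Sequential(
    nn.Linear(24, 64),  # 24 sensor inputs (e.g., joint angles) -> 64 neurons
    nn.Tanh(),
    nn.Linear(64, 64),  # a second hidden layer of 64 neurons
    nn.Tanh(),
    nn.Linear(64, 12),  # 12 outputs, one command per motor
)

state = torch.randn(1, 24)   # a stand-in for one sensor reading
action = policy(state)       # the network's proposed motor commands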
For the robot to learn how to stand and twist its body, for example, it first runs a series of simulations to train a high-level deep-learning network to perform the task—something the researchers compare to an “imaginary process.” This provides overall guidance for the robot, while a second deep-learning network is trained to carry out the task while responding to the dynamics of the robot’s joints and the complexity of the real environment. The second network is required because when the first network tries, for example, to move a leg, the friction experienced at the point of contact with the ground may throw it off completely, causing the robot to fall.
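A rough sketch of that two-network division of labor might look like the following. This is an illustration of the idea as described above, not the researchers’ implementation; the network shapes and the simple additive correction are assumptions made for the example.

import torch
import torch.nn as nn

# High-level network: trained entirely in simulation, it "imagines" how the
# task should unfold and proposes target motor commands.
high_level = nn.Sequential(
    nn.Linear(24, 64), nn.Tanh(),
    nn.Linear(64, 12),
)

# Low-level network: trained to follow those targets while responding to the
# real dynamics of the joints, e.g. unexpected friction where a foot meets
# the ground.
low_level = nn.Sequential(
    nn.Linear(24 + 12, 64), nn.Tanh(),
    nn.Linear(64, 12),
)

def act(state):
    # One control step: plan in the abstract, then correct for reality.
    plan = high_level(state)
    correction = low_level(torch.cat([state, plan], dim=-1))
    return plan + correction   # command actually sent to the motors

Adding the low-level correction to the high-level plan is just one simple way the two outputs could be combined; the actual coupling in the study may differ.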
[Video: Darwin the robot performs various actions after virtual and real-world learning.]
“It practices in simulation for about an hour,” says Igor Mordatch, a postdoctoral researcher at UC Berkeley who carried out the study. “Then at runtime it’s learning on the fly how not to slip.”
Abbeel’s group has previously shown how deep learning can enable a robot to perform a task, such as passing a toy building block through a shaped hole, through a process of trial and error. The new approach is important because it may not always be possible for a robot to indulge in an extensive period of real-world testing. And simulations lack the complexities found in the real world, discrepancies that can cascade into catastrophic failure when a robot acts on them.
“We’re trying to be able to deal with more variability,” says Abbeel. “Just even a little variability beyond what it was designed for makes it really hard to make it work.”
The new technique could prove useful for robots working in all sorts of real environments, but it might be especially valuable for more graceful legged locomotion. The current approach is to design an algorithm that takes into account the dynamics of a process such as walking or running (see “The Robots Walking This Way”). But such models can struggle to deal with variation in the real world, as many of the humanoid robots in the DARPA Robotics Challenge demonstrated by falling over when walking on sand, or when unbalancing themselves by reaching out to grasp something (see “Why Robots, and Humans, Struggled with DARPA’s Challenge”). “It was a bit of a reality check,” Abbeel says. “That’s what happens in the real world.”
Dieter Fox, a professor in the computer science and engineering department at the University of Washington who specializes in robot perception and control, says neural-network learning has huge potential in robotics. “I’m very excited about this whole research direction,” Fox says. “The problem is always if you want to act in the real world. Models are imperfect. Where machine learning, and especially deep learning, comes in is learning from the real-world interactions of the system.”