This 5-Fingered Robot Hand Learns to Get a Grip on its Own

This five-fingered robot hand developed by University of Washington computer science and engineering researchers can learn how to perform dexterous manipulation -- like spinning a tube full of coffee beans -- on its own, rather than having humans program its actions. Credit: University of Washington

University of Washington computer science and engineering researchers have built a robot hand that can not only perform dexterous manipulation but also learn from its own experience. Robots today can perform space missions, solve a Rubik’s cube, sort hospital medication and even make pancakes. But most can’t manage the simple act of grasping a pencil and spinning it around to get a solid grip.

Intricate tasks that require dexterous in-hand manipulation – rolling, pivoting, bending, sensing friction and other things humans do effortlessly with our hands – have proved notoriously difficult for robots. Vikash Kumar, a UW doctoral student, said: “A lot of robots today have pretty capable arms but the hand is as simple as a suction cup or maybe a claw or a gripper.”

UW computer science and engineering doctoral student Vikash Kumar custom built this robot hand, which has 40 tendons, 24 joints and more than 130 sensors. Credit: University of Washington

By contrast, the UW team spent years custom building one of the most highly capable five-fingered robot hands in the world. Then they developed an accurate simulation model that enables a computer to analyze movements in real time. In their latest demonstration, they apply the model to the hardware and real-world tasks like rotating an elongated object. With each attempt, the robot hand gets progressively more adept at spinning the tube, thanks to machine learning algorithms that help it model both the basic physics involved and plan which actions it should take.
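The model-then-plan idea described above can be sketched in miniature: fit a dynamics model to observed trials, then use the fitted model to choose an action. This is a minimal illustrative example using a made-up one-dimensional linear system; the function names, the toy dynamics and the one-step planner are assumptions for illustration, not the UW team's actual algorithms.

```python
# Sketch of "model the physics, then plan actions": fit a simple
# dynamics model next_x ≈ a*x + b*u from observed transitions, then
# pick the action the model predicts will reach a target state.
# The 1-D linear dynamics and all names here are illustrative only.

def fit_linear_dynamics(transitions):
    """Least-squares fit of next_x ≈ a*x + b*u (pure Python, normal equations)."""
    sxx = sxu = suu = sxy = suy = 0.0
    for x, u, next_x in transitions:
        sxx += x * x; sxu += x * u; suu += u * u
        sxy += x * next_x; suy += u * next_x
    det = sxx * suu - sxu * sxu
    a = (sxy * suu - suy * sxu) / det
    b = (suy * sxx - sxy * sxu) / det
    return a, b

def plan_action(a, b, x, target):
    """One-step greedy plan: choose u so the model predicts the target state."""
    return (target - a * x) / b

# Toy "hardware": true dynamics next_x = 0.9*x + 0.5*u, unknown to the learner.
true_step = lambda x, u: 0.9 * x + 0.5 * u
data = [(x, u, true_step(x, u)) for x, u in [(1, 0), (0, 1), (2, -1), (1, 1)]]

a, b = fit_linear_dynamics(data)          # recovers a ≈ 0.9, b ≈ 0.5
u = plan_action(a, b, x=1.0, target=0.0)  # action that drives the state to 0
print(round(a, 3), round(b, 3), round(u, 3))
```

On the real robot the dynamics are high-dimensional and nonlinear, so the actual models and planners are far richer, but the division of labor — learn a model from data, then plan against it — is the same.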

“Usually people look at a motion and try to determine what exactly needs to happen – the pinky needs to move that way, so we’ll put some rules in and try it and if something doesn’t work, oh the middle finger moved too much and the pen tilted, so we’ll try another rule,” said lab director Emo Todorov, UW associate professor of computer science and engineering and of applied mathematics.

The research team from the UW Movement Control Laboratory includes (left to right) Emo Todorov, associate professor of computer science and engineering and of applied mathematics; Vikash Kumar, doctoral student in computer science and engineering; and Sergey Levine, assistant professor of computer science and engineering. Credit: University of Washington

Building a dexterous, five-fingered robot hand poses challenges in both design and control. The first challenge involved building a mechanical hand with enough speed, strength, responsiveness and flexibility to mimic basic behaviors of a human hand.

The UW’s dexterous robot hand uses a Shadow Hand skeleton actuated with a custom pneumatic system and can move faster than a human hand. It is too expensive ($300K) for routine commercial or industrial use, but it allows the researchers to push core technologies and test innovative control strategies. The team first developed algorithms that allowed a computer to model highly complex five-fingered behaviors and plan movements to achieve different outcomes – like typing on a keyboard or dropping and catching a stick – in simulation.

Most recently, they transferred the models to work on the actual five-fingered hand hardware, which never behaves exactly like the simulated scenario. As the robot hand performs different tasks, the system collects data from various sensors and motion capture cameras and employs machine learning algorithms to continually refine and develop more realistic models. So far, the learning is local to the hardware system – i.e., the hand can continue to improve at a discrete task that involves manipulating the same object in roughly the same way. Next steps include beginning to demonstrate global learning – meaning the hand could figure out how to manipulate an unfamiliar object or a new scenario it hasn’t encountered before.

Source: http://www.washington.edu/news/2016/05/09/this-five-fingered-robot-hand-learns-to-get-a-grip-on-its-own/
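The trial-by-trial "local learning" loop described above — attempt a task, record sensor data, refit the model, try again — can be sketched with a toy scalar system. The gain model, the noise level and all names below are illustrative assumptions, not the UW system's actual learning code.

```python
import random

# Sketch of local learning: each trial on the (toy) hardware adds one
# noisy observation, and the model is refit after every attempt, so the
# estimate of the system's behavior improves with experience.
# The scalar gain model next_x = g*u is an illustrative stand-in.

random.seed(0)
TRUE_GAIN = 0.7            # hidden "hardware" response, unknown to the learner

num = den = 0.0            # running least-squares accumulators
g_hat = 0.0                # current model of the gain
for trial in range(50):
    u = random.uniform(-1.0, 1.0)                      # try an action
    next_x = TRUE_GAIN * u + random.gauss(0.0, 0.05)   # noisy sensor reading
    num += u * next_x
    den += u * u
    g_hat = num / den      # refined model after each trial

print(round(g_hat, 2))     # close to the true gain 0.7
```

The estimate stays tied to this one task and operating regime — the same sense in which the article says the hand's learning is "local". Global learning would require models that transfer to objects and motions the hand has never tried.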