Transforming dexterous robotic grasping
This video is part of the FCAI success stories series. The series explains why fundamental research in AI is needed, and how research results create solutions to the needs of people, society, and companies.
FCAI researchers have recently developed two fast AI methods for grasping objects with multi-finger robotic hands. These methods bring us closer to the practical use of multi-finger robotic hands, especially in human-centric environments.
Multi-finger robotic hands are essential to many of the tasks we want robots to do. Currently, however, most robots grasp objects with simple parallel-jaw grippers that mimic how humans grasp objects between the thumb and the index finger.
Grasping with more human-like multi-finger robotic hands is considerably more challenging, as the robot has to control many more degrees of freedom. Because of this, most methods that address multi-finger grasping are extremely slow, often taking on the order of minutes to generate a single grasp.
New methods for grasping objects
FCAI researchers have recently developed two fast AI methods called Multi-FinGAN and DDGC for grasping objects with multi-finger robotic hands.
Multi-FinGAN is a fast generative multi-finger grasp sampling method. It synthesizes high-quality grasps for individual objects directly from camera images in about a second.
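The core idea of generative grasp sampling can be illustrated with a minimal sketch: a generator maps random noise, conditioned on features of the camera image, to grasp parameters (a hand pose plus finger joint angles). Everything below is hypothetical for illustration only; the dimensions, the single linear layer, and the function names are assumptions and do not reflect the actual Multi-FinGAN architecture or API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the real network is far larger.
LATENT_DIM = 8        # random noise vector z
IMAGE_FEAT_DIM = 16   # features extracted from the camera image
GRASP_DIM = 6 + 10    # 6-DoF hand pose + 10 finger joint angles

# Stand-in "generator": one random linear layer followed by tanh.
# In a real GAN this would be a trained deep neural network.
W = rng.normal(size=(LATENT_DIM + IMAGE_FEAT_DIM, GRASP_DIM))

def sample_grasps(image_features: np.ndarray, n: int) -> np.ndarray:
    """Draw n candidate grasps conditioned on one image's features."""
    z = rng.normal(size=(n, LATENT_DIM))                   # fresh noise per grasp
    cond = np.tile(image_features, (n, 1))                 # repeat the conditioning
    return np.tanh(np.concatenate([z, cond], axis=1) @ W)  # bounded grasp parameters

feats = rng.normal(size=IMAGE_FEAT_DIM)
grasps = sample_grasps(feats, n=32)
print(grasps.shape)  # (32, 16): 32 candidate grasps in one forward pass
```

Because every grasp is just one forward pass through the generator, many diverse candidates can be sampled in well under a second, which is what makes this family of methods fast compared to optimization-based grasp planners.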
“We experimentally validate and benchmark Multi-FinGAN against standard grasp-sampling methods in simulation and on a real Franka Emika Panda robot. All experimental results using our method show consistent improvements in grasp quality and grasp success rate. Also, the new approach is up to 20-30 times faster than the baseline”, says Jens Lundell from the Intelligent Robotics group.
DDGC is, like Multi-FinGAN, a generative multi-finger grasp sampling method, but with one crucial extension: it can grasp objects in clutter.
“Grasping in clutter is notoriously difficult because the robot now has to plan a successful grasp of one object while avoiding all other objects. DDGC achieves this by encoding information about the complete scene and the object to grasp. We experimentally show that DDGC synthesizes higher-quality grasps and removes more clutter than several baselines, including Multi-FinGAN. Moreover, DDGC is also very fast, planning multiple grasps in less than a second”, says Jens Lundell.
Synthetic data for training robots
Like most other AI methods, Multi-FinGAN and DDGC require a lot of training data to work well. However, collecting such data with real robots is highly time-consuming and wears out the robotic hardware. Intelligent Robotics researchers have circumvented this issue by using completely synthetic data collected from simulation.
“Now, we can easily generate as much training data as we possibly want. Most importantly, we have shown that our methods, despite being trained only on synthetic data, generalize well to real-world robotic grasping”, says Lundell.
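The appeal of synthetic data is that labeled examples can be generated programmatically at any scale. The toy sketch below illustrates the idea only; the success rule, thresholds, and function name are invented for illustration, whereas the actual work uses a proper grasp simulator that evaluates contacts and physics.

```python
import numpy as np

rng = np.random.default_rng(1)

def synthesize_dataset(n_samples: int):
    """Generate toy (grasp, label) pairs entirely in simulation.

    Hypothetical success rule: a grasp succeeds if its center lies
    within 3 cm of the object center. A real simulator would instead
    evaluate finger contacts and grasp stability.
    """
    object_center = np.zeros(3)
    grasp_centers = rng.uniform(-0.05, 0.05, size=(n_samples, 3))  # meters
    dist = np.linalg.norm(grasp_centers - object_center, axis=1)
    labels = (dist < 0.03).astype(int)  # 1 = successful grasp, 0 = failure
    return grasp_centers, labels

grasps, labels = synthesize_dataset(1000)
print(grasps.shape, labels.mean())  # dataset size and fraction of successes
```

Scaling `n_samples` up is essentially free, which is exactly what makes simulation attractive compared to running thousands of grasps on a physical robot.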
“We believe that our methods bring us closer to the practical use of multi-finger robotic hands, especially in human-centric environments.”
The code for Multi-FinGAN and DDGC is publicly available. Feel free to use it:
https://github.com/aalto-intelligent-robotics/Multi-FinGAN
https://github.com/aalto-intelligent-robotics/DDGC
The Intelligent Robotics group
The Intelligent Robotics group, led by Prof. Ville Kyrki, is part of FCAI. The group focuses on making robots cope with the complexity of real-world unstructured environments, ranging from homes to public spaces and traffic.
The research combines robotics, machine learning, computer vision, control engineering, and human-robot interaction to develop methods that allow robots to operate in environments built for humans. Through previous and current European projects, they have an extensive collaboration network with partners around Europe.
Pre-print version of Multi-FinGAN: https://arxiv.org/abs/2012.09696
Pre-print version of DDGC: https://arxiv.org/abs/2103.04783