When applying imitation learning to robotic manipulation, questions arise such as: what needs to be learned, and how should data be collected? As systems grow to involve multiple deep neural networks, these questions can become overwhelming. At the Deep Learning for Robotics Summit in San Francisco this June 28 - 29, Michael Laskey, AI and Robotics Researcher at UC Berkeley, will present his current work and discuss the challenges of training various learned components and how to ensure errors do not compound in sequential tasks.
Michael’s current research focuses on how to leverage deep learning to enable robots to manipulate objects in unstructured environments. While deep learning has made big breakthroughs in natural language and vision, there are still a lot of challenges in applying it to robotics. One of the key questions that Michael’s research focuses on is how to sample training data to efficiently obtain robust manipulation policies.
Applying imitation learning to robotics is interesting from the perspective of robotic manipulation, as ‘we normally have to deal with high-dimensional images of the world and don’t have exact models of how the world behaves. The cool thing about imitation learning is that it can operate under these assumptions given a supervisor who can perform the task.’ This means that it can offer a solution for a wide range of manipulation tasks, whereas in traditional robotics each task would have been solved with a different algorithm. ‘Imitation learning is still not always easy to get to work, but the hope is that research in this area can make big breakthroughs.’
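To make the idea concrete, the simplest form of imitation learning is behavioral cloning: roll out the supervisor, record the states it visits and the actions it takes, and fit a policy to that data with ordinary supervised learning. The sketch below is a deliberately tiny, self-contained illustration on a hypothetical 1-D reaching task (the `supervisor`, the dynamics, and the linear policy are all assumptions for illustration, not anything from Michael’s work):

```python
import random

def supervisor(state):
    # Hypothetical expert for a 1-D reaching task: drive the state toward 0.
    return -0.5 * state

def collect_demonstrations(n_episodes=20, horizon=10, seed=0):
    """Roll out the supervisor and record (state, action) pairs."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_episodes):
        state = rng.uniform(-1.0, 1.0)
        for _ in range(horizon):
            action = supervisor(state)
            data.append((state, action))
            state = state + action  # toy 1-D dynamics: state moves by the action
    return data

def fit_linear_policy(data):
    """Closed-form least squares for a linear policy: action = k * state."""
    num = sum(s * a for s, a in data)
    den = sum(s * s for s, _ in data)
    return num / den

data = collect_demonstrations()
k = fit_linear_policy(data)  # recovers the supervisor's gain of -0.5
```

In a real manipulation setting the linear fit would be replaced by a deep network over images, but the structure is the same: one generic learning algorithm, driven entirely by the supervisor’s demonstrations, rather than a task-specific controller.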
Michael explained that ‘a lot of the challenges in robot learning revolve around data collection. Unlike labelling images for classification, on a robot the data needs to be collected through physical interactions with the world. For complex tasks, such as retrieving objects from a cupboard, this usually means close supervision of the robot is needed to ensure it doesn’t make significant errors. This supervision can become quite tedious. So the big challenge is how to use fewer demonstrations and get the same performance.’
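One standard remedy for compounding errors in sequential tasks, which the talk’s theme touches on, is to let the learner drive and have the supervisor relabel the states the learner actually visits (the idea behind DAgger-style on-policy data aggregation). The toy sketch below reuses the same hypothetical 1-D task; it is an illustration of the general technique, not of Michael’s specific method:

```python
import random

def supervisor(state):
    # Hypothetical expert for a 1-D reaching task: drive the state toward 0.
    return -0.5 * state

def rollout(policy, rng, horizon=10):
    """Roll out `policy` and return the states it actually visits."""
    state = rng.uniform(-1.0, 1.0)
    states = []
    for _ in range(horizon):
        states.append(state)
        state = state + policy(state)  # toy 1-D dynamics
    return states

def fit(data):
    # Least squares for a linear policy: action = k * state.
    num = sum(s * a for s, a in data)
    den = sum(s * s for s, _ in data)
    return num / den if den else 0.0

rng = random.Random(1)
data = []
k = 0.0  # start with an untrained policy that does nothing
for _ in range(5):  # aggregation iterations
    # Visit states under the *learner's* current policy, but label them
    # with the supervisor's actions, then aggregate and refit.
    for s in rollout(lambda x: k * x, rng):
        data.append((s, supervisor(s)))
    k = fit(data)
```

Because training states come from the learner’s own rollouts, the dataset covers exactly the situations where the learner drifts, which is what keeps small mistakes from snowballing; the cost is the tedious supervision Michael describes, since the expert must keep labeling new states.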
Currently, there are many applications for teaching robots how to better manipulate the world, and recently, Michael’s work taught a robot to pick up objects in the home and put them into targeted bins. Whilst this may seem trivial on the surface, when robots can perform home tasks such as bed making or decluttering, the long-term impact is that living spaces can become cleaner and healthier with less effort. This can hopefully improve quality of life for senior citizens and people with disabilities.
‘Robots that can actually provide more assistance and care to elderly and disabled people are very exciting to me. However, this technology won’t be around for quite some time. More immediate industry impact will be in areas like warehouse order fulfillment. The ability of a robot to pick up unseen objects has improved dramatically in the past five years, and we will probably see this implemented on a large scale within the next five years.’