Supersizing and Empowering Robot Learning
Most current approaches in robotics either learn from small amounts of data (a few hundred examples) or use simulation to scale up learning. However, both simulation and real-world lab data suffer from a fundamental lack of diversity. In this talk, I will focus on how we can scale up and empower robot learning through one term: DIVERSITY.
First, I will talk about how we can diversify environments by moving physical robots from the lab into homes. We have built a low-cost (3K USD) mobile manipulator and used it to collect data in 10 different homes, demonstrating the power of data gathered in diverse environments. Next, I will talk about how we can diversify tasks. Specifically, I will introduce our new dataset of around 10K kinesthetic trajectories spanning different tasks. Finally, I will talk about the diversification of hardware. Current learning algorithms are hardware-specific and hence do not generalize to new hardware. I will describe how we can learn policies that take hardware properties as input and predict actions.
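To make the last idea concrete, here is a minimal sketch (not the actual method from the talk) of a hardware-conditioned policy: a hardware descriptor (e.g., link lengths and joint limits) is concatenated with the state observation, so a single set of weights can produce actions for different robot bodies. All function names and dimensions below are hypothetical.

```python
import numpy as np

def init_policy(obs_dim, hw_dim, hidden_dim, act_dim, seed=0):
    # Simple two-layer MLP; random weights stand in for trained ones.
    rng = np.random.default_rng(seed)
    return {
        "W1": rng.normal(0.0, 0.1, (obs_dim + hw_dim, hidden_dim)),
        "b1": np.zeros(hidden_dim),
        "W2": rng.normal(0.0, 0.1, (hidden_dim, act_dim)),
        "b2": np.zeros(act_dim),
    }

def policy_action(params, obs, hw_descriptor):
    # Hardware-conditioned policy: the hardware descriptor is concatenated
    # with the observation, so the same weights map different robot bodies
    # to different actions for the same observation.
    x = np.concatenate([obs, hw_descriptor])
    h = np.tanh(x @ params["W1"] + params["b1"])
    return np.tanh(h @ params["W2"] + params["b2"])

# Hypothetical setup: 12-D observation, 4-D hardware descriptor, 6-D action.
params = init_policy(obs_dim=12, hw_dim=4, hidden_dim=32, act_dim=6)
obs = np.zeros(12)
robot_a = np.array([0.3, 0.25, 1.5, 2.0])  # e.g., link lengths, joint limits
robot_b = np.array([0.5, 0.40, 1.0, 1.5])  # a different hardware configuration
action_a = policy_action(params, obs, robot_a)
action_b = policy_action(params, obs, robot_b)
```

The same observation yields different actions for the two hardware descriptors, which is the property that lets one policy transfer across robot platforms.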
Abhinav Gupta is a Research Manager at Facebook AI Research (FAIR) and an Associate Professor at the Robotics Institute, Carnegie Mellon University. Abhinav's research focuses on scaling up learning by building self-supervised, lifelong, and interactive learning systems. Specifically, he is interested in how self-supervised systems can effectively use data to learn visual representations, common sense, and action representations for robots. Abhinav is a recipient of several awards including the ONR Young Investigator Award, PAMI Young Researcher Award, Sloan Research Fellowship, Okawa Foundation Grant, Bosch Young Faculty Fellowship, YPO Fellowship, IJCAI Early Career Spotlight, ICRA Best Student Paper Award, and the ECCV Best Paper Runner-up Award. His research has also been featured in Newsweek, BBC, the Wall Street Journal, Wired, and Slashdot.