Latent Structure in Deep Robotic Learning
Traditionally, deep reinforcement learning has focused on learning one particular skill in isolation and from scratch. This often leads to repeated effort in learning the right representation for each skill individually, even though such representations could likely be shared across skills. In contrast, there is some evidence that humans efficiently reuse previously learned skills to learn new ones, e.g. by sequencing or interpolating between them. In this talk, I will demonstrate how one can discover latent structure when learning multiple skills concurrently. In particular, I will present a first step towards learning robot skill embeddings that enable reusing previously acquired skills. I will show how these ideas can be applied to multi-task reinforcement learning, sim-to-real transfer, and imitation learning.
Karol Hausman is a Research Scientist at Google Brain in Mountain View, California, working on robotics and machine learning. He is interested in enabling robots to autonomously acquire general-purpose skills with minimal supervision in real-world environments. His current research investigates interactive perception, deep reinforcement learning, and imitation learning, and their applications to robotics. He has evaluated his work on many different platforms, including quadrotors, humanoid robots, and robotic arms. He received his PhD in Computer Science from the University of Southern California in 2018, his MS from the Technical University of Munich in 2013, and his MEng from the Warsaw University of Technology in 2012. During his PhD, he did a number of internships at Bosch Research Center (2013 and 2014), NASA JPL (2015), Qualcomm Research (2016), and Google DeepMind (2017).