Michael Laskey

Building Robustness in Imitation Learning Systems

When applying imitation learning to robotic manipulation, questions arise such as: what needs to be learned, and how should data be collected? As systems grow to involve multiple deep neural networks, these questions can become overwhelming. Part one of this talk discusses the challenges of training the various learned components and how to ensure that errors do not compound in sequential tasks. I will present our algorithmic work on hyper-parameter-free data collection protocols that teach the robot to recover from mistakes by exposing it to small, optimized errors during data collection. Part two focuses on applying this protocol to robotic tasks in mobile manipulation, such as object retrieval from a shelf and autonomous bed-making. For each system, I will detail the overall architecture and discuss how high reliability was achieved by carefully sampling data to ensure robustness to the robot's mistakes.
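To make the data-collection idea concrete, here is a minimal sketch of collecting demonstrations while injecting small perturbations into the supervisor's actions, so the dataset covers the states from which the supervisor recovers after minor errors. The environment interface, supervisor policy, and fixed noise covariance used here are hypothetical placeholders for illustration, not the exact protocol from the talk (which selects the noise without hand tuning).

```python
import numpy as np

def collect_noisy_demos(env, supervisor, num_episodes, horizon, noise_cov):
    """Collect (state, supervisor action) pairs while executing slightly
    perturbed actions, so recovery behavior appears in the training data."""
    dataset = []
    for _ in range(num_episodes):
        state = env.reset()
        for _ in range(horizon):
            clean_action = supervisor.act(state)            # label for supervised learning
            noise = np.random.multivariate_normal(
                np.zeros(len(clean_action)), noise_cov)     # small injected error
            dataset.append((state, clean_action))           # store the clean label
            state, done = env.step(clean_action + noise)    # execute the perturbed action
            if done:
                break
    return dataset
```

The key design point is that the perturbation is applied only to the executed action, not to the recorded label, so the learned policy is trained to correct small deviations rather than imitate them.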

Michael Laskey is a Ph.D. candidate in EECS at UC Berkeley, advised by Prof. Ken Goldberg in the AUTOLAB (Automation Sciences). Michael’s Ph.D. work developed new algorithms for deep learning of robust robot control policies and examined how to reliably apply recent deep learning advances to scalable robot learning in challenging unstructured environments. Michael received a B.S. in Electrical Engineering from the University of Michigan, Ann Arbor. His work has been nominated for multiple best paper awards at IEEE ICRA and CASE and has been featured in news outlets such as MIT Tech Review and Fast Company.
