Jianlan Luo
Speeding up Deep Reinforcement Learning on Robotics by Priors

Precise robotic manipulation skills are desirable in many industrial settings, and reinforcement learning (RL) methods hold the promise of acquiring these skills autonomously. However, most of these methods fail, or take a long time to learn, in high-precision settings without heavily engineered reward functions. Humans, by contrast, do not learn everything from scratch; incorporating prior knowledge about a particular class of tasks during learning can dramatically increase learning speed and final performance. In this talk, I will present several representative industrial use cases in which we combine reinforcement learning algorithms with suitable priors so that robots can efficiently learn complex industrial assembly skills.

Key Takeaways:

  • Deep reinforcement learning can solve complex industrial robotic tasks
  • Incorporating the right kind of prior about the task can dramatically boost an RL algorithm's performance and sample efficiency
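One well-known way to combine a prior with RL in robotics is residual learning: a hand-designed controller provides a reasonable baseline action, and the learned policy only outputs a correction on top of it. The sketch below is illustrative only, not the talk's actual method; all names (`prior_controller`, `ResidualPolicy`, `act`) are assumptions, and the "learned" residual is just a small random linear map standing in for a trained network.

```python
import numpy as np

def prior_controller(state, target):
    # The prior: a simple hand-designed proportional controller
    # that pushes the state toward the target.
    return 1.0 * (target - state)

class ResidualPolicy:
    # Stand-in for a learned residual policy: a small linear
    # correction on top of the prior (weights would normally
    # be trained with RL).
    def __init__(self, n_dims, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = 0.01 * rng.standard_normal((n_dims, n_dims))

    def __call__(self, state):
        return self.weights @ state

def act(state, target, residual_policy):
    # Final action = prior controller output + learned residual.
    return prior_controller(state, target) + residual_policy(state)

state = np.array([0.0, 0.0])
target = np.array([1.0, -1.0])
policy = ResidualPolicy(n_dims=2)
action = act(state, target, policy)
```

Because the residual starts near zero, the robot behaves like the prior controller from the first episode, which is one way a prior can make exploration far more sample-efficient than learning from scratch.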

Jianlan Luo is currently a fourth-year Ph.D. candidate in the Mechanical Engineering Department at UC Berkeley and a master's student in the Computer Science Department, advised by Professor Pieter Abbeel. His research interests include representation learning, reinforcement learning, robotics, and their intersections.
