Sergey Levine

Deep Robotic Learning

Deep learning has achieved excellent results in a range of passive perception tasks, from recognizing objects in images to recognizing human speech. However, extending this success into domains that involve active decision making has proven challenging, because the physical world presents an entirely new dimension of complexity for the machine learning problem. Machines that act intelligently in open-world environments must reason about temporal relationships, cause and effect, and the consequences of their actions, and must adapt quickly, follow human instructions, and remain safe and robust. Although the basic mathematical building blocks for such systems -- reinforcement learning and optimal control -- have been studied for decades, these techniques have been difficult to extend to real-world control settings. For example, although reinforcement learning methods have been demonstrated extensively in settings such as games, applying them to real-world environments requires new and fundamental innovations: not only must the sample complexity of such methods be reduced by orders of magnitude, but we must also address generalization, stability, and robustness. In this talk, I will discuss how deep learning and reinforcement learning methods can be extended to enable real-world robotic control, with an emphasis on techniques that generalize to new situations, objects, and tasks. I will discuss how model-based reinforcement learning can enable sample-efficient control, how model-free reinforcement learning can be made efficient, robust, and reliable, and how meta-learning can enable robotic systems to adapt quickly to new tasks and new situations.
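
To make the flavor of the model-based idea concrete, here is a minimal toy sketch of the loop the abstract alludes to: collect transitions, fit a dynamics model, then plan through the learned model. The linear model, random-shooting planner, quadratic cost, and toy environment are illustrative assumptions made for this page, not the methods presented in the talk.

import numpy as np

# Toy model-based RL loop: learn dynamics from data, plan with the model.
# All modeling choices here are placeholders, not the speaker's methods.

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, HORIZON, N_CANDIDATES = 4, 2, 10, 256

def true_dynamics(s, a):
    # Stands in for the real robot; unknown to the agent.
    A = np.eye(STATE_DIM) * 0.95
    B = np.ones((STATE_DIM, ACTION_DIM)) * 0.1
    return A @ s + B @ a + 0.01 * rng.standard_normal(STATE_DIM)

def fit_model(states, actions, next_states):
    # Least-squares fit of s' ~ W [s; a]: the "learned dynamics model".
    X = np.hstack([states, actions])
    W, *_ = np.linalg.lstsq(X, next_states, rcond=None)
    return W.T  # shape (STATE_DIM, STATE_DIM + ACTION_DIM)

def plan(W, s, goal):
    # Random shooting: sample action sequences, roll them out under the
    # learned model, execute the first action of the cheapest sequence.
    best_cost, best_a0 = np.inf, np.zeros(ACTION_DIM)
    for _ in range(N_CANDIDATES):
        seq = rng.uniform(-1, 1, size=(HORIZON, ACTION_DIM))
        sim, cost = s.copy(), 0.0
        for a in seq:
            sim = W @ np.concatenate([sim, a])
            cost += np.sum((sim - goal) ** 2)
        if cost < best_cost:
            best_cost, best_a0 = cost, seq[0]
    return best_a0

# Collect random transitions, fit the model, then control with it.
S, A_, S2 = [], [], []
s = rng.standard_normal(STATE_DIM)
for _ in range(500):
    a = rng.uniform(-1, 1, ACTION_DIM)
    s2 = true_dynamics(s, a)
    S.append(s); A_.append(a); S2.append(s2)
    s = s2
W = fit_model(np.array(S), np.array(A_), np.array(S2))

goal = np.zeros(STATE_DIM)
s = rng.standard_normal(STATE_DIM)
for t in range(20):
    s = true_dynamics(s, plan(W, s, goal))
print("final distance to goal:", np.linalg.norm(s - goal))

Because every candidate plan is evaluated under the learned model rather than on the robot, the real system only has to supply the 500 random transitions used for fitting; this is the sample-efficiency argument for model-based control that the abstract raises.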

Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.
