Lisa Lee

Learning Embodied Agents with Scalably-Supervised Reinforcement Learning

Reinforcement learning (RL) agents learn to perform a task through trial-and-error interactions with an initially unknown environment. Despite recent progress in deep RL, several unsolved challenges limit the applicability of RL to real-world tasks, including efficient exploration in high-dimensional spaces, learning and data efficiency, and the high cost of human supervision. Toward addressing these challenges, this talk focuses on how we can balance self-supervised and human-supervised RL to efficiently train an agent to solve various visual robotic tasks. We address the following questions:

  1. How can we amortize the cost of learning to explore?

  2. How can we learn a semantically meaningful representation for faster exploration and learning?

  3. How can we utilize language to equip deep RL agents with structured priors about the physical world, and enable generalization and knowledge transfer across different tasks?

Lisa Lee is a Research Scientist on the Reinforcement Learning team at Google Brain. She obtained her PhD in Machine Learning from Carnegie Mellon University, where she was advised by Ruslan Salakhutdinov and Eric Xing. She graduated summa cum laude with an A.B. in Mathematics from Princeton University, where her undergraduate thesis on word embeddings was advised by Sanjeev Arora. Lisa's research focuses on deep reinforcement learning for robotic control, and on training embodied agents that can be deployed in complex environments to solve a wide variety of tasks. She is particularly excited about representation learning for RL; efficient exploration and fast adaptation in multi-task RL; skill learning and planning; and utilizing language to equip visual RL agents with structured priors about the world.
