Towards a Virtual Stuntman
Deep reinforcement learning has been an effective methodology for developing control policies for a wide range of motion control tasks. However, the capabilities demonstrated by these methods remain limited compared to the staggering array of skills exhibited by their real-world counterparts. These learned policies are also prone to developing unnatural strategies that are at odds with the behaviours observed in humans and other animals. In this talk, I will present a conceptually simple RL framework that enables simulated agents to imitate a rich repertoire of highly dynamic skills from human demonstrations. Our approach is able to reproduce a broad range of skills, from locomotion and acrobatics to dancing and martial arts. The policies learn to produce motions that are nearly indistinguishable from motions recorded from human subjects. In addition to training humanoid agents, our framework can also be applied to quadrupeds and other non-human morphologies.
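To make the imitation idea concrete, a common formulation in motion-imitation work scores the agent at each timestep by how closely its pose tracks the reference motion, turning a tracking error into a bounded reward. The sketch below is an illustrative assumption, not the talk's exact method: the function name `imitation_reward`, the quadratic error, and the scale constant are all hypothetical choices.

```python
import numpy as np

def imitation_reward(joint_angles, ref_angles, scale=2.0):
    """Hypothetical motion-imitation reward: the exponentiated negative
    squared tracking error between the simulated character's joint
    rotations and the reference motion's pose at the same timestep.

    Returns a value in (0, 1]; perfect tracking gives 1.0, and the
    reward decays smoothly toward 0 as the pose diverges.
    """
    err = np.sum((np.asarray(joint_angles) - np.asarray(ref_angles)) ** 2)
    return float(np.exp(-scale * err))

# Perfect tracking of a two-joint reference pose:
r_match = imitation_reward([0.1, 0.2], [0.1, 0.2])   # 1.0

# A diverging pose earns strictly less reward:
r_off = imitation_reward([0.5, 0.2], [0.1, 0.2])     # < 1.0
```

Summing a reward of this shape over an episode gives the RL objective a dense training signal at every timestep, which is one reason tracking-style rewards are popular for reproducing highly dynamic reference motions.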
Jason Peng is a first-year Ph.D. student at UC Berkeley, working with Professor Pieter Abbeel and Professor Sergey Levine. His work lies at the intersection of reinforcement learning and computer animation, with an emphasis on motion control for physics-based character simulation. He received a B.Sc. and M.Sc. in computer science from the University of British Columbia under the supervision of Professor Michiel van de Panne. He is the recipient of the NSERC Postgraduate Scholarship and the Governor General's Academic Bronze, Silver, and Gold medals.