Control Algorithms for Imitation Learning from Observation
Imitation learning is a paradigm that enables autonomous agents to capture behaviors demonstrated by people or other agents. Effective approaches, such as Behavioral Cloning and Inverse Reinforcement Learning, tend to rely on the learning agent having access to the demonstrator's low-level actions. However, in many cases, such as videos or demonstrations from people (or any agent with a different morphology), the learning agent only has access to observed state transitions. This talk introduces two novel control algorithms for imitation learning from observation: Behavioral Cloning from Observation (BCO) and Generative Adversarial Imitation from Observation (GAIfO).
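To make the observation-only setting concrete, the following is a minimal sketch of the core idea behind BCO: the agent first learns an inverse dynamics model from its own interaction experience, uses it to infer the actions missing from state-only demonstrations, and then performs ordinary behavioral cloning on the inferred state-action pairs. The one-dimensional environment, the expert policy, and the linear models below are all illustrative assumptions, not part of the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy environment dynamics (assumed for illustration): s' = s + a
def step(s, a):
    return s + a

# 1) The agent gathers its own experience with random actions
s = rng.normal(size=1000)
a = rng.normal(size=1000)
s_next = step(s, a)

# 2) Fit a linear inverse dynamics model a_hat = f(s, s') by least squares
X = np.column_stack([s, s_next])
theta_inv, *_ = np.linalg.lstsq(X, a, rcond=None)

# 3) Expert demonstrations expose only states; the expert's actions
#    (here a = -0.5 * s, a hypothetical policy) are hidden from the learner
s_demo = rng.normal(size=500)
a_demo_hidden = -0.5 * s_demo
s_demo_next = step(s_demo, a_demo_hidden)

# 4) Infer the missing actions with the learned inverse model
a_inferred = np.column_stack([s_demo, s_demo_next]) @ theta_inv

# 5) Behavioral cloning on the inferred pairs: fit policy a = w * s
w, *_ = np.linalg.lstsq(s_demo.reshape(-1, 1), a_inferred, rcond=None)
print(round(float(w[0]), 3))  # recovered policy gain, close to -0.5
```

In this noiseless linear case the inverse model is recovered exactly, so cloning on inferred actions matches the hidden expert policy; in the talk's setting, both models are neural networks and the recovery is only approximate.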
I am the founder and director of the Learning Agents Research Group (LARG) within the Artificial Intelligence Laboratory in the Department of Computer Science at The University of Texas at Austin, as well as associate department chair and chair of the University's Robotics Portfolio Program.
I am also the President, COO, and co-founder of Cogitai, Inc.
My main research interest in AI is understanding how we can best create complete intelligent agents. I consider adaptation, interaction, and embodiment to be essential capabilities of such agents. Thus, my research focuses mainly on machine learning, multiagent systems, and robotics. To me, the most exciting research topics are those inspired by challenging real-world problems. I believe that complete successful research includes both precise, novel algorithms and fully implemented and rigorously evaluated applications. My application domains have included robot soccer, autonomous bidding agents, autonomous vehicles, autonomic computing, and social agents.