Jacob Andreas

Learning to Act by Learning to Describe

The named concepts and compositional operators in natural language are a rich source of information about the kinds of abstractions humans use to interact with the world. Can we use this linguistic background knowledge to build more effective intelligent agents? This talk will explore two problems at the intersection of language and reinforcement learning: using interaction with the world to improve language generation, and using models for language generation to efficiently train reinforcement learners.

Key Takeaways:

  • RL can improve language understanding by making it possible to explicitly optimize for successful communication
  • Natural language can help with more general RL problems by providing a scaffold for learning goal representations and cost functions

Jacob Andreas is an assistant professor at MIT and a senior researcher at Microsoft Semantic Machines. His research focuses on language learning as a window into reasoning, planning, and perception, and on more general machine learning problems involving compositionality and modularity. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill Scholar), and his B.S. from Columbia. He has been the recipient of an NSF graduate fellowship, a Facebook fellowship, and paper awards at NAACL and ICML.