Recent advances in machine learning applied to large text corpora have enabled strong results in natural language processing by capturing statistical patterns between words. While such approaches are useful, they are arguably insufficient for building general-purpose agents that can interact with humans, as the words lack grounding in an external environment. We present new research from OpenAI that investigates the emergence of a simple grounded language. Using methods from deep reinforcement learning, we show that compositional language can emerge when agents cooperate to solve various tasks in an environment, such as moving to or pushing objects. We also detail ongoing research focused on teaching these agents to speak simple forms of English.
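The cooperative setup the abstract describes can be illustrated with a toy signaling game. The sketch below is not OpenAI's actual environment (the class, its parameters, and the hand-coded policies are illustrative assumptions): a speaker privately observes which object is the target and emits one discrete symbol; a listener sees only the symbol and guesses the target; both share the reward, so succeeding requires agreeing on a code.

```python
import random

class ReferentialGame:
    """Minimal cooperative signaling game (illustrative sketch, not the
    environment from the research described above). The speaker sees which
    of `n_objects` is the target and sends one discrete symbol; the
    listener sees only the symbol and guesses the target. The reward is
    shared, so the pair must converge on a common convention."""

    def __init__(self, n_objects=5, vocab_size=5, seed=0):
        self.n_objects = n_objects
        self.vocab_size = vocab_size
        self.rng = random.Random(seed)

    def reset(self):
        # The target index is the speaker's private observation.
        self.target = self.rng.randrange(self.n_objects)
        return self.target

    def step(self, symbol, guess):
        # Shared reward: 1 if the listener identifies the target, else 0.
        return 1 if guess == self.target else 0

# A hand-coded "perfect" convention: the speaker names the target index
# directly and the listener echoes it back. In the research setting, both
# policies would instead be learned with deep reinforcement learning.
env = ReferentialGame()
total = 0
for _ in range(10):
    target = env.reset()
    symbol = target   # speaker policy: one symbol per object
    guess = symbol    # listener policy: trust the symbol
    total += env.step(symbol, guess)
```

With a shared convention the pair succeeds on every episode; learned agents start from random symbol use and must discover such a mapping from reward alone.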
Ryan is a Ph.D. student in the Reasoning & Learning Lab at McGill University, supervised by Joelle Pineau. He is currently interning at OpenAI, where he is working on the emergence of language in multi-agent systems. He previously investigated deep learning methods for dialogue systems, deriving the popular Ubuntu Dialogue Corpus and demonstrating the poor performance of automatic dialogue evaluation metrics. When in Montreal, he co-organizes the Montreal AI Ethics Group and is an editor of the AI series for Graphite Publications. Before McGill, he spent time at the Institute for Quantum Computing, the Max Planck Institute, and the National Research Council.