Just Ask: An Interactive Learning Framework for Vision and Language Navigation
In the vision-and-language navigation task, the agent may encounter ambiguous situations that are hard to resolve using visual information and natural language instructions alone. We propose an interactive learning framework that endows the agent with the ability to ask for a user's help in such situations. As part of this framework, we investigate multiple learning approaches of varying complexity. The simplest, a model-confusion-based method, lets the agent ask questions whenever it is confused, relying on a predefined confidence threshold for its next-action prediction model. Going beyond this confusion-based method, we expect the agent to reason more deliberately about when and where to interact with a human. We achieve this goal using reinforcement learning (RL) with a proposed reward shaping term, which encourages the agent to ask questions only when necessary. The success rate improves by at least 15% with only one question asked on average during navigation. Furthermore, we show that the RL agent can adjust dynamically to noisy human responses. Finally, we design a continual learning strategy, which can be viewed as a data augmentation method, that lets the agent improve further by utilizing its interaction history with a human. We demonstrate that the proposed strategy is substantially more realistic and data-efficient than previously proposed pre-exploration techniques.
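To make the simplest variant concrete, the confusion-based trigger can be sketched as below: the agent asks for help whenever its next-action prediction model's top probability falls under a fixed threshold. The function name and the threshold value are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def should_ask(action_logits, threshold=0.5):
    """Confusion-based asking: request the user's help when the
    next-action prediction model is not confident enough.

    `threshold` is a hypothetical value; the paper tunes it as a
    predefined hyperparameter.
    """
    # Softmax over the action logits to get a probability distribution.
    shifted = np.exp(action_logits - np.max(action_logits))
    probs = shifted / shifted.sum()
    # Ask when even the most likely action is below the threshold.
    return probs.max() < threshold

# A near-uniform distribution signals confusion; a peaked one does not.
print(should_ask(np.array([0.1, 0.2, 0.15, 0.1])))  # confused -> ask
print(should_ask(np.array([5.0, 0.2, 0.1, 0.1])))   # confident -> navigate
```

The RL variant replaces this fixed threshold with a learned asking policy, shaped by a small per-question penalty so that questions are asked only when they are expected to pay off.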
Seokhwan Kim is a senior machine learning scientist at Amazon Alexa AI. He received his Ph.D. from Pohang University of Science and Technology. Prior to joining Amazon, he worked on natural language understanding and spoken dialog systems as a Research Scientist at Adobe Research and the Institute for Infocomm Research. He has authored more than 50 peer-reviewed publications in international journals and conferences in the speech and language technology areas. In 2015, he joined the organizing team of the Dialog System Technology Challenge (DSTC) and has contributed to the last five challenges. He has also served as a program committee member for major conferences in the NLP, speech, dialog, and AI fields, including ACL, NAACL-HLT, EMNLP, ICASSP, Interspeech, IWSDS, AAAI, IJCAI, and ICLR.