Towards Scalable Multi-domain Conversational Agents
Large-scale virtual assistants, such as Google Assistant, Amazon Alexa, and Apple Siri, help users accomplish a wide variety of tasks. They need to integrate with a large and constantly growing number of services or APIs spanning many domains. Supporting new services with ease, without retraining the model, and reducing maintenance workload are necessary to accommodate future growth. To highlight these challenges, we recently released the Schema-Guided Dialogue dataset, the largest publicly available corpus of task-oriented dialogues. In this talk, I will describe the methodology used to create this dataset, which minimizes the need for complex manual annotation while considerably reducing the time and cost of data collection. As a solution to these challenges, I will also introduce the schema-guided approach for building virtual assistants, which uses a single model across all services and domains, with no domain-specific parameters.
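To give a flavor of the schema-guided idea described above: each service is accompanied by a schema that describes its slots and intents in natural language, and the model conditions on these descriptions rather than on fixed, domain-specific labels. The sketch below is purely illustrative; the field names follow the released Schema-Guided Dialogue format, but the example service, slots, and helper function are hypothetical.

```python
# Illustrative service schema in the style of the Schema-Guided
# Dialogue dataset. A single model can encode the natural-language
# descriptions below, so a new service only requires a new schema,
# not retraining with domain-specific parameters.
# (Example service and slot values are hypothetical.)

restaurant_schema = {
    "service_name": "Restaurants_1",
    "description": "A service for finding and reserving restaurants",
    "slots": [
        {
            "name": "cuisine",
            "description": "Type of food served by the restaurant",
            "is_categorical": True,
            "possible_values": ["Italian", "Indian", "Mexican"],
        },
        {
            "name": "party_size",
            "description": "Number of people for the reservation",
            "is_categorical": False,
            "possible_values": [],
        },
    ],
    "intents": [
        {
            "name": "ReserveRestaurant",
            "description": "Reserve a table at a restaurant",
            "required_slots": ["cuisine", "party_size"],
        }
    ],
}


def schema_inputs(schema):
    """Collect the natural-language descriptions a schema-guided
    model would encode, instead of relying on fixed slot IDs."""
    texts = [schema["description"]]
    texts += [slot["description"] for slot in schema["slots"]]
    texts += [intent["description"] for intent in schema["intents"]]
    return texts


# One description for the service, one per slot, one per intent.
print(len(schema_inputs(restaurant_schema)))  # prints 4
```

Because the model sees only descriptions, an unseen service with a well-written schema can, in principle, be supported zero-shot.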
Abhinav Rastogi is a Senior Software Engineer at Google Research, working on dialogue systems. His research interests include natural language understanding, language generation and multimodal dialogue. Previously, Abhinav was at Stanford University, where he worked with Prof. Andrew Ng on video understanding and Prof. Christopher Manning on natural language inference. Abhinav holds degrees in Electrical Engineering from Stanford University and IIT Bombay.