Adding Structure to Spoken Language Understanding
Typically, spoken language understanding (SLU) systems factor the language understanding problem into intent classification and slot tagging. This presents an inherent limitation for more complex linguistic phenomena such as coordination, where the user expresses multiple intents in a single statement. In this talk we present two ways to add structure to SLU. First, we discuss incorporating an external parser to handle coordination when extending a legacy system. Second, we pose SLU as a shallow semantic parsing problem, which can also handle tasks such as question answering. Finally, we discuss addressing data sparsity through transfer learning between domains and through techniques such as delexicalization and a copy mechanism.
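To make the coordination limitation concrete, here is a minimal toy sketch (not Alexa's actual system; the utterance, intent labels, and IOB slot tags are hypothetical) of the classic SLU factorization, where one intent label and one slot tag per token are predicted:

```python
def classic_slu(tokens):
    """Keyword-based stand-in for an intent classifier + slot tagger."""
    # A single intent per utterance -- the core limitation discussed above.
    intent = "PlayMusic" if "play" in tokens else "Other"
    # IOB-style slot tags, one per token.
    slots = ["B-genre" if t == "jazz" else "O" for t in tokens]
    return intent, slots

# Coordination: the user expresses TWO intents, but the factored model
# can only emit one, so the second request ("dim the lights") is lost.
utterance = "play jazz and dim the lights".split()
intent, slots = classic_slu(utterance)
print(intent, slots)
```

Here the flat intent/slot output has no room for a second intent, which is what motivates adding structure such as parsing.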
Rahul Goel is a machine learning scientist at Alexa AI, where he works on improving spoken language understanding and dialog systems. Many of his contributions are currently deployed in Alexa. His research interests include dialog systems, language understanding, deep learning, and social computing. Before joining Amazon, he was a graduate student at Georgia Tech working with Dr. Jacob Eisenstein on computational social science.