Chandra Khatri

Towards End-to-End Spoken Language Understanding: Contextual Models for Joint ASR Correction and Language Understanding

The quality of automatic speech recognition (ASR) is critical to AI Assistants, as ASR errors propagate to and directly impact downstream tasks such as language understanding (LU) and dialog management. In this talk, I will go over multi-task neural approaches that perform contextual language correction on ASR outputs jointly with LU, improving the performance of both tasks simultaneously. I will share results obtained using state-of-the-art Generative Pre-training (GPT) language models for joint ASR correction and language understanding.
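To make the multi-task setup concrete, the sketch below shows one plausible way to share a GPT-style backbone between an ASR-correction head and an LU (intent classification) head, trained with a combined loss. This is an illustrative assumption, not the speaker's implementation; the class name `JointAsrCorrectionLU`, the last-position pooling, and the 0.5 loss weight are all hypothetical choices.

```python
# Minimal multi-task sketch (assumed architecture, not the talk's exact model):
# a shared GPT-2 encoder feeds two heads, one for correcting the ASR hypothesis
# token by token and one for classifying the utterance intent.

import torch
import torch.nn as nn
from transformers import GPT2Model


class JointAsrCorrectionLU(nn.Module):
    def __init__(self, num_intents: int, model_name: str = "gpt2"):
        super().__init__()
        self.backbone = GPT2Model.from_pretrained(model_name)  # shared contextual encoder
        hidden = self.backbone.config.n_embd
        # Head 1: predicts corrected-transcript tokens over the GPT-2 vocabulary.
        self.lm_head = nn.Linear(hidden, self.backbone.config.vocab_size, bias=False)
        # Head 2: predicts the LU intent label from the final position.
        self.intent_head = nn.Linear(hidden, num_intents)

    def forward(self, input_ids, attention_mask,
                correction_labels=None, intent_labels=None):
        hidden_states = self.backbone(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        lm_logits = self.lm_head(hidden_states)                # per-token correction logits
        intent_logits = self.intent_head(hidden_states[:, -1])  # pool last position for intent

        loss = None
        if correction_labels is not None and intent_labels is not None:
            # Joint objective: shifted next-token loss for correction plus
            # intent cross-entropy, combined with an assumed fixed weight.
            lm_loss = nn.functional.cross_entropy(
                lm_logits[:, :-1].reshape(-1, lm_logits.size(-1)),
                correction_labels[:, 1:].reshape(-1),
                ignore_index=-100,
            )
            intent_loss = nn.functional.cross_entropy(intent_logits, intent_labels)
            loss = lm_loss + 0.5 * intent_loss
        return lm_logits, intent_logits, loss
```

Because both heads share the same contextual representation, gradients from the LU loss can help the correction head resolve acoustically confusable words, which is the intuition behind training the two tasks jointly.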

Chandra Khatri is a Senior AI Scientist at Uber AI, driving Conversational AI efforts at Uber. Prior to Uber, he was the Lead AI Scientist at Alexa, where he drove the science for the Alexa Prize Competition, a $3.5 million university competition for advancing the state of Conversational AI. Some of his recent work involves open-domain dialog planning and evaluation, conversational speech recognition, conversational natural language understanding, and sequential modeling.

Prior to Alexa, Chandra was a Research Scientist at eBay, where he led various deep learning and NLP initiatives in the eCommerce domain, such as automatic text summarization and automatic content generation, which led to significant gains for eBay. He holds degrees in Machine Learning and Computational Science & Engineering from Georgia Tech and BITS Pilani.
