Immersive Conversational Assistants Are the Next Wave in AI
Today's assistants are omnipresent, but they understand little beyond text. What does it mean to go a step further and make them immersive — to understand and interact in whatever modality the user is in? To make assistants immersive, we need to build systems that can simulate this behavior. This talk will demo and describe one such system, which moves the needle of conversational AI toward more situated and immersive settings for data collection, and in doing so attempts to answer some of the open questions around the multi-modal AI systems of the future.
Shivani is a machine learning engineer on the Facebook Assistant team, working on both the product and research arms of machine learning reasoning for assistants, as well as the multi-modal assistants of the future. Before Facebook, she was at Carnegie Mellon University, where she helped build the CMU Magnus system for social chit-chat from the ground up for the first wave of the Amazon Alexa Prize Challenge. She has also published work on modeling user psychology and on building argumentation systems that aid in negotiation. Her research background spans disciplines including computer science, psychology, and machine learning.