WELCOME & OPENING REMARKS - 8am PST | 11am EST | 4pm GMT
German I. Parisi - McD Tech Labs, McDonald's
Toward Lifelong Conversational AI
Conversational agents have become increasingly popular in a wide range of business areas. Prominent examples of applications that have been transforming speech-to-speech interactions are Amazon’s Alexa, Apple’s Siri, and McDonald’s voice-activated drive-thru. Companies from various industries are now exploring new ways of building products and services that rely on robust natural language interactions. A major technical challenge is how these solutions can efficiently incorporate new knowledge and increase performance over time while containing computational cost and addressing the limitations of artificial learning systems designed to perform best on benchmark datasets. In this talk, I will introduce and discuss state-of-the-art machine learning technology in conversational AI with the ability to acquire, fine-tune, and transfer knowledge from large and continuous streams of data. These systems can learn in response to novel interactions or the need to enrich domain-specific knowledge and logic. I will focus on scalable deep learning models for end-to-end natural language understanding and on hybrid approaches to lifelong conversational agents in multiple application domains.
German I. Parisi is the Director of Applied AI at McD Tech Labs in Mountain View, California, a Silicon Valley-based research center established by McDonald’s Corporation to advance the state of the art in AI-powered technology systems for customer interaction and support. He is also an independent research fellow at the University of Hamburg, Germany, and a co-founder and board member of ContinualAI, the largest research organization and open community on continual learning for AI, with a network of over 600 scientists. He received his Bachelor's and Master's degrees in Computer Science from the University of Milano-Bicocca, Italy. In 2017 he received his PhD in Computer Science from the University of Hamburg on the topic of multimodal neural representations with deep recurrent networks. In 2015 he was a visiting researcher at the Cognitive Neuro-Robotics Lab of the Korea Advanced Institute of Science and Technology (KAIST), South Korea, which won the 2015 DARPA Robotics Challenge. His main research interests include human-robot interaction, continual robot learning, and neuroscience-inspired AI.
Varsha Embar - Cisco
Extracting Conversation Highlights using Dialog Acts
Online messaging platforms and virtual meetings have been a dominant mode of communication for many years, and even more so in recent times. However, processing and condensing this data into useful snippets of information is still an ongoing research problem. In this talk, I will introduce the concept of dialog acts, show how they capture some of the semantics of these conversations, and use them to highlight useful, actionable information. We will talk about dialog act datasets, models, and their applications in different modes of conversation, such as multi-party meetings and chat messages.
Varsha Embar is a Senior Machine Learning Engineer at MindMeld, Cisco, where she builds production level conversational interfaces. She works on improving the core Natural Language Processing platform, including features and algorithms for low-resource settings, and tackles challenging problems such as summarization and action item detection in noisy meeting transcripts. Prior to MindMeld, Varsha earned her Master’s degree in Machine Learning and Natural Language Processing from Carnegie Mellon University.
Sebastian Ruder - DeepMind
Cross-Lingual Transfer Learning
Research in natural language processing (NLP) has seen striking advances in recent years, mainly driven by large pretrained language models. However, most of these successes have been achieved in English and a small set of other high-resource languages. In this talk, I will highlight methods that enable us to scale NLP models to more of the world's 7,000 languages, as well as open challenges and promising future directions.
Sebastian Ruder is a research scientist in the Language team at DeepMind, London. He completed his PhD in Natural Language Processing and Deep Learning at the Insight Research Centre for Data Analytics, while working as a research scientist at Dublin-based text analytics startup AYLIEN. Previously, he studied Computational Linguistics at the University of Heidelberg, Germany and at Trinity College, Dublin.
Conversational AI for Customer Relations
COFFEE & NETWORKING BREAK
Robert Kapitan - OpenText
NLU & Computer Vision
The explosion of user-generated content and the endlessly growing interactions between organizations and customers, regulators and companies, federal institutions and citizens, and employees and employers are creating both Big Content problems and new business opportunities. Individuals express their needs, suggestions, and concerns across a wide variety of content types that can help organizations make optimal data-driven decisions. At the same time, the amount of available content can be overwhelming and is no longer possible for content managers to monitor. Some of this content can quickly prove harmful to the organization and create very serious issues, including legal consequences: discriminatory, sexist, or racist language in emails or corporate social media, along with inappropriate text or images, has no place in the workplace or in digital communities. In this session, I'll give an overview of how Natural Language Understanding (NLU) provides a way to analyze, identify, and group these opportunities and risks through automated classification, named-entity extraction, and the analysis of subjectivity, tonality, emotions, and intentions within textual content, and of how NLU combined with Computer Vision can identify high-risk content.
Robert Kapitan is the Lead Product Manager at OpenText for the AI & Analytics content analytics platform, Magellan Text Mining. Robert has worked with text mining and content analytics applications for over 20 years, helping to build software solutions that understand human language. He holds an M.A. in Theoretical Linguistics and a PhD in Cognitive Semantics.
Hanna Hajishirzi - University of Washington/Allen Institute for AI
Knowledge-Rich Neural Text Comprehension and Reasoning
Enormous amounts of ever-changing knowledge are available online in diverse textual styles (e.g., news vs. science text) and diverse formats (knowledge bases vs. web pages vs. textual documents). This talk addresses the question of textual comprehension and reasoning raised by this diversity: how can AI help applications comprehend and combine evidence from variable, evolving sources of textual knowledge to make complex inferences and draw logical conclusions? I present question answering and fact checking algorithms that offer rich natural language comprehension using multi-hop and interpretable reasoning. Recent advances in deep learning algorithms, large-scale datasets, and industry-scale computational resources are spurring progress in many Natural Language Processing (NLP) tasks, including question answering. Nevertheless, current models lack the ability to answer complex questions that require them to reason intelligently across diverse sources and explain their decisions. Further, these models cannot scale up when task-annotated training data are scarce and computational resources are limited. With a focus on textual comprehension and reasoning, this talk will present some of the most recent efforts in my lab to integrate capabilities of symbolic AI approaches into current deep learning algorithms. I will present interpretable algorithms that understand and reason about textual knowledge across varied formats and styles, generalize to emerging domains with scarce training data (are robust), and operate efficiently under resource limitations (are scalable).
Hanna Hajishirzi is an Assistant Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington and a Research Fellow at the Allen Institute for AI. Her research spans different areas in NLP and AI, focusing on developing machine learning algorithms that represent, comprehend, and reason about diverse forms of data at large scale. Applications for these algorithms include question answering, reading comprehension, representation learning, knowledge extraction, and conversational dialogue. Honors include the Sloan Fellowship, Allen Distinguished Investigator Award, multiple best paper and honorable mention awards, and several industry research faculty awards. Hanna received her PhD from University of Illinois and spent a year as a postdoc at Disney Research and CMU.
BREAKOUT SESSIONS: Roundtable Discussions with Speakers
PANEL: Addressing the Future of Conversational AI
Maria Crosas Batista - Nestlé
Data journalist and subject matter expert on conversational interfaces. Responsible for exploring new conversational AI technologies for 80+ Nestlé global markets. Works directly with senior business stakeholders to design, develop, and launch multiple consumer-focused chatbots for the Nespresso, Nescafé Dolce Gusto, Maggi, and Nestlé Infant Nutrition brand platforms.
Amir Tahmasebi - Enlitic
Natural Language Processing for Healthcare
With recent advancements in Deep Learning and its successful deployment in natural language processing (NLP) applications such as language understanding, modeling, and translation, the general hope was to achieve yet another success in the healthcare domain. Given the vast amount of healthcare data captured in Electronic Medical Records (EMR) in an unstructured fashion, there is high demand for NLP to facilitate the automatic extraction and structuring of clinical data for decision support. Nevertheless, the performance of off-the-shelf NLP on healthcare data has been disappointing. Recently, tremendous effort has been dedicated by NLP research pioneers to adapting general-language NLP to the healthcare domain. This talk reviews the current challenges researchers face, as well as some of the most recent success stories.
3 Key Takeaways:
*General overview of state-of-the-art NLP
*How to build a domain-specific NLP pipeline for life science applications
*Review of a few successful applications of NLP in life sciences and how the future will/should look
Amir Tahmasebi is the Director of Deep Learning at Enlitic, San Francisco, CA. Before joining Enlitic, Amir was the Senior Director of Machine Learning and AI at CodaMetrix, Boston, MA. He has also served as a lecturer at MIT, Northeastern University, Boston University, and Columbia University. Prior to CodaMetrix, Dr. Tahmasebi was a Principal Research Engineer at Philips HealthTech, Cambridge, MA. Dr. Tahmasebi’s research focuses on innovating computer vision and natural language processing solutions for patient clinical context extraction and modeling, clinical outcome analytics, and clinical decision support. Dr. Tahmasebi received his PhD in Computer Science from the School of Computing, Queen's University, Canada. He is the recipient of the IEEE Best PhD Thesis award and the Tanenbaum Post-doctoral Research Fellowship award. He has served as area chair for the MICCAI and IPCAI conferences. Dr. Tahmasebi has published and presented his work in a number of conferences and journals, including NeurIPS, NAACL, MICCAI, IPCAI, IEEE TMI, SPIE, and RSNA. He has also been granted more than 15 patents.
MAKE CONNECTIONS: Meet with Attendees Virtually for 1:1 Conversations and Group Discussions over Similar Topics and Interests
END OF SUMMIT