Presentation: Explainable AI for Addressing Bias and Improving User Trust
Explainable Artificial Intelligence (XAI) has a long research history and has re-emerged as an active area in deep learning. With the growing popularity and success of machine learning and, especially, deep learning techniques, the community has been striving to open these “black boxes”. This session will show you how visual explanations can help expose harmful biases encoded by humans in the training data or model design.
Kai Xin Thia works at the intersection of data and product innovation. Over the last ten years, he has led data teams building machine learning products such as deep learning sentiment models, knowledge graphs, recommender systems, and segmentation and targeting systems across industries including finance, media, jobs, eCommerce, and healthcare.
He holds a master's degree in computer science, specializing in interactive intelligence: designing systems where artificial intelligence and human intelligence can coexist harmoniously and thrive. He also co-founded DataScience SG, a data community with 10,000+ members that has held 80+ meetups over the last eight years, and the AI Professionals Association (AIP) for engineers and professionals working in AI-related roles.