• Time in EST

  • 11:00
    Anusha Sethuraman

    WELCOME & INTRODUCTION

    Anusha Sethuraman - VP of Marketing - Fiddler

    Anusha Sethuraman is a technology product marketing executive with over 12 years of experience across various startups and big-tech companies like New Relic, Xamarin, and Microsoft. She’s taken multiple new B2B products to market successfully with a focus on storytelling and thought leadership. She’s passionate about AI ethics and building AI responsibly, and works with organizations like ForHumanity and Women in AI Ethics to help build better AI auditing systems. She’s currently at Fiddler AI, a Model Performance Management and Explainable AI startup, as VP of Marketing.

  • 11:05
    Tristan Ferne

    The Many Ways to Explain AI

    Tristan Ferne - Lead Producer, Internet Research & Future Services Team - BBC R&D

    How and why can we explain how AI and machine learning work? AI is a complex and often invisible technology, yet it is increasingly influential on people's lives, society and the world. Better explanations and understanding of AI should benefit many, but also bring many challenges. I'll explore why we might want to explain AI to consumers, why it's hard to do, and look at the different places where we could intervene to help increase understanding of AI.

    Tristan is an Executive Producer at BBC Research & Development where he tries to invent the future of media and technology. Starting with a degree in Cybernetics in the 1990s and working on neural networks before they were cool (and indeed, before they worked very well), he became an engineer, developer, producer and product manager at the BBC where he has helped develop influential prototypes and concepts. He thinks great things come from a creative combination of technology and design.

  • 11:30
    Rishabh Mehrotra

    Explore, Exploit, and Explain: Role of Explanation & Attribution in Multi-stakeholder Marketplaces

    Rishabh Mehrotra - Senior Research Scientist - Spotify

    The multi-armed bandit is an important framework for balancing exploration with exploitation in recommendation. Exploitation recommends content (e.g., products, movies, music playlists) with the highest predicted user engagement and has traditionally been the focus of recommender systems. Exploration recommends content with uncertain predicted user engagement for the purpose of gathering more information. In parallel, explaining recommendations (“recsplanations”) is crucial if users are to understand their recommendations. Existing work has looked at bandits and explanations independently. We provide the first method that combines both in a principled manner. In particular, our method is able to jointly (1) learn which explanations each user responds to; (2) learn the best content to recommend for each user; and (3) balance exploration with exploitation to deal with uncertainty. Towards the end, we allude to recent advances in multi-objective modeling and outline key issues around explanations & attribution in multi-objective recommendations.
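    To make the setup concrete, here is a minimal, purely illustrative sketch (not the speaker's method): an epsilon-greedy bandit whose arms are (content, explanation) pairs, so exploration and exploitation happen jointly over what to recommend and how to explain it. The item names, explanation texts and reward simulation below are all hypothetical.

```python
# Minimal epsilon-greedy bandit over (item, explanation) pairs.
# Illustrative only: items, explanations and rewards are made up.
import random
from collections import defaultdict

items = ["playlist_a", "playlist_b"]            # hypothetical content
explanations = ["because_you_like_jazz",        # hypothetical "recsplanations"
                "popular_in_your_area"]
arms = [(i, e) for i in items for e in explanations]

counts = defaultdict(int)      # times each (item, explanation) arm was shown
rewards = defaultdict(float)   # cumulative observed engagement per arm
epsilon = 0.1                  # exploration rate

def choose_arm():
    """Exploit the best-looking arm most of the time, explore otherwise."""
    if random.random() < epsilon or not counts:
        return random.choice(arms)                                           # explore
    return max(arms, key=lambda a: rewards[a] / counts[a] if counts[a] else 0.0)  # exploit

def update(arm, reward):
    """Record observed engagement (e.g. a stream or click) for the shown arm."""
    counts[arm] += 1
    rewards[arm] += reward

# Toy interaction loop with a simulated user response.
for _ in range(1000):
    arm = choose_arm()
    simulated_click = random.random() < (0.3 if arm[1] == "because_you_like_jazz" else 0.1)
    update(arm, float(simulated_click))

best = max(arms, key=lambda a: rewards[a] / counts[a] if counts[a] else 0.0)
print("best (item, explanation) pair so far:", best)
```

    A contextual or Bayesian bandit would replace the simple per-arm averages with per-user models; the sketch only shows how explanations can be treated as part of the action space rather than an afterthought.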

    Rishabh Mehrotra is a Senior Research Scientist at Spotify Research in London. He obtained his PhD in the field of Machine Learning and Information Retrieval from University College London where he was partially supported by a Google Research Award. His PhD research focused on inference of search tasks from search & conversational interaction logs. His current research focuses on machine learning for marketplaces, bandit based recommendations, counterfactual analysis and experimentation. Some of his recent work has been published at conferences including KDD, WWW, SIGIR, NAACL, RecSys and WSDM. He has co-taught a number of tutorials at leading conferences, and summer schools.

  • 11:55
    Cynthia Rudin

    Stop Explaining Black-Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead

    Cynthia Rudin - Professor of Computer Science - Duke University

    With the widespread use of machine learning, there have been serious societal consequences from using black-box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice. Explanations for black-box models are not reliable and can be misleading. If we use interpretable machine learning models instead, they come with their own explanations, which are faithful to what the model actually computes. I will give several reasons why we should use interpretable models, the most compelling of which is that for high-stakes decisions, interpretable models do not seem to lose accuracy relative to black boxes; in fact, the opposite is often true: when we understand what a model is doing, we can troubleshoot it and ultimately gain accuracy.
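    As a concrete (and purely illustrative) contrast to post-hoc explanation, the snippet below fits a deliberately small decision tree with scikit-learn and prints its rules: for an interpretable model of this kind, the printed rules are the model, so no separate explanation step is needed. The dataset is a standard scikit-learn toy dataset, not data from the talk.

```python
# Illustrative only: an inherently interpretable model is its own explanation.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
feature_names = list(load_breast_cancer().feature_names)

# Constrain the tree so the whole model stays small enough to read.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The printed rules ARE the model: no post-hoc approximation is needed.
print(export_text(model, feature_names=feature_names))
```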

    Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University, and directs the Prediction Analysis Lab, whose main focus is interpretable machine learning. She is also an associate director of the Statistical and Applied Mathematical Sciences Institute (SAMSI). Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo, and a PhD from Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award and a fellow of the American Statistical Association and the Institute of Mathematical Statistics.

  • 12:20

    NETWORKING BREAK: Visit the Expo Booths

  • 12:30
    Walter Crismareanu

    General-Purpose AI: A Digital Bio-Brain with an Artificial Nervous System

    Walter Crismareanu - Founder & CEO - Tipalo

    Many companies use the term AI even though, in practice, they apply statistical methods to large databases. AI means Artificial Intelligence, yet to this day there is no agreed definition of what intelligence is, biological or otherwise. We urgently need to reassess the logic of biological functions in the brain in order to translate them into corresponding technological products. If human intelligence can be digitally reproduced, then intelligence should be portable in both directions, from human to AI and back. The path is to first understand the basic principles, then develop a corresponding theory of biological intelligence, and finally create a new technology, taking it from inception to production while overcoming many obstacles.

    3 Key Takeaways from this session:

    • A general-purpose AI with an embedded Self-Learning mechanism

    • Real-time operating system written in VHDL

    • ANS - Artificial Nervous System

    Walter Crismareanu is an IT expert with 40 years of international experience in Europe, Asia and the USA. He has worked as a senior developer, development manager, technical project leader, trainer, designer and IT architect. He has also written several papers and books about this new machine-based-logic technology. He is the founder and CEO of Tipalo GmbH, the Swiss company that is developing the new technology. He speaks five languages, and his main hobbies are ancient civilizations, philosophy and psychology.

  • 12:50
    Rachel Alexander

    Interpretable AI and XAI for Life Sciences

    Rachel Alexander - CEO & Founder - Omina Technologies

    In the highly regulated life sciences industry, using interpretable methods and XAI not only facilitates assessing compliance but also helps devise the best possible action from a model's prediction. Omina Technologies combined an interpretable method with XAI to (1) identify which healthcare practitioners are most likely to switch brands, and why, by combining the interpretable method GA2M with SHapley Additive exPlanations (SHAP); and (2) optimize sales and marketing to prevent brand switching and increase sales of the branded drug. Partial dependence plots provide insight into the marginal effect of an extra sales call on sales.
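    The sketch below gives a rough, hypothetical illustration of that workflow: a model predicting brand switching, per-prediction SHAP attributions, and a partial dependence plot for the sales-call feature. The feature names and data are invented, and a scikit-learn gradient-boosted model stands in for GA2M (a GA2M could instead be fit with, e.g., interpretml's ExplainableBoostingClassifier).

```python
# Hedged sketch only: features, data and model choice are stand-ins,
# not the actual pharma project described in the abstract.
import numpy as np
import shap                                   # SHapley Additive exPlanations
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features per healthcare practitioner.
sales_calls = rng.integers(0, 12, n)          # sales calls received
competitor_visits = rng.integers(0, 8, n)
years_prescribing = rng.integers(1, 25, n)
X = np.column_stack([sales_calls, competitor_visits, years_prescribing])
feature_names = ["sales_calls", "competitor_visits", "years_prescribing"]

# Simulated label: 1 = practitioner switches away from the branded drug.
switch_prob = 1 / (1 + np.exp(0.4 * sales_calls - 0.5 * competitor_visits))
y = (rng.random(n) < switch_prob).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Local explanations: SHAP values show which features drive each prediction.
shap_values = shap.TreeExplainer(model).shap_values(X[:100])
print("SHAP values for the first practitioner:", shap_values[0])

# Marginal effect of an extra sales call on predicted switching (partial dependence).
PartialDependenceDisplay.from_estimator(model, X, features=[0], feature_names=feature_names)
```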

    3 Key Takeaways:

    • Interpretability and XAI enable taking the right actions based on model predictions

    • A tutorial on how to combine an interpretable method with XAI to facilitate business validation of an AI solution and identify the best brand-switching prevention strategy for an international pharmaceutical company

    • Best practices for using interpretable methods and XAI in life sciences

    Rachel Alexander is CEO and founder of Omina Technologies, an AI company dedicated to ethical AI solutions, and recently won the ‘Artificial Intelligence Person of the Year’ award in Belgium. Rachel is responsible for the technological vision and strategy of Omina Technologies. During her physics studies at Indiana University she became fascinated by the world of artificial intelligence and machine learning. Rachel has lived and worked in Belgium for the past twenty years, devoting herself to helping companies navigate new technological advances and incorporate them seamlessly into their strategy. Before founding Omina Technologies, Rachel worked as Global IT Director at Studio100 and in management at CSC (Computer Sciences Corporation).

  • 13:15

    BREAKOUT SESSIONS: Roundtable Discussions with Speakers

  • Maithra Raghu

    ROUNDTABLE - Explainability Considerations for AI Design

    Maithra Raghu - Research Scientist - Google Brain

    Many AI explainability techniques focus on considerations around AI deployment. But another crucial challenge for AI systems is their complex design process, spanning data, model choices and algorithms for learning. In this discussion, we give an overview of some of the important considerations for using explainability to help with AI design. What might explainability in the design process be defined as? What are some of the approaches being developed, and what are their practical takeaways? What are the key open questions looking forward?

    Maithra Raghu is a Senior Research Scientist at Google Brain and completed her PhD in Computer Science at Cornell University. Her research broadly focuses on enabling effective collaboration between humans and AI, from design to deployment. Specifically, her work develops algorithms to gain insights into deep neural network representations and uses these insights to inform the design of AI systems and their interaction with human experts at deployment. Her work has been featured in many press outlets including The Washington Post, WIRED and Quanta Magazine. She has been named one of the Forbes 30 Under 30 in Science, a 2020 STAT Wunderkind, and a Rising Star in EECS.

  • Rachel Alexander

    ROUNDTABLE - When to Use Interpretable AI and When to Use Explainable AI?

    Rachel Alexander - CEO & Founder - Omina Technologies

    The session aims to cover:

    • How to decide the most appropriate interpretable method(s) for a given business context and stakeholder?

    • How to decide the most appropriate type of explanation (global/local, contrastive, attribution-based/feature importance, counterfactual, etc.) for a given business context and stakeholder?

  • Priti Padhy

    ROUNDTABLE - Why Context and Explainability Will Drive the Next Wave of AI

    Priti Padhy - Co-Founder & CEO - Cognino

    Statistical learning, including deep learning approaches, has delivered strong business outcomes, with varying degrees of limitation. In this session you will learn about the next wave of AI, which can connect billions of data points using causal inference, provide human-like contextual understanding and explainability while comprehending a changing world of events, and serve as an “intelligent core” for the organization.

    3 Key Takeaways:

    • Why context and explainability are the next wave of AI

    • Explainable AI is not optional in a regulated world

    • A new AI operating model can lead to intelligent enterprises

    Priti Padhy is the co-founder and CEO of Cognino, an Artificial Intelligence (AI) first organisation. Cognino has developed a unique explainable AI engine that adapts, learns and explains outcomes from vast amounts of data and creates knowledge. An entrepreneurial technologist at heart, Priti has spent the past 26 years building products and incubating high-value, complex and disruptive technology innovations with global engineering talent. Priti led one of the data and AI research groups at Microsoft for more than a decade. Prior to that, Priti was CTO at KPMG and has also worked as a scientist with RBS, Fujitsu and Atomic Energy of India. Priti holds an advanced AI degree from MIT and a BE in Computer Science & Engineering from Utkal University.

  • Dmitry Kazhdan

    ROUNDTABLE - Concept-Based Explainability

    Dmitry Kazhdan - PhD Student - University of Cambridge

    Currently, the most widely used XAI methods are feature importance methods (also referred to as saliency methods). Unfortunately, feature importance methods have been shown to be fragile to input perturbations, model parameter perturbations, and (crucially) confirmation bias. More recent work on XAI explores the use of concept-based explanations. These approaches provide explanations in terms of human-understandable units, rather than individual features, pixels, or characters (e.g., the concepts of a wheel and a door are important for the detection of cars). In this discussion, I intend to give an overview of the field, emphasising its significance, and discussing the state-of-the-art approaches.
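    To make the idea concrete, here is a simplified, TCAV-style sketch (with entirely synthetic activations and gradients, not the speaker's method): a concept activation vector is learned as the weight vector of a linear classifier separating concept examples from random ones, and a concept score is the fraction of class examples whose prediction is positively sensitive to that direction.

```python
# Simplified TCAV-style sketch with synthetic data: the activations, gradients
# and the "wheel" concept are invented purely to illustrate concept-based
# explanations; this is not a real model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 32                                    # dimensionality of a hidden layer

# Step 1: hidden-layer activations for examples WITH the concept
# (e.g. images containing wheels) and for random counterexamples.
concept_direction = rng.normal(size=d)
concept_acts = rng.normal(size=(200, d)) + concept_direction   # "wheel" examples
random_acts = rng.normal(size=(200, d))                        # random examples

# Step 2: a linear classifier separating the two sets; its weight vector is the
# Concept Activation Vector (CAV).
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 200 + [0] * 200)
cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
cav /= np.linalg.norm(cav)

# Step 3: for inputs of the class of interest (e.g. "car"), take the gradient of
# the class score w.r.t. the hidden layer and check alignment with the CAV.
# The gradients are simulated here as leaning toward the concept direction.
class_gradients = rng.normal(size=(500, d)) + 0.5 * concept_direction

# Concept score: fraction of inputs whose prediction would increase if their
# activations moved in the concept direction.
score = float(np.mean(class_gradients @ cav > 0))
print(f"fraction of 'car' inputs positively sensitive to the 'wheel' concept: {score:.2f}")
```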

    I am a second-year PhD student at the University of Cambridge, focusing on Explainable AI research. I am co-supervised by Prof. Pietro Lió and Prof. Mateja Jamnik. Currently, I am primarily interested in concept-based explainability (CbE) techniques and their applications. This includes applications of CbE to different types of Deep Learning models, such as GNNs, RNNs, CNNs, and RL models, as well as applications to specific domains, including medical imaging, drug discovery, and in-hospital mortality prediction.

  • 13:40

    PANEL: The Future of Explainable AI: What is the Business Impact of XAI, Accountability, and Transparency?

  • Mary Reagan

    Moderator

    Mary Reagan - Data Scientist - Fiddler

    Mary is currently a Data Scientist at Fiddler. She completed her PhD in Mineral Physics at Stanford University, where her thesis focused on understanding the effects of high pressures and temperatures on iron compounds’ spin states, deformation, and isotope fractionation. She joins us from DataKind, where she partnered with the NGO “Humans Against Trafficking” to develop an ML model that identifies teens who are vulnerable to being groomed for trafficking through social media.

  • Merve Hickok

    Panellist

    Merve Hickok - Founder - AIEthicist

    Merve Hickok is the founder of AIEthicist and Lighthouse Career Consulting. She is an independent consultant and trainer focused on capacity building in ethical and responsible AI and the governance of AI systems. Merve is a Senior Researcher at the Center for AI & Digital Policy; a founding editorial board member of the Springer Nature AI & Ethics journal; one of the 100 Brilliant Women in AI Ethics 2021; a Fellow at ForHumanity Center; a regional lead for the Women in AI Ethics Collective; and a member of a number of IEEE & IEC working groups that set global standards for autonomous systems.

  • Sara Hooker

    Panellist

    Sara Hooker - Research Scholar - Google Brain

    Sara Hooker is a research scholar at Google Brain doing deep learning research on reliable explanations of model predictions for black-box models. Her main research interests gravitate towards interpretability, predictive uncertainty, model compression and security. In 2014, she founded Delta Analytics, a non-profit dedicated to bringing technical capacity to help non-profits across the world use machine learning for good. She grew up in Africa, in Mozambique, Lesotho, Swaziland, South Africa, and Kenya. Her family now lives in Monrovia, Liberia.

  • Narine Kokhlikyan

    Panellist

    Narine Kokhlikyan - Research Scientist - Facebook

    Narine is a Research Scientist at Facebook AI focusing on explainable AI. She is the main creator of Captum, the PyTorch library for model interpretability. Narine studied at the Karlsruhe Institute of Technology in Germany and was a Research Visitor at Carnegie Mellon University. Her research focuses on explainable AI, cognitive systems, and natural language processing. She is also an enthusiastic contributor to open-source software packages such as scikit-learn and Apache Spark.

  • 14:15

    MAKE CONNECTIONS: Meet with Attendees Virtually for 1:1 Conversations and Group Discussions over Similar Topics and Interests

  • 14:30

    END OF SUMMIT
