Clara Bove-Ziemann

PRESENTATION: How to Explain an ML Prediction to Non-Expert Users?

Machine Learning has provided new business opportunities in the insurance industry, but its adoption is currently limited by the difficulty of explaining the rationale behind the predictions it provides. In our latest research, we explore how to enhance, for non-expert users, one type of explanation extracted from interpretability methods: local feature importance explanations. We propose design principles for presenting these explanations to non-expert users and are applying them to a car insurance smart pricing interface. We present preliminary observations collected during a pilot study using an online A/B test to measure objective understanding, perceived understanding and perceived usefulness of our designed explanations.
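For readers unfamiliar with the term, a local feature importance explanation attributes a single prediction to the contributions of individual input features. Below is a minimal sketch of how such an explanation could be extracted; the SHAP library, the toy model and the car-insurance feature names are assumptions for illustration only, not the method or data used in the study.

```python
# Minimal sketch: extracting a local feature importance explanation
# for one predicted premium. SHAP, the model and the features are
# illustrative assumptions, not the setup described in the talk.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Toy car-insurance data: driver age, vehicle age, annual mileage
rng = np.random.default_rng(0)
X = rng.uniform([18, 0, 1_000], [80, 20, 40_000], size=(500, 3))
y = 300 + 5 * (80 - X[:, 0]) + 10 * X[:, 1] + 0.01 * X[:, 2]

model = GradientBoostingRegressor().fit(X, y)

# Local explanation for a single policyholder's predicted premium
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

feature_names = ["driver_age", "vehicle_age", "annual_mileage"]
for name, contribution in zip(feature_names, shap_values[0]):
    print(f"{name}: {contribution:+.2f}")
```

The per-feature contributions printed at the end are the raw material that the design principles discussed in the talk aim to present in a form non-expert users can understand.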

ROUNDTABLE: Challenges of ML Interpretability and Business Opportunities

The rise of Machine Learning (ML) has provided new business opportunities in the insurance industry. ML can, for instance, help improve pricing strategies, fraud detection, claim management or the overall customer experience. Yet its adoption is currently limited by the difficulty of explaining the rationale behind ML predictions. What can be explained from ML models? What do people need to have explained to them? How should explanations be presented? These are some of the challenges we want to address in ML Interpretability.

I am currently working as a Researcher at AXA and am a PhD Candidate at Laboratoire Informatique de Paris 6 (LIP6). I conduct research on eXplainable AI (XAI) and User Experience (UX) in Machine Learning. I graduated with a Master's degree in Design in 2015 and worked for several years as a User Experience Designer in various fields before starting research on Human+AI interactions.
