Cynthia Rudin

Stop Explaining Black-Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead

With the widespread use of machine learning, there have been serious societal consequences from using black-box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice. Explanations for black-box models are not reliable and can be misleading. If we use interpretable machine learning models instead, they come with their own explanations, which are faithful to what the model actually computes. I will give several reasons why we should use interpretable models, the most compelling of which is that for high-stakes decisions, interpretable models do not seem to lose accuracy relative to black boxes. In fact, the opposite is true: when we understand what the models are doing, we can troubleshoot them and ultimately gain accuracy.

Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University, where she directs the Prediction Analysis Lab, whose main focus is interpretable machine learning. She is also an associate director of the Statistical and Applied Mathematical Sciences Institute (SAMSI). Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo and a PhD from Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award and a fellow of the American Statistical Association and the Institute of Mathematical Statistics.
