Maithra Raghu

Explainability Considerations for AI Design

Many AI explainability techniques focus on considerations around AI deployment. But another crucial challenge is the complex process of designing AI systems, spanning data, model choices, and learning algorithms. In this discussion, we give an overview of some of the important ways explainability can help with AI design. How might explainability in the design process be defined? What approaches are being developed, and what are their practical takeaways? What are the key open questions looking forward?

Maithra Raghu is a Senior Research Scientist at Google Brain and completed her PhD in Computer Science at Cornell University. Her research broadly focuses on enabling effective collaboration between humans and AI, from design to deployment. Specifically, her work develops algorithms to gain insights into deep neural network representations and uses these insights to inform the design of AI systems and their interaction with human experts at deployment. Her work has been featured in many press outlets, including The Washington Post, WIRED, and Quanta Magazine. She has been named one of the Forbes 30 Under 30 in Science, a 2020 STAT Wunderkind, and a Rising Star in EECS.
