Anna Rohrbach

Explainable AI for Addressing Bias and Improving User Trust

Explainable Artificial Intelligence (XAI) has a long history of research and has recently re-emerged as an active area of investigation in the Deep Learning community. With the growing popularity and success of Machine Learning and, especially, Deep Learning techniques, the community has been striving to open up these “black boxes”. From the engineering perspective, interpretability could provide ways to better understand, debug, and improve models by exposing unwanted behavior, e.g., reliance on spurious correlations in the data. From the user perspective, explainability may address a crucial condition for wider adoption: trust. In this talk I will first show how visual explanations can help expose harmful biases, encoded by humans in the training data or model design, in the context of gender prediction in visual captioning. Next, I will talk about our work on explainable and advisable driving models. Here, we develop models that can both generate textual explanations of their actions and incorporate user advice in the form of observation-action rules.

3 Key Takeaways:

1) Our proposed approach to visual captioning achieves a lower error rate in gender prediction while encouraging the model to “look” at people rather than rely on spurious contextual cues.

2) Making deep models explain their decisions in natural language does not degrade task performance and may in fact improve it.

3) Incorporating human knowledge (or advice) into deep models leads to better-performing, more interpretable models that earn higher human trust.

I am a Research Scientist at UC Berkeley, working with Prof. Trevor Darrell. I completed my PhD at the Max Planck Institute for Informatics under the supervision of Prof. Bernt Schiele. My research is at the intersection of vision and language. I am interested in a variety of tasks, including image and video description, visual grounding, and visual question answering. Recently, I have been focusing on building explainable models and addressing bias in existing vision-and-language models.
