Dmitry Kazhdan

Concept-Based Explainability

Currently, the most widely used XAI methods are feature importance methods (also referred to as saliency methods). Unfortunately, feature importance methods have been shown to be fragile to input perturbations and model parameter perturbations, and (crucially) to be susceptible to confirmation bias. More recent work on XAI explores the use of concept-based explanations. These approaches provide explanations in terms of human-understandable units, rather than individual features, pixels, or characters (e.g., the concepts of a wheel and a door are important for the detection of cars). In this discussion, I intend to give an overview of the field, emphasise its significance, and discuss state-of-the-art approaches.
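To make the idea concrete, the sketch below shows a concept-bottleneck-style model: the network first predicts human-understandable concepts and then makes its final prediction from those concepts alone, so the concept activations themselves act as the explanation. This is a minimal illustrative sketch only; the class name, layer sizes, and training details are assumptions and are not taken from any specific paper or codebase.

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Illustrative concept-bottleneck-style architecture (hypothetical example)."""

    def __init__(self, n_features: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Maps raw inputs to human-understandable concepts (e.g. "wheel", "door").
        self.concept_encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_concepts)
        )
        # Maps the predicted concepts to the task label (e.g. "car").
        self.label_predictor = nn.Linear(n_concepts, n_classes)

    def forward(self, x: torch.Tensor):
        concepts = torch.sigmoid(self.concept_encoder(x))  # concept activations in [0, 1]
        label_logits = self.label_predictor(concepts)       # prediction uses concepts only
        return concepts, label_logits

# Usage: the returned concept activations serve as the explanation for each prediction.
model = ConceptBottleneckModel(n_features=128, n_concepts=4, n_classes=2)
x = torch.randn(1, 128)
concepts, label_logits = model(x)
```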

I am a second-year PhD student at the University of Cambridge, focusing on Explainable AI research. I am co-supervised by Prof. Pietro Lió and Prof. Mateja Jamnik. Currently, I am primarily interested in concept-based explainability (CbE) techniques and their applications. This includes applications of CbE to different types of Deep Learning models, such as GNNs, RNNs, CNNs, and RL models. It also includes applications of CbE to specific domains, including medical imaging, drug discovery, and in-hospital mortality prediction.
