LIME - A Step Towards Better Interpretability of Deep Neural Networks
In this Deep Dive session we will take a look at some of the research done in deep neural networks. The current lack of interpretability of deep networks makes them very difficult to use in critical industries like healthcare and finance, where black-box models are a no-go. A lot of research has recently been done on how to interpret these high-dimensional feature interactions in latent space. We will do a deep dive on LIME (Local Interpretable Model-agnostic Explanations), a recent method that provides local model interpretability. The output of LIME is a list of features that contributed to a particular prediction, and it helps determine which features would change that prediction. We will discuss an implementation as well as the challenges still faced by this approach.
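To give a flavour of the idea before the session, here is a minimal sketch of LIME's core recipe for a single numeric feature: perturb the input near the point being explained, query the black-box model on the perturbed samples, weight each sample by its proximity to the original point, and fit a weighted linear surrogate whose coefficient serves as the local explanation. The function names and parameters here are illustrative, not the API of the actual `lime` library.

```python
import math
import random

def black_box(x):
    # Stand-in for an opaque model: globally non-linear,
    # but approximately linear in any small neighbourhood.
    return x ** 2

def lime_1d(f, x0, num_samples=500, spread=0.5, kernel_width=0.25):
    """Toy LIME-style explanation for one feature:
    returns the slope of a locally weighted linear surrogate,
    i.e. how much the feature drives the prediction near x0."""
    # 1. Perturb the instance around x0.
    xs = [x0 + random.uniform(-spread, spread) for _ in range(num_samples)]
    # 2. Query the black-box model on each perturbed sample.
    ys = [f(x) for x in xs]
    # 3. Weight samples by proximity to x0 (Gaussian kernel).
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2) for x in xs]
    # 4. Fit a weighted least-squares line; its slope is the explanation.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    num = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    den = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    return num / den

random.seed(0)
slope = lime_1d(black_box, 3.0)
# For f(x) = x^2 the true local gradient at x0 = 3 is 6,
# so the surrogate slope should land close to 6.
```

The real LIME generalises this to many features (with a sparsity penalty so only a handful of features appear in the explanation) and to non-numeric inputs such as superpixels in images or token presence in text.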
Roshini has a background in AI and electronics from the University of Edinburgh. She has more than eight years of experience applying machine learning techniques to design scalable, robust solutions in the fields of e-commerce, travel and finance. She has worked on user behaviour modelling, predictive models, recommendation systems and generative adversarial models with deep learning frameworks, and is currently working on models in finance to assess risk. She is very interested in understanding how AI techniques can be applied across industries to make them more efficient and accurate. She is also passionate about encouraging more women to enter and lead in this field, and runs the London chapter of Women in Machine Learning and Data Science. In her free time she dabbles in creating artwork with neural style transfer and travel photography, showing how easily AI can be integrated into day-to-day activities and enhance our creativity.