Deep Understanding: Steps towards Interpreting the Internals of Neural Networks
Deep Neural Networks have been extraordinarily successful in a variety of tasks, from Computer Vision to Natural Language Processing to playing Go. However, understanding and interpreting how they work has lagged behind, making the development of faster and more principled methods extremely challenging. In this talk, I outline some of the approaches being developed to address this fundamental gap.
Maithra is a PhD student in the Department of Computer Science at Cornell University, advised by Jon Kleinberg. Her research interests include AI, Machine Learning (particularly Deep Learning), and Theory. Her broad research goal is to better bridge the gap between theory and practice, especially in Machine Learning. Presently, she is working to bring greater interpretability to empirical observations in Deep Learning through a mixture of experiments and theoretical analysis. Before Cornell, she was at the University of Cambridge (Trinity College), where she completed her Bachelor's and Master's (Part III of the Tripos) in Mathematics. Maithra has completed two internships at Google: with the Google Brain team and with Google Research and Machine Intelligence.