Anh Nguyen

Understanding Deep Neural Networks

Understanding a deep learning model's inner workings and decisions is increasingly important, especially for life-critical applications, e.g., in medical diagnosis or criminal justice. In this talk, I will discuss our recent findings on some interesting failures of state-of-the-art image classifiers. For example, simply rotating and placing a familiar, training-set object at random in front of the camera is enough to bring classification accuracy down from 77.5% to 3%. This notorious brittleness of neural networks calls for better explanations of why a model makes a particular decision. In this quest, I will share recent work showing that interpretability methods can be unreliable and sensitive to hyperparameters, and how harnessing generative models to synthesize counterfactual intervention samples can improve the robustness and accuracy of attribution methods.

Anh completed his Ph.D. in 2017 at the University of Wyoming, working with Jeff Clune and Jason Yosinski. His current research focuses on deep learning, specifically explainable artificial intelligence and generative models. He has also worked as an ML research intern at Apple, Geometric Intelligence (now Uber AI Labs), and Bosch. Anh’s research has won three Best Paper Awards (at CVPR, GECCO, and the ICML Visualization workshop) and two Best Research Video Awards (at IJCAI and AAAI).
