Making Deep Neural Networks more Robust
Deep Neural Networks are very accurate at classifying images, but they lack the robustness guarantees of traditional, less accurate machine learning techniques. In particular, they are vulnerable to adversarial examples, and their predictions do not come with confidence estimates. This lack of robustness is an obstacle to deploying these models in domains where errors can be costly. In this talk we will show how mathematical tools that have been very popular in image processing can be adapted to give state-of-the-art robustness, as well as robustness guarantees, for neural network models.
Adam Oberman is a professor in the Department of Mathematics and Statistics at McGill University. He received his bachelor's degree from the University of Toronto and his PhD from the University of Chicago, and was previously faculty at Simon Fraser University. His research prior to 2017 was on partial differential equations, scientific computing, and optimal transportation. During a Simons Fellowship at UCLA, he started a project applying PDEs to deep learning and is now working on adversarial robustness for DNNs.