Adversarial Machine Learning: Ensuring Security of ML Models and Sensitive Data
As machine learning (ML) has seen dramatic growth in industrial applications, questions have emerged about what trust and security mean in the context of ML. I will give an overview of adversarial ML as a research area and explore some of the attack and defense strategies developed in recent literature. In particular, I will showcase use cases and implementations of differential privacy and how it can protect the sensitive data used to train ML models.
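As a concrete illustration of the differential privacy idea mentioned above (this sketch is mine, not part of the talk), the Laplace mechanism releases a numeric statistic about a dataset while bounding how much any single record can influence the output. The function name `laplace_mechanism` and the income example below are hypothetical choices for illustration.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy version of a numeric query result.

    Adds Laplace noise with scale sensitivity/epsilon, the classic
    epsilon-differentially-private mechanism for numeric queries.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Illustrative data: privately release the mean income of a dataset.
incomes = np.array([52_000, 61_000, 47_000, 85_000, 39_000], dtype=float)

# Clip each record to [0, 100_000] so the mean's sensitivity
# (the max change from altering one record) is 100_000 / n.
clipped = np.clip(incomes, 0, 100_000)
sensitivity = 100_000 / len(clipped)
private_mean = laplace_mechanism(clipped.mean(), sensitivity, epsilon=1.0)
```

Smaller values of `epsilon` give stronger privacy but noisier answers; training ML models under differential privacy applies the same noise-calibration idea to gradient updates rather than to a single released statistic.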
Christopher is a researcher in the CleverHans Lab at the Vector Institute, exploring adversarial ML, in particular membership inference attacks, differential privacy, and adversarial examples. He is also a researcher with the Aspuru-Guzik lab at the Vector Institute, exploring applications of Bayesian models and active learning in molecular discovery. Christopher has worked at Georgian Partners LP, where he developed open-source solutions for differential privacy and AutoML, and at Intel, where he researched and developed a deep neural network bug triager.