Nicholas Frosst

Outlines, Explanation, and Deflecting Adversarial Examples

Adversarial examples have been a topic of interest since they were first discovered, as they illustrate an interesting failure mode of neural networks. Many researchers have attempted to solve this problem by creating detection methods. However, these mechanisms are inevitably broken shortly after release by a defense-aware attack. One approach to getting ahead of this cycle is to create a model that, when adversarially attacked, yields inputs that resemble the target class, thereby deflecting the attack. I will talk about recent work done under Geoff Hinton at Google Brain that takes such an approach.
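For readers unfamiliar with the attacks the abstract refers to, below is a minimal sketch of the classic fast gradient sign method (FGSM), one of the simplest ways to construct an adversarial example. It is provided only as background; it is not the deflection method discussed in the talk, and the function name and epsilon value are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Treat the input as a variable and compute the loss gradient w.r.t. it.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Nudge every pixel in the direction that increases the loss,
        # producing an image that looks unchanged but is misclassified.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

A defense-aware attacker simply folds the defense itself into the loss being maximized, which is why detection-based defenses tend to be broken soon after release.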

Nicholas Frosst is a research engineer working at Google Brain in Geoff Hinton's Toronto Brain team. He received his undergraduate degree in computer science and cognitive science from the University of Toronto. He focuses on capsule networks, adversarial examples, and understanding representation space.
