Nicolas Papernot

Security and Privacy in Machine Learning

There is growing recognition that machine learning introduces new security and privacy issues in software systems. In this talk, we first map the attack surface of systems that deploy machine learning. We then describe how an attacker can force models to make wrong predictions with very little information about the victim, and demonstrate that these attacks are practical against existing machine-learning-as-a-service platforms. Finally, we discuss a framework for learning privately. The approach combines, in a black-box fashion, multiple models trained on disjoint datasets, such as records from different subsets of users.
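The abstract does not spell out how the black-box combination releases predictions. A common way to realize this idea is to have each model trained on a disjoint data partition vote on a label and to release only a noised version of the vote tally, so that no single partition's model can noticeably change the outcome. The sketch below is a minimal illustration of that noisy-vote aggregation; the function name, Laplace noise mechanism, and parameters are assumptions for illustration, not necessarily the exact method described in the talk.

```python
import numpy as np

def noisy_aggregate(teacher_votes, num_classes, noise_scale=1.0, rng=None):
    # Aggregate label votes from models trained on disjoint data partitions.
    # teacher_votes: shape (num_teachers,), each entry is one model's
    # predicted class index for a single query.
    # Laplace noise on the vote histogram (an illustrative assumption)
    # limits how much any single model, and hence any single partition
    # of the training data, can influence the released label.
    rng = np.random.default_rng() if rng is None else rng
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(loc=0.0, scale=noise_scale, size=num_classes)
    return int(np.argmax(counts))

# Illustrative usage: five models vote on a three-class query.
votes = np.array([2, 2, 1, 2, 0])
print(noisy_aggregate(votes, num_classes=3))
```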

Nicolas Papernot is a PhD student in Computer Science and Engineering working with Dr. Patrick McDaniel at the Pennsylvania State University. His research interests lie at the intersection of computer security, privacy and machine learning. He is supported by a Google PhD Fellowship in Security. In 2016, he received his M.S. in Computer Science and Engineering from the Pennsylvania State University and his M.S. in Engineering Sciences from the Ecole Centrale de Lyon.
