In this talk, I will present our approach to investigating how machine learning models leak information about the individual data records on which they were trained. My focus will be on the fundamental membership inference attack: given a data record and black-box access to a model, determine whether the record was in the model's training dataset. I will demonstrate how to build a successful inference attack against different classification models, e.g., those trained by commercial "machine learning as a service" providers such as Google and Amazon.
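For intuition, the sketch below illustrates the shadow-model idea behind such an attack: train shadow models on data we control, observe how they behave on records inside versus outside their own training sets, train an "attack" classifier on those observations, and then point it at the target model's black-box outputs. This is a rough illustration under stated assumptions, not the exact method from the talk; the synthetic dataset, the choice of random forests, and all hyperparameters are illustrative, using scikit-learn.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data standing in for the distribution the target was trained on.
X, y = make_classification(n_samples=6000, n_features=20,
                           n_classes=2, random_state=0)

# Target model: the black box under attack. We only get its prediction vectors.
X_tgt_in, X_rest, y_tgt_in, y_rest = train_test_split(
    X, y, train_size=1000, random_state=0)
target = RandomForestClassifier(random_state=0).fit(X_tgt_in, y_tgt_in)

# Shadow models: mimic the target on data we control, so we know exactly
# which records were "in" vs "out" of each shadow's training set.
attack_X, attack_y = [], []
for s in range(5):
    X_in, X_out, y_in, y_out = train_test_split(
        X_rest, y_rest, train_size=500, test_size=500, random_state=s)
    shadow = RandomForestClassifier(random_state=s).fit(X_in, y_in)
    # Attack features: the shadow's prediction vector on a record;
    # attack label: 1 if that record was in the shadow's training set.
    attack_X.append(shadow.predict_proba(X_in))
    attack_y.append(np.ones(len(X_in)))
    attack_X.append(shadow.predict_proba(X_out))
    attack_y.append(np.zeros(len(X_out)))

# Attack model: learns to tell "member" from "non-member" prediction vectors.
attack = RandomForestClassifier(random_state=0).fit(
    np.vstack(attack_X), np.concatenate(attack_y))

# Query the target black-box and guess membership from its outputs alone.
members = attack.predict(target.predict_proba(X_tgt_in[:200]))
nonmembers = attack.predict(target.predict_proba(X_rest[-200:]))
print("guessed member rate on true members:    ", members.mean())
print("guessed member rate on true non-members:", nonmembers.mean())
```

Because the target overfits its training records, its prediction vectors are systematically more confident on members, and that gap is what the attack model picks up; the full attack is more careful (e.g., per-class attack models and disjoint shadow training sets), but the mechanism is the same.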
Reza Shokri is a postdoctoral researcher at Cornell University. His research focuses on data and computational privacy across a variety of applications, from location-based services and recommender systems to web search and machine learning. His work on quantifying location privacy was recognized as a runner-up for the annual Award for Outstanding Research in Privacy Enhancing Technologies (PET Award). Recently, he has focused on privacy-preserving generative models for synthetic data and on privacy in machine learning.