Title: Adversarial Attacks Against Machine Learning Systems

Abstract: I'll give an overview of recent work in the community on attacks against ML systems, with a focus on computer vision. Modern computer vision systems are remarkably accurate on the tasks they are trained on, e.g., face recognition, but surprisingly brittle to adversarial attacks. Adversarial attacks are specially crafted inputs designed to fool the model (think of optical illusions); sometimes a change to an input image that is imperceptible to a human can cause the model to completely and confidently change its prediction. This has important reliability, fairness, and security implications as ML systems are deployed at scale across applications in medicine, autonomous vehicles, robotics, manufacturing, etc. I'll describe the underlying mathematical models and approaches for creating and defending against such attacks.
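To make the idea concrete, here is a minimal sketch of one classic attack, the Fast Gradient Sign Method (FGSM): perturb each input feature by a small step eps in the direction that increases the model's loss. The toy logistic model, weights, and eps below are illustrative assumptions, not from the talk; real attacks target deep networks, but the mechanics are the same.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM for a logistic model: x_adv = x + eps * sign(d loss / d x)."""
    p = sigmoid(w @ x + b)       # predicted probability of class 1
    grad_x = (p - y) * w         # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy 2-feature "image", correctly classified as class 1 before the attack.
w = np.array([5.0, -5.0]); b = 0.0
x = np.array([0.6, 0.5]); y = 1

x_adv = fgsm(x, y, w, b, eps=0.1)
print(w @ x + b > 0)       # True: original input is classified as class 1
print(w @ x_adv + b > 0)   # False: a perturbation of at most 0.1 per feature flips it
```

The key point is that the attacker needs only gradient access (or a surrogate model) and a tiny per-feature budget eps to flip a confident prediction.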


  • OpenAI blog on adversarial machine learning: https://blog.openai.com/adversarial-example-research/
  • Deep learning and security workshop, IEEE S&P workshop 2018: https://www.ieee-security.org/TC/SPW2018/DLS/#

Bio: Subhransu Maji is an Assistant Professor in the College of Information and Computer Sciences at the University of Massachusetts, Amherst. Previously, he was a Research Assistant Professor at TTI Chicago, a philanthropically endowed academic computer science institute on the University of Chicago campus. He obtained his Ph.D. from the University of California, Berkeley in 2011, and his B.Tech. in Computer Science and Engineering from IIT Kanpur in 2006. His research focuses on developing techniques for visual understanding and reasoning with an eye for efficiency and accuracy. He received the NSF CAREER award in 2018, a best paper honorable mention at CVPR 2018, and a best paper award at WACV 2015. His research is funded by the National Science Foundation and generous gifts from Adobe, Facebook, and NVIDIA. For more details visit: https://people.cs.umass.edu/~smaji/