Adversarial machine learning lets us probe the techniques an ML algorithm uses, gauge its weak points, and exploit them. ML is great at identifying and classifying patterns, but an attacker can use the gray areas to influence (or even subvert) the pattern-matching algorithms.
Can machine learning be used to break/circumvent machine learning?
Machine learning has taken the world by storm as computing power has gotten cheaper and cheaper. Running large-scale computations is now within reach of most developers and organizations. Building complex systems on top of machine learning algorithms allows specialized classification of inputs: either an input matches the pattern or it does not.
However, looking at these many different implementations from a security perspective offers us a unique vantage point: adversarial machine learning. What happens when an attacker has “intimate” knowledge of the classification algorithm being used?
Adversarial machine learning is a set of algorithms and techniques that lets an attacker (or a defensive researcher) estimate the gray areas in an ML algorithm's pattern matching, with exploitation in mind.
Quoting Morpheus (The Matrix, 1999): “Some rules can be bent, others can be broken.”
As more and more systems (especially security systems) turn to ML to better identify and mitigate threats, learning where the edges of detection lie becomes more important.
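To make the idea of "bending the rules" concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) from "Explaining and harnessing adversarial examples" (Goodfellow et al., listed below), applied to a toy logistic-regression classifier. The weights, input, and epsilon are hypothetical values chosen purely for illustration; real attacks target deep networks with the same gradient-sign idea.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier (hypothetical weights, for illustration only).
w = np.array([1.0, -1.0])
b = 0.0

def predict(x):
    """Return class 1 if the logit is positive, else class 0."""
    return int(w @ x + b > 0)

def fgsm(x, y_true, eps):
    """Fast Gradient Sign Method: nudge every feature of x by +/- eps
    in the direction that increases the loss for the true label."""
    p = sigmoid(w @ x + b)            # model's probability of class 1
    grad = (p - y_true) * w           # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad)    # step along the gradient's sign

x = np.array([0.3, 0.1])              # correctly classified as class 1
x_adv = fgsm(x, y_true=1, eps=0.15)   # small, bounded perturbation

print(predict(x), predict(x_adv))     # prints: 1 0 -- the label flips
```

The perturbation is tiny and bounded per feature, yet it pushes the input across the decision boundary: exactly the kind of gray area the articles below explore at scale.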
- PassGAN: A Deep Learning Approach for Password Guessing
- Adversarial examples for evaluating reading comprehension systems
- Universal adversarial perturbations, Video
- An introduction to Artificial Intelligence
- When DNNs go wrong – adversarial examples and what we can learn from them
- Machine Learning in the Presence of Adversaries
- Pattern Recognition and Applications Lab: Adversarial Machine Learning
- Deep neural networks are easily fooled, Nguyen et al., 2015
- Practical black-box attacks against deep learning systems using adversarial examples, Papernot et al., 2016
- Adversarial examples in the physical world, Goodfellow et al., 2017
- Explaining and harnessing adversarial examples, Goodfellow et al., 2015
- Distillation as a defense to adversarial perturbations against deep neural networks, Papernot et al., 2016
- Vulnerability of deep reinforcement learning to policy induction attacks, Behzadan & Munir, 2017
- Adversarial attacks on neural network policies, Huang et al. 2017
- Attacking Machine Learning with Adversarial Examples
- Intriguing properties of neural networks
- Robust Physical-World Attacks on Deep Learning Models
- Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition
- Towards the Science of Security and Privacy in Machine Learning
- cleverhans source code
- Clever Hans
- Awesome - Most Cited Deep Learning Papers
- 8 Lessons from 20 Years of Hype Cycles
- DEF CON 25 (2017) - Weaponizing Machine Learning - Petro, Morris
- Evading next-gen AV using A.I.
- For better machine-based malware analysis, add a slice of LIME
- BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain