Offensive AI: "Some rules can be bent, others can be broken"

Abstract

Adversarial Machine Learning lets us turn the techniques used by ML algorithms back on them: gauging their weak points and exploiting them. ML is great at identifying and classifying patterns, but an attacker can use the gray areas to influence (or even subvert) the pattern-matching algorithms.

Location: Rishon Leziyon, Israel

Can machine learning be used to break/circumvent machine learning?

Machine learning has taken the world by storm as computing power has gotten cheaper and cheaper. Running large-scale computations is now within the reach of most developers and organizations. Building complex systems on top of machine learning algorithms allows specialized classification of inputs - an input either matches the pattern or it does not.

However, looking at these many implementations from a security perspective offers us a unique vantage point - adversarial machine learning. What would happen if an attacker had “intimate” knowledge of the classification algorithm used?

Adversarial Machine Learning is a group of algorithms and techniques that allow an attacker (or a defensive researcher) to map out the gray areas in the pattern-matching abilities of a machine learning algorithm, with the goal of exploiting them.
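To make the idea concrete, below is a minimal, self-contained sketch (not code from the talk) of one classic technique: the Fast Gradient Sign Method from “Explaining and harnessing adversarial examples” (Goodfellow et al., 2015, reference 12 below), applied against a toy logistic-regression classifier. The model, the input, and the perturbation budget are all illustrative assumptions.

```python
# Illustrative sketch of the Fast Gradient Sign Method (FGSM) on a toy
# logistic "classifier". Everything here (weights, input, epsilon) is an
# assumption for demonstration purposes, not code from the talk.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: p(class 1 | x) = sigmoid(w . x + b)
w = rng.normal(size=10)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

# An input the model confidently places in class 1
x = w.copy()
print(f"original confidence for class 1: {predict(x):.3f}")

# FGSM: step the input in the direction of the sign of the gradient of the
# loss with respect to the input. For the logistic loss with true label y = 1,
# d(loss)/dx = (p - y) * w.
y = 1.0
grad_x = (predict(x) - y) * w
epsilon = 2.0                              # perturbation budget (assumed)
x_adv = x + epsilon * np.sign(grad_x)      # per-feature nudges that raise the loss
print(f"confidence after perturbation:    {predict(x_adv):.3f}")
```

The core trick is the same in deep networks: small, targeted per-feature nudges in the direction that increases the model's loss push the input across the decision boundary, even though the classifier performed well on the original input.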

Quoting Morpheus (The Matrix, 1999): “Some rules can be bent, others can be broken.”

As more and more systems (especially security products) turn to ML to better identify and mitigate threats, learning where the edges of detection lie becomes ever more important.

References

  1. PassGAN: A Deep Learning Approach for Password Guessing
  2. Adversarial examples for evaluating reading comprehension systems
  3. Universal adversarial perturbations, Video
  4. Awesome-AI-Security
  5. An introduction to Artificial Intelligence
  6. When DNNs go wrong – adversarial examples and what we can learn from them
  7. Machine Learning in the Presence of Adversaries
  8. Pattern Recognition and Applications Lab: Adversarial Machine Learning
  9. Deep neural networks are easily fooled, Nguyen et al., 2015
  10. Practical black-box attacks against deep learning systems using adversarial examples, Papernot et al., 2016
  11. Adversarial examples in the physical world, Goodfellow et al., 2017
  12. Explaining and harnessing adversarial examples, Goodfellow et al., 2015
  13. Distillation as a defense to adversarial perturbations against deep neural networks, Papernot et al., 2016
  14. Vulnerability of deep reinforcement learning to policy induction attacks, Behzadan & Munir, 2017
  15. Adversarial attacks on neural network policies, Huang et al. 2017
  16. Attacking Machine Learning with Adversarial Examples
  17. Intriguing properties of neural networks
  18. Robust Physical-World Attacks on Deep Learning Models
  19. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition
  20. Towards the Science of Security and Privacy in Machine Learning
  21. cleverhans source code
  22. Clever Hans
  23. Awesome - Most Cited Deep Learning Papers
  24. 8 Lessons from 20 Years of Hype Cycles
  25. DEF CON 25 (2017) - Weaponizing Machine Learning - Petro, Morris
  26. Evading next-gen AV using A.I.
  27. For better machine-based malware analysis, add a slice of LIME
  28. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Guy Barnhart-Magen
Cyber Consultant, BSidesTLV Chairman