Security of Machine Learning Systems

Guy Barnhart-Magen

October 22, 2019

Using Machines to exploit Machines - harnessing AI to accelerate exploitation

abstract

Event Location Date Links
t2 Helsinki, Finland 25 Oct, 2019
BSidesLV Las Vegas, USA 7 Aug, 2019 slides video
Hack in Paris Paris, France 17 Jun, 2019 slides video
SAS Singapore, Singapore 10 Apr, 2019 slides video whitepaper

JARVIS never saw it coming: Hacking machine learning (ML) in speech, text and face recognition - and frankly, everywhere else

abstract

Event Location Date Links
t2 Helsinki, Finland 24 Oct, 2018 slides
44CON London, UK 13 Sep, 2018 slides video
AppSec IL Tel Aviv, Israel 6 Sep, 2018 video
DefCon 303/SkyTalks Las Vegas, USA 12 Aug, 2018 underground talk, no publications
DefCon Crypto & Privacy Village Las Vegas, USA 9 Aug, 2018 slides video
BSidesLV Las Vegas, USA 7 Aug, 2018 slides video
Intel Meetup Series Tel Aviv, Israel 31 Jul, 2018 slides

Intelligent systems, but are they secure?

abstract

Event Location Date Links
Internet & Mobile World Bucharest, Romania 4 Oct, 2018 slides
Team8 CISO Delegation Herzliya, Israel 18 Jun, 2018 slides

Offensive AI: “Some rules can be bent, others can be broken”

abstract

Algorithmic/Adversarial attacks on machine learning

Event Location Date Links
AppSec IL Rishon Lezion, Israel 18 Oct, 2017 slides video

Ambiguous Facial Recognition

Academic paper on using ambiguous facial recognition to generate codes for use as 2FA

Event Location Date Links
Optical Engineering Holon, Israel 21 Oct, 2011
Israel Machine Vision Conference Airport City, Israel 28 Jan, 2010

Machine2Machine

Imagine yourself looking through a myriad of crash dumps, trying to find that one exploitable bug that has escaped you for days! And if that wasn’t difficult enough, the defenders know that they can make us chase ghosts and red herrings, making our lives waaaay more difficult (see “Chaff Bugs: Deterring Attackers by Making Software Buggier”). Offensive research is a great field in which to apply Machine Learning (ML), where pattern matching and insight are often needed at scale.

We can leverage ML to accelerate the work of the offensive researcher looking for fuzzing → crashes → exploit chains. Current techniques are built from sets of heuristics; we hypothesized that we can train an ML system to do as well as these heuristics, only faster and more accurately. Machine Learning is not a panacea for every problem, but an exploitable crash has multiple data points (features) that can help us determine its exploitability. The presence of certain primitives on the call stack, or the output of libraries and compile-time options such as libdislocator and AddressSanitizer, can be an indicator of “exploitability”, offering us a path to a greater, more generalized insight. A demo will be shown live on stage (and, if the gods permit, a tool released)!
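As an illustration of the approach (a minimal sketch, not the tool from the talk), such a classifier could be trained on a hand-labelled table of crash features; the file name, feature columns, and “exploitable” label below are placeholders:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical feature table: one row per crash dump, with columns such as
# "writes_to_pc", "asan_heap_overflow" or "stack_depth", plus an
# "exploitable" label taken from manual triage.
crashes = pd.read_csv("triaged_crashes.csv")
X = crashes.drop(columns=["exploitable"])
y = crashes["exploitable"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Train a simple classifier to reproduce the heuristics' verdicts, then
# check how well it generalizes to crashes it has not seen.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```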

Jarvis

Exploits, Backdoors, and Hacks: words we do not commonly hear when speaking of Machine Learning (ML). In this talk, I will present the relatively new field of hacking and manipulating machine learning systems and the potential these techniques hold for active offensive research. The study of Adversarial ML allows us to leverage the techniques used by these algorithms to find their weak points and exploit them in order to achieve a variety of goals.

In other words, while ML is great at identifying and classifying patterns, an attacker can take advantage of this to take control of the system. This talk is an extension of research done by many people, including presenters at DefCon, CCC, and others – a live demo will be shown on stage!
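To give a flavour of the techniques involved, here is a minimal sketch of one well-known evasion attack (FGSM), not necessarily the demo shown on stage; the model, image, and label are assumed to be any differentiable classifier and a sample it normally labels correctly:

```python
import torch
import torch.nn.functional as F

# Minimal FGSM-style sketch: nudge an input in the direction of the loss
# gradient so that a trained classifier mislabels it.
def fgsm_attack(model, image, label, epsilon=0.03):
    # Work on a copy of the input and track gradients w.r.t. the pixels.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the sign of the gradient: a small, human-imperceptible change
    # that can push the sample across the model's decision boundary.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```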

Garbage In, RCE Out :-)

Intelligent Systems

Artificial Intelligence (AI) is the newest addition to a crowded IT toolset. In this talk we will explore how intelligent systems add new attack surfaces to the organization, the new attack methods they enable, and the targets attackers pursue in the AI landscape.

Takeaways

Offensive AI

Offensive AI allows us to leverage the techniques used by ML algorithms to gauge their weak points and exploit them. ML is great at identifying and classifying patterns, but an attacker can use the gray areas to influence (or even subvert) the pattern-matching algorithms.
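A toy sketch of what those gray areas look like in practice (synthetic data and a placeholder model, not material from the talk): inputs where a trained classifier is barely confident are natural starting points for evasion.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for any deployed classifier.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# "Gray zone": samples the model scores close to 50/50. These are the points
# an attacker can nudge most cheaply to flip the decision.
proba = clf.predict_proba(X)[:, 1]
gray_zone = np.where(np.abs(proba - 0.5) < 0.05)[0]
print(f"{len(gray_zone)} of {len(X)} samples sit in the model's gray zone")
```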