Using Machines to exploit Machines - harnessing AI to accelerate exploitation
|Helsinki, Finland||25 Oct, 2019|
|Las Vegas, USA||7 Aug, 2019||slides video|
|Paris, France||17 Jun, 2019||slides video|
|Singapore, Singapore||10 Apr, 2019||slides video whitepaper|
JARVIS never saw it coming: Hacking machine learning (ML) in speech, text and face recognition - and frankly, everywhere else
|Helsinki, Finland||24 Oct, 2018||slides|
|London, UK||13 Sep, 2018||slides video|
|Tel Aviv, Israel||6 Sep, 2018||video|
|Las Vegas, USA||12 Aug, 2018||underground talk, no publications|
|Las Vegas, USA||9 Aug, 2018||slides video|
|Las Vegas, USA||7 Aug, 2018||slides video|
|Tel Aviv, Israel||31 Jul, 2018||slides|
Intelligent systems, but are they secure?
|Bucharest, Romania||4 Oct, 2018||slides|
|Herzliya, Israel||18 Jun, 2018||slides|
Offensive AI: “Some rules can be bent, others can be broken”
Algorithmic/Adversarial attacks on machine learning
|Rishon Lezion, Israel||18 Oct, 2017||slides video|
Ambiguous Facial Recognition
Academic paper on using ambiguous facial recognition to generate codes for use as 2FA
|Holon, Israel||21 Oct, 2011|
|Airport City, Israel||28 Jan, 2010|
Imagine yourself looking through a myriad of crash dumps, trying to find that one exploitable bug that has escaped you for days! And if that weren't difficult enough, the defenders know they can make us chase ghosts and red herrings, making our lives waaaay more difficult (see Chaff Bugs: Deterring Attackers by Making Software Buggier). Offensive research is a great field in which to apply Machine Learning (ML), where pattern matching and insight are often needed at scale.
We can leverage ML to accelerate the work of the offensive researcher looking for fuzzing→crashes→exploit chains. Current techniques are built on sets of heuristics. We hypothesized that we can train an ML system to perform as well as these heuristics, but faster and more accurately. Machine Learning is not the panacea for every problem, but an exploitable crash has multiple data points (features) that can help us determine its exploitability. The presence of certain primitives on the call stack, or the output of libraries and compile-time options like libdislocator and AddressSanitizer, among others, can be indicators of "exploitability", offering us a path to a greater, more generalized insight. A demo will be shown live on stage (and if the gods permit, a tool released)!
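The feature-based scoring idea above can be sketched as a toy classifier. The feature names, log substrings, and weights below are illustrative stand-ins, not the trained model or tooling from the talk:

```python
import math

# Hypothetical features extracted from a crash report; the substrings
# mimic AddressSanitizer-style output but are purely illustrative.
def extract_features(report: str) -> dict:
    return {
        "asan_heap_overflow": int("heap-buffer-overflow" in report),
        "pc_near_null": int("pc 0x00000000" in report),
        "write_access": int("WRITE of size" in report),
        "stack_smash": int("stack-buffer-overflow" in report),
    }

# Hand-set toy weights standing in for a model trained on labeled crashes.
WEIGHTS = {"asan_heap_overflow": 2.0, "pc_near_null": -1.5,
           "write_access": 1.2, "stack_smash": 1.8}
BIAS = -1.0

def exploitability_score(report: str) -> float:
    """Logistic score in [0, 1]: higher means 'more likely exploitable'."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in extract_features(report).items())
    return 1.0 / (1.0 + math.exp(-z))

crash = "==1234==ERROR: AddressSanitizer: heap-buffer-overflow ... WRITE of size 8"
print(exploitability_score(crash))        # heap write: scores high
print(exploitability_score("pc 0x00000000"))  # bare null deref: scores low
```

In practice the weights would come from training on a labeled corpus of triaged crashes rather than being hand-set, but the shape of the pipeline — extract features from the crash artifact, score them — is the same.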
Exploits, Backdoors, and Hacks: words we do not commonly hear when speaking of Machine Learning (ML). In this talk, I will present the relatively new field of hacking and manipulating machine learning systems, and the potential these techniques hold for active offensive research. The study of Adversarial ML allows us to leverage the techniques used by these algorithms to find weak points and exploit them in order to achieve:
- Unexpected consequences (why did it decide this rifle is a banana?)
- Data leakage (how did they know Joe has diabetes?)
- Memory corruption and other exploitation techniques (boom! RCE)
- Influence over the output
In other words, while ML is great at identifying and classifying patterns, an attacker can take advantage of this and take control of the system. This talk extends research done by many people, including presenters at DEF CON, CCC, and elsewhere – a live demo will be shown on stage!
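As a rough illustration of the "influence the output" point, here is a minimal FGSM-style perturbation against a toy linear classifier. The weights and input are made up for illustration and are not from any system demoed in the talk:

```python
import math

# Toy linear "classifier": P(class=1) = sigmoid(w . x).
W = [1.5, -2.0, 0.5]

def predict(x):
    z = sum(wi * xi for wi, xi in zip(W, x))
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y_true, eps=0.5):
    # For logistic loss on a linear model, the input gradient is
    # (p - y) * w; FGSM nudges each feature by eps in the sign of
    # that gradient to *increase* the loss.
    p = predict(x)
    grad = [(p - y_true) * wi for wi in W]
    return [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, grad)]

x = [1.0, 0.2, 0.3]          # confidently classified as class 1
x_adv = fgsm(x, y_true=1)    # small per-feature nudge flips the decision
print(predict(x), predict(x_adv))
```

Against a deep network the gradient comes from backpropagation rather than a closed form, but the attack is the same one-step sign move — which is why an imperceptible nudge can turn a "rifle" into a "banana".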
Garbage In, RCE Out :-)
Artificial Intelligence (AI) is the newest addition to a crowded IT toolset. In this talk we will explore the new attack surfaces intelligent systems add to the organization, the new attack methods, and the targets attackers pursue in the AI landscape.
- Start having conversations about Security and AI
- Machine learning needs to be protected against attackers
- Checks and balances, don’t trust blindly
Offensive AI allows us to leverage the techniques used by ML algorithms to gauge their weak points and exploit them. ML is great at identifying and classifying patterns, but an attacker can use the gray areas to influence (or even subvert) the pattern-matching algorithms.