Imagine yourself looking through myriad crash dumps, trying to find that one exploitable bug that has escaped you for days!
And if that wasn’t difficult enough, the defenders know that they can make us chase ghosts and red herrings, making our lives waaaay more difficult (see “Chaff Bugs: Deterring Attackers by Making Software Buggier”).
Offensive research is a great field in which to apply Machine Learning (ML), where pattern matching and insight are often needed at scale. We can leverage ML to accelerate the work of the offensive researcher moving through the fuzzing → crashes → exploit chain.
Current triage techniques are built on sets of heuristics. We hypothesized that an ML system can be trained to match these heuristics while being faster and more accurate.
Machine Learning is not a panacea for every problem, but an exploitable crash carries multiple data points (features) that can help us determine its exploitability. The presence of certain primitives on the call stack, or the output of instrumentation such as libdislocator and AddressSanitizer (among other compile-time options), can be indicators of “exploitability”, offering us a path to greater, more generalized insight.
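To make the idea concrete, here is a minimal sketch of turning a crash report into features. The keyword patterns and hand-set weights below are purely illustrative assumptions (a real system would extract many more signals and learn the weights from labeled crashes), but they show the shape of the feature-extraction step:

```python
import re

# Illustrative signals one might pull from an AddressSanitizer-style
# crash report; patterns and weights here are hypothetical stand-ins
# for features a trained model would actually learn.
FEATURES = {
    "write_primitive": re.compile(r"WRITE of size", re.I),
    "heap_overflow":   re.compile(r"heap-buffer-overflow", re.I),
    "use_after_free":  re.compile(r"heap-use-after-free", re.I),
}

def extract_features(crash_log: str) -> dict:
    """Map a raw crash report to binary features."""
    return {name: bool(rx.search(crash_log)) for name, rx in FEATURES.items()}

def exploitability_score(features: dict) -> float:
    # Hand-set weights standing in for learned model parameters.
    weights = {"write_primitive": 0.4, "heap_overflow": 0.35,
               "use_after_free": 0.25}
    return sum(w for name, w in weights.items() if features[name])

report = """==1234==ERROR: AddressSanitizer: heap-buffer-overflow on address ...
WRITE of size 8 at 0x6020000000f8 thread T0"""
feats = extract_features(report)
score = exploitability_score(feats)
```

A crash exhibiting a write primitive inside a heap overflow would score higher than a plain read crash, which is exactly the kind of ranking signal that lets a triage pipeline surface the interesting dumps first.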
Defenders can find a lot of value in this work as well: we can help developers isolate and focus on the crashes that will lead to exploitation, instead of wading through countless crashes and analyzing them manually.
In this talk we will explore the current state of the art in ML for offensive exploration and present our ongoing work to automatically categorize and determine the exploitability of crashes and bugs, accelerating the triage process tremendously.
A demo will be shown live on stage (and, if the gods permit, a tool released)!