Course Objectives
At the end of the course, participants will:
- Understand the security threat landscape
- Understand the top security issues surrounding machine learning
- Understand how to take preventive measures
- Learn about the Secure Development Lifecycle (SDL) for machine learning products
- Gain knowledge of the security considerations when deploying production models
Course Overview
With the explosive growth of machine learning applications and products, their security is becoming a major area of interest for many organizations. Machine learning security covers how such products affect an organization's security posture, what threats they introduce, and how to protect these systems from adversaries. This course outlines the state of the art in machine learning security and how the field has evolved. It is intended for developers and managers who need to make strategic decisions about their machine learning products, whether as a vendor or as a customer.
Course Duration
2-day instructor-led training
Course Outlines
- Understanding the ML development flow
- Understanding the difference between training and inference, and the security paradigm for each ecosystem
- Understanding ML System Risks & Challenges
- Reviewing the top 10 security threats for ML systems
- Understanding the challenges of ML security
Subjects
- Threat analysis for machine learning systems
- Security issues in deploying machine learning systems
- IP protection of ML systems
- Weaknesses of machine learning systems
- Current attacks on machine learning systems
- Active areas of ML security research, future, and Q&A session
Overview of Machine Learning Tasks
- Supervised Learning
- Unsupervised Learning
- Reinforcement Learning
- Deep Learning
- Real-world applications of Machine Learning
- Lab 1: Deep Learning for face recognition
- Machine Learning empirical process
- Theoretical model of Machine Learning
- Application of Machine Learning in Cybersecurity
- Lab 2: Support Vector Machine for IoT malware threat hunting
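To give a flavour of the kind of exercise Lab 2 builds toward, the sketch below trains a linear SVM by hinge-loss subgradient descent on a hypothetical two-feature dataset. The features and data are invented purely for illustration; this is not the lab material itself.

```python
import numpy as np

# Hypothetical two-feature samples (e.g. file entropy, suspicious API-call count);
# both features and clusters are invented for this sketch.
rng = np.random.default_rng(0)
benign = rng.normal([3.0, 2.0], 0.5, size=(50, 2))
malware = rng.normal([7.0, 8.0], 0.5, size=(50, 2))
X = np.vstack([benign, malware])
y = np.hstack([-np.ones(50), np.ones(50)])  # SVM labels: -1 benign, +1 malware

# Linear SVM via hinge-loss subgradient descent
w, b = np.zeros(2), 0.0
lr, lam = 0.01, 0.01
for _ in range(200):
    for i in range(len(X)):
        if y[i] * (X[i] @ w + b) < 1:    # margin violated: hinge gradient step
            w += lr * (y[i] * X[i] - lam * w)
            b += lr * y[i]
        else:                            # margin satisfied: only regularization
            w -= lr * lam * w

accuracy = float((np.sign(X @ w + b) == y).mean())
```

On these well-separated synthetic clusters the learned hyperplane classifies essentially all samples correctly; real malware feature spaces are far noisier, which is where the lab's threat-hunting discussion picks up.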
The Machine Learning Threat Model
- The ML attack surface
- Adversarial capabilities
- Adversarial objectives
- ML threat modelling
- Lab 3: Identifying attack surface and threat modelling of a Deep Learning agent for traffic sign detection
- ML training in adversarial settings
- ML inference in adversarial settings
- ML Differential Privacy and model thefts
- Lab 4: Data exfiltration from a trained Decision Tree model
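The theme of Lab 4 can be previewed in miniature: a decision tree's split thresholds sit between adjacent training values, so reading the fitted thresholds back out of the model brackets individual training records. A toy one-feature decision stump (values invented for illustration, not the lab code) makes the mechanism visible:

```python
import numpy as np

# Hypothetical sensitive 1-D training feature (sorted), e.g. account balances
X = np.array([30_000, 35_000, 40_000, 90_000, 95_000, 99_000], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

# Fit a stump exhaustively: candidate thresholds are midpoints between
# adjacent training values, and we keep the one with fewest errors.
candidates = (X[:-1] + X[1:]) / 2
errors = [int(np.sum((X > t).astype(int) != y)) for t in candidates]
best_t = float(candidates[int(np.argmin(errors))])
```

Here `best_t` lands exactly between the two boundary training records (40,000 and 90,000). In a deep, overfitted tree, every leaf is boxed in by such thresholds, which is what makes training-data exfiltration from the model parameters possible.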
Adversarial Poisoning Attacks
- Adversarial poisoning attacks methodology
- Poisoning attacks techniques
- Lab 5: Poisoning attacks against a text-recognition Deep Learning agent
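A minimal sketch of the label-flipping technique this module covers (all data synthetic; not Lab 5's text-recognition agent): flipping a large fraction of one class's training labels collapses the model's confidence on that class, even where the hard predictions survive.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 1.0, size=(100, 2)),   # class-0 cluster
               rng.normal(2.0, 1.0, size=(100, 2))])   # class-1 cluster
y = np.hstack([np.zeros(100), np.ones(100)])

def train_logreg(X, y, steps=500, lr=0.1):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def proba(w, b, x):
    return float(1.0 / (1.0 + np.exp(-(x @ w + b))))

w_c, b_c = train_logreg(X, y)

# poison: flip 40% of the class-0 labels to class 1 before training
y_poisoned = y.copy()
y_poisoned[rng.choice(100, size=40, replace=False)] = 1.0
w_p, b_p = train_logreg(X, y_poisoned)

centre = np.array([-2.0, -2.0])          # heart of the class-0 region
p_clean = proba(w_c, b_c, centre)        # near 0: confidently class 0
p_poisoned = proba(w_p, b_p, centre)     # pushed toward 0.4 by the flipped labels
</coding>```

The clean model assigns the class-0 centre a class-1 probability near zero; after poisoning that probability climbs toward the flipped-label fraction, leaving the victim model fragile around the whole class-0 region.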
Adversarial Evasion Attacks
- Adversarial evasion attacks methodology
- Evasion attacks techniques
- Lab 6: Evading an object recognition Deep Learning agent
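A minimal sketch of the evasion idea (synthetic data; not Lab 6's object-recognition agent): step a correctly classified input against the sign of the model's input gradient, FGSM-style, under an L-infinity budget. For a linear model the input gradient of the logit is simply the weight vector:

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2.0, 1.0, size=(100, 2)),
               rng.normal(2.0, 1.0, size=(100, 2))])
y = np.hstack([np.zeros(100), np.ones(100)])

# plain logistic regression trained by gradient descent
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

x = np.array([2.0, 2.0])        # a typical class-1 input
eps = 4.0                       # attacker's L-infinity budget
x_adv = x - eps * np.sign(w)    # FGSM step against the input gradient (= w here)

pred = bool(x @ w + b > 0)          # original prediction: class 1
pred_adv = bool(x_adv @ w + b > 0)  # flipped by the perturbation
```

Against deep networks the same recipe applies with the gradient obtained by backpropagation; the budget `eps` is what makes the adversarial input look benign to a human while fooling the model.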
Adversarial Attacks on Malware Detection Systems
- Machine learning for malware analysis
- Poisoning and evasion attacks against ML-based malware detection systems
- Lab 7: Evading a deep convolutional neural network agent for malware detection

ML Differential Privacy and Model Thefts
- Foundations of differential privacy
- Data inference and model theft attacks
- Lab 8: Stealing an overfitted machine learning model to extract credit card information
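The model-theft idea can be sketched with a hypothetical label-only victim (its weights below are invented; a real attacker never sees them): query it on random inputs, train a surrogate on the stolen labels, and measure how often the copy agrees with the original.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical deployed victim: a linear model exposing only hard labels
w_v, b_v = np.array([1.0, -2.0]), 0.5
def victim_label(X):
    return (X @ w_v + b_v > 0).astype(float)

# attacker: query random inputs and keep the stolen labels
X_query = rng.uniform(-3.0, 3.0, size=(500, 2))
y_stolen = victim_label(X_query)

# fit a surrogate logistic regression on the query/label pairs
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_query @ w + b)))
    w -= 0.5 * X_query.T @ (p - y_stolen) / len(y_stolen)
    b -= 0.5 * (p - y_stolen).mean()

# agreement of the stolen copy with the victim on fresh inputs
X_test = rng.uniform(-3.0, 3.0, size=(1000, 2))
agreement = float(((X_test @ w + b > 0).astype(float)
                   == victim_label(X_test)).mean())
```

A few hundred queries are enough to clone this toy victim almost exactly; the lab extends the idea to overfitted models, where the stolen copy additionally leaks memorized training records.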
Penetration Testing of ML Models
- Penetration testing methodology of ML engines
- Input sanitization of ML models
- Payload injection attacks
- Lab 9: Input sanitization testing of an ML image recognition system using anomaly detection
- An adversarial learning defense reference model
- Detecting adversarial attacks
- Prevention techniques for adversarial attacks
- Privacy-preserving ML models
- The ML Deception Defense-in-Depth (ML-DiD) framework
- Lab 10: Architecting a defense-in-depth model in an AI-centered enterprise
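One simple anomaly-based sanitizer of the kind this module discusses (a sketch under invented data, not the course's reference model): score each incoming input by its Mahalanobis distance from the training distribution and reject anything beyond a calibrated quantile threshold before it ever reaches the model.

```python
import numpy as np

rng = np.random.default_rng(5)
train = rng.normal(0.0, 1.0, size=(500, 2))   # stand-in for clean training inputs

mu = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train.T))

def score(x):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))    # Mahalanobis distance to the data

# calibrate the threshold so ~99% of legitimate training inputs pass
threshold = float(np.quantile([score(x) for x in train], 0.99))

accepted = score(np.array([0.1, -0.3])) <= threshold   # in-distribution input
rejected = score(np.array([8.0, 8.0])) > threshold     # out-of-distribution probe
```

This catches inputs far from the training manifold, but note its limits: adversarial examples crafted to stay close to the data distribution will pass, which is why the course pairs sanitization with detection and defense-in-depth rather than relying on it alone.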
Resources