Overview of Machine Learning Tasks

  • Supervised Learning
  • Unsupervised Learning
  • Reinforcement Learning
  • Deep Learning
  • Real-world applications of Machine Learning
  • Lab 1: Deep Learning for face recognition
  • Machine Learning empirical process
  • Theoretical model of Machine Learning
  • Application of Machine Learning in Cybersecurity
  • Lab 2: Support Vector Machine for IoT malware threat hunting (sketched below)
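
As a preview of Lab 2, the following is a minimal sketch of a Support Vector Machine classifier for malware threat hunting. The feature set and data are synthetic stand-ins for IoT network-flow features, not the lab dataset:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.metrics import classification_report

    # Synthetic stand-in for IoT network-flow features (packet sizes, durations, ...);
    # a real threat-hunting pipeline would use actual device telemetry.
    rng = np.random.default_rng(0)
    benign = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
    malicious = rng.normal(loc=1.5, scale=1.2, size=(500, 8))
    X = np.vstack([benign, malicious])
    y = np.hstack([np.zeros(500), np.ones(500)])

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

    # Scale the features, fit an RBF-kernel SVM, and report detection quality
    scaler = StandardScaler().fit(X_train)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(scaler.transform(X_train), y_train)
    print(classification_report(y_test, clf.predict(scaler.transform(X_test))))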

The Machine Learning Threat Model

  • The ML attack surface
  • Adversarial capabilities
  • Adversarial objectives
  • ML threat modelling
  • Lab 3: Identifying attack surface and threat modelling of a Deep Learning agent for traffic sign detection
  • ML training in adversarial settings
  • ML inference in adversarial settings
  • ML Differential Privacy and model theft
  • Lab 4: Data exfiltration from a trained Decision Tree model (a model-theft sketch follows this list)
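
To make the model-theft item above concrete, here is a minimal sketch of extracting a surrogate copy of a trained Decision Tree, assuming only black-box query access to a hypothetical victim model (the dataset and query budget are illustrative):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    # "Victim" model trained on private data (toy stand-in for a deployed model)
    X_priv, y_priv = make_classification(n_samples=1000, n_features=10, random_state=0)
    victim = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_priv, y_priv)

    # The attacker labels random probe inputs with the victim's black-box predictions
    rng = np.random.default_rng(1)
    X_query = rng.normal(size=(5000, 10))
    y_query = victim.predict(X_query)

    # A surrogate trained on (probe, victim-label) pairs approximates the stolen model
    surrogate = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_query, y_query)

    X_eval = rng.normal(size=(2000, 10))
    agreement = np.mean(surrogate.predict(X_eval) == victim.predict(X_eval))
    print(f"Surrogate/victim agreement on fresh inputs: {agreement:.2%}")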

Adversarial Poisoning Attacks

  • Adversarial poisoning attack methodology
  • Poisoning attack techniques (a label-flipping sketch follows this list)
  • Lab 5: Poisoning attacks against a text-recognition Deep Learning agent
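
The simplest poisoning technique to demonstrate is label flipping: an attacker who controls part of the training data corrupts its labels so the learned model degrades. A minimal sketch on a synthetic dataset with logistic regression, not the lab's text-recognition agent:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # The attacker flips the labels of a fraction of the training set
    rng = np.random.default_rng(0)
    flip_idx = rng.choice(len(y_train), size=int(0.2 * len(y_train)), replace=False)
    y_poisoned = y_train.copy()
    y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

    # Compare a model trained on clean labels with one trained on poisoned labels
    clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print("clean test accuracy:   ", clean_model.score(X_test, y_test))
    print("poisoned test accuracy:", poisoned_model.score(X_test, y_test))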

Adversarial Evasion Attacks

  • Adversarial evasion attack methodology
  • Evasion attack techniques (an FGSM sketch follows this list)
  • Lab 6: Evading an object recognition Deep Learning agent
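
A minimal sketch of one standard evasion technique, the Fast Gradient Sign Method (FGSM), applied to a toy PyTorch classifier. The model, data, and epsilon are illustrative assumptions rather than the Lab 6 setup:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Toy two-class dataset and a small MLP standing in for an object recognizer
    X = torch.cat([torch.randn(200, 2) - 2.0, torch.randn(200, 2) + 2.0])
    y = torch.cat([torch.zeros(200, dtype=torch.long), torch.ones(200, dtype=torch.long)])
    model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))

    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(200):
        opt.zero_grad()
        F.cross_entropy(model(X), y).backward()
        opt.step()

    def fgsm(model, x, label, eps):
        # Perturb the input in the direction of the sign of the loss gradient
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), label).backward()
        return (x + eps * x.grad.sign()).detach()

    x0, y0 = X[:1], y[:1]                  # one clean sample
    x_adv = fgsm(model, x0, y0, eps=2.5)   # large eps chosen for a clear flip on this toy problem
    print("clean prediction:      ", model(x0).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())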

Adversarial Attacks on Malware Detection Systems

  • Machine Learning for malware analysis (byte-image representation sketched after this list)
  • Poisoning and evasion attacks against ML-based malware detection systems
  • Lab 7: Evading a deep convolutional neural network agent for malware detection
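
One representation often used by deep convolutional malware detectors is the raw bytes of a binary reshaped into a grayscale image. A minimal sketch of that preprocessing step (the file path is hypothetical); evasion in this setting typically perturbs or appends bytes so the image changes while the executable still runs:

    import numpy as np

    def bytes_to_image(path, width=256):
        """Read a binary's raw bytes and reshape them into a 2-D grayscale array,
        the kind of input an image-style CNN malware detector consumes."""
        data = np.fromfile(path, dtype=np.uint8)
        rows = len(data) // width
        return data[: rows * width].reshape(rows, width)

    # img = bytes_to_image("suspicious_sample.bin")   # hypothetical path; pixel values are 0-255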

ML Differential Privacy and Model Theft

  • Foundations of differential privacy (Laplace mechanism sketched after this list)
  • Data inference and model theft attacks
  • Lab 8: Stealing an overfitted machine learning model to extract credit card information
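
The basic building block of differential privacy is the Laplace mechanism: a numeric query answer is released with noise whose scale is the query's sensitivity divided by the privacy budget epsilon. A minimal sketch on a toy dataset (the data and epsilon values are illustrative):

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, rng=np.random.default_rng()):
        # Release true_value with Laplace noise of scale sensitivity/epsilon (epsilon-DP)
        return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Example: a counting query over a private dataset has sensitivity 1
    ages = np.array([23, 35, 41, 52, 29, 61, 47])
    true_count = int(np.sum(ages > 40))
    for eps in (0.1, 1.0, 10.0):
        noisy = laplace_mechanism(true_count, sensitivity=1, epsilon=eps)
        print(f"epsilon={eps}: noisy count = {noisy:.2f}")

Smaller values of epsilon give stronger privacy guarantees at the cost of noisier released answers.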

Penetration Testing of ML Models

  • Penetration testing methodology of ML engines
  • Input sanitization of ML models
  • Payload injection attacks
  • Lab 9: Input sanitization testing of an ML image recognition system using anomaly detection (sketched at the end of this list)
  • An adversarial learning defense reference model
  • Detecting adversarial attacks
  • Prevention techniques for adversarial attacks
  • Privacy-preserving ML models
  • The ML Deception Defense-in-Depth (ML-DiD) Framework
  • Lab 10: Architecting a defense-in-depth model in an AI-centered enterprise
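
As a closing illustration of input sanitization by anomaly detection (Lab 9), here is a minimal sketch that fits an Isolation Forest on known-good inputs and rejects outliers before they ever reach the ML model. The feature dimensions and contamination rate are illustrative assumptions:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Fit an anomaly detector on features of known-good inputs; at inference time,
    # inputs flagged as outliers are rejected before they reach the ML model.
    rng = np.random.default_rng(0)
    X_clean = rng.normal(loc=0.0, scale=1.0, size=(1000, 16))
    detector = IsolationForest(contamination=0.01, random_state=0).fit(X_clean)

    def sanitize(batch, detector):
        # Keep only the rows the detector considers in-distribution (+1 = inlier)
        return batch[detector.predict(batch) == 1]

    # A batch mixing in-distribution inputs with obviously out-of-distribution ones
    suspicious = np.vstack([rng.normal(size=(5, 16)), rng.normal(loc=8.0, size=(5, 16))])
    print("inputs passed to the model:", len(sanitize(suspicious, detector)), "of", len(suspicious))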