Windows Defender ATP: Building resiliency against adversarial attacks on machine learning

Just as humans are susceptible to social engineering, machines are susceptible to tampering, which makes machine learning vulnerable to adversarial attacks. Researchers have successfully attacked deep learning models used to classify malware, completely flipping their predictions while accessing only the output label the model returns for attacker-supplied input samples. We have also seen attackers attempt to poison the training data for our ML models by sending fake telemetry, trying to fool the classifier into believing that a given set of malware samples is actually benign. How do we detect and protect against such attacks? Is there a way to make our models more robust to future attacks?
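
To make the label-only setting above concrete (this is an illustrative sketch, not the specific research or Windows Defender internals referenced here), the toy attack below queries a classifier as a pure black box: the attacker never sees scores or gradients, only the hard label for each query, and keeps switching on attacker-controllable, functionality-preserving features until the verdict flips. All names (`query_label`, `label_only_evasion`, `toy_model`) and the 8-bit feature layout are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def query_label(model, x):
    """The attacker's only access: the model's hard output label
    (0 = benign, 1 = malware) for a sample of their choosing."""
    return int(model(x))

def label_only_evasion(model, x_malware, addable_bits, max_queries=100):
    """Label-only black-box evasion: repeatedly switch on one
    attacker-controllable, functionality-preserving feature (think
    padding bytes or unused imports) and re-query the hard label
    until the verdict flips to benign."""
    x = x_malware.copy()
    for _ in range(max_queries):
        if query_label(model, x) == 0:       # verdict flipped to benign
            return x
        x[rng.choice(addable_bits)] = 1      # add a benign-looking feature
    return None                              # attack failed within budget

# Hypothetical stand-in classifier over 8 binary features:
# bits 0-3 are "malicious" indicators, bits 4-7 are "benign" indicators.
def toy_model(x):
    return int(x[:4].sum() - x[4:].sum() >= 2)

x0 = np.array([1, 1, 1, 0, 0, 0, 0, 0])     # initially labeled malware
adv = label_only_evasion(toy_model, x0, addable_bits=list(range(4, 8)))
print("adversarial variant:", adv, "label:", toy_model(adv))
```

A real attack would restrict the perturbations to changes that preserve the malware's behavior, which is why defenses cannot rely on hiding model scores alone.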

This session covers several strategies we used to make the machine learning models in Windows Defender more resilient to tampering.

Artificial Intelligence, Data Scientists (i.e., people working with advanced analytics), Modern Workplace, Security