Paper ID | MLSP-39.6
Paper Title | Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff
Authors | Eitan Borgnia, Valeriia Cherepanova, Liam Fowl, Amin Ghiasi, University of Maryland, College Park, United States; Jonas Geiping, University of Siegen, Germany; Micah Goldblum, Tom Goldstein, Arjun Gupta, University of Maryland, College Park, United States
Session | MLSP-39: Adversarial Machine Learning
Location | Gather.Town
Session Time | Friday, 11 June, 11:30 - 12:15
Presentation Time | Friday, 11 June, 11:30 - 12:15
Presentation | Poster
Topic | Machine Learning for Signal Processing: [MLR-DEEP] Deep learning techniques
Abstract | Data poisoning and backdoor attacks manipulate victim models by maliciously modifying training data. Underscoring this growing threat, a recent survey of industry professionals revealed heightened concern in the private sector regarding data poisoning. Many previous defenses against poisoning either fail in the face of increasingly strong attacks or significantly degrade performance. However, we find that strong data augmentations, such as mixup and CutMix, can significantly diminish the threat of poisoning and backdoor attacks without trading off performance. We further verify the effectiveness of this simple defense against adaptive poisoning methods, and we compare against baselines including the popular differentially private SGD (DP-SGD) defense. In the context of backdoors, CutMix greatly mitigates the attack while simultaneously increasing validation accuracy by 9%.
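The defense described in the abstract relies on strong augmentations such as mixup and CutMix. As a rough sketch of what those augmentations do, the PyTorch snippet below follows the original mixup (Zhang et al., 2018) and CutMix (Yun et al., 2019) formulations; it is not the authors' released code, and the function names and the Beta-distribution parameter alpha are illustrative assumptions.

```python
import numpy as np
import torch

def mixup(images, labels, alpha=1.0):
    """Mixup (Zhang et al., 2018): convex combination of two training examples.

    Returns the mixed batch plus both label sets and the mixing weight lam,
    so the loss can be computed as lam * loss(y_a) + (1 - lam) * loss(y_b).
    """
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(images.size(0))
    mixed = lam * images + (1 - lam) * images[perm]
    return mixed, labels, labels[perm], lam

def cutmix(images, labels, alpha=1.0):
    """CutMix (Yun et al., 2019): paste a random rectangular patch from a
    shuffled copy of the batch. The label weight is the unpatched area fraction.
    """
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(images.size(0))
    _, _, h, w = images.shape
    # Patch dimensions chosen so its area fraction is roughly (1 - lam).
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)
    mixed = images.clone()
    mixed[:, :, y1:y2, x1:x2] = images[perm, :, y1:y2, x1:x2]
    # Recompute lam from the actual patch area after clipping at the borders.
    lam = 1 - ((y2 - y1) * (x2 - x1)) / (h * w)
    return mixed, labels, labels[perm], lam
```

Training on an augmented batch then minimizes lam * criterion(output, y_a) + (1 - lam) * criterion(output, y_b). Intuitively, every (possibly poisoned) example is blended with a randomly drawn partner each epoch, which dilutes whatever perturbation or trigger pattern an attacker planted; this is the mechanism the paper evaluates against poisoning and backdoor attacks.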