Paper ID | MLSP-39.1
Paper Title | Adversarial Learning via Probabilistic Proximity Analysis
Authors | Jarrod Hollis, Jinsub Kim, Raviv Raich, Oregon State University, United States
Session | MLSP-39: Adversarial Machine Learning
Location | Gather.Town
Session Time | Friday, 11 June, 11:30 - 12:15
Presentation Time | Friday, 11 June, 11:30 - 12:15
Presentation | Poster
Topic | Machine Learning for Signal Processing: [MLR-DEEP] Deep learning techniques
Abstract | We consider the problem of designing a robust classifier in the presence of an adversary who aims to degrade classification performance by carefully falsifying the test instance. We propose a model-agnostic defense in which the true class label of the falsified instance is inferred by analyzing its proximity to each class, measured with respect to the class-conditional data distributions. We present a k-nearest-neighbors-type approach that performs a sample-based approximation of this probabilistic proximity analysis. The proposed approach is evaluated on three real-world datasets in a game-theoretic setting, in which the adversary is assumed to optimize the attack design against the employed defense. In this game-theoretic evaluation, the proposed defense significantly outperforms the benchmarks across a range of attack scenarios, demonstrating its efficacy against optimally designed attacks.
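To illustrate the sample-based proximity idea described in the abstract, the sketch below scores a (possibly falsified) test instance by its average distance to the k nearest training samples of each class and predicts the closest class. This is a minimal, hypothetical sketch: the function names, the Euclidean distance, and the averaged k-NN distance as the proximity measure are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def knn_class_proximity(x_test, X_train, y_train, k=5):
    # Hypothetical sketch: per-class proximity of x_test, approximated by the
    # mean Euclidean distance to its k nearest training samples of that class.
    scores = {}
    for c in np.unique(y_train):
        X_c = X_train[y_train == c]                      # samples of class c
        dists = np.linalg.norm(X_c - x_test, axis=1)     # distances to x_test
        k_eff = min(k, len(dists))                       # guard small classes
        scores[c] = float(np.sort(dists)[:k_eff].mean()) # k-NN average distance
    return scores

def robust_predict(x_test, X_train, y_train, k=5):
    # Predict the class to which the (possibly falsified) instance is closest.
    scores = knn_class_proximity(x_test, X_train, y_train, k)
    return min(scores, key=scores.get)

# Toy usage with two synthetic Gaussian classes (illustrative only).
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)
x_adv = np.array([1.4, 1.6])  # a possibly perturbed test instance
print(robust_predict(x_adv, X_train, y_train, k=7))
```

In a full defense, such a proximity score would stand in for the classifier's own decision rule, which is what makes the approach model-agnostic; the game-theoretic evaluation in the paper additionally assumes the adversary optimizes its attack against this defense.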