Paper ID | MLSP-39.2
Paper Title | CLASS AWARE ROBUST TRAINING
Authors | Zhikang Xia, Bin Chen, Tao Dai, Shutao Xia, Tsinghua Shenzhen International Graduate School, Tsinghua University, China
Session | MLSP-39: Adversarial Machine Learning
Location | Gather.Town
Session Time | Friday, 11 June, 11:30 - 12:15
Presentation Time | Friday, 11 June, 11:30 - 12:15
Presentation | Poster
Topic | Machine Learning for Signal Processing: [MLR-DEEP] Deep learning techniques
Abstract |
Adversarial training (AT) has been one of the most effective ways to defend against adversarial attacks. However, existing AT variants exhibit large imbalances in robust accuracy across classes, which can harm the robustness of the class(es) that matter most in some real-world applications. For instance, in medical image recognition, diseased cells are far more important than healthy ones. For a given task, the important class is usually known a priori. To improve the robust accuracy of the important class(es), we are the first to propose an adversarial training method that takes this class imbalance into account, which we term Class-Aware Robust Training (CART). CART significantly increases the robustness of the important class(es) through an optional weighted combination of the original adversarial example generation objective and that of the important class. Extensive experiments on three benchmark datasets verify the efficacy of CART in enhancing the robust accuracy of important classes while maintaining competitive average robust accuracy.
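The abstract describes the method only at a high level, so the following is a minimal sketch of one possible reading, not the authors' published formulation: a PGD-style attack whose objective is a weighted combination (weight `lam`) of the cross-entropy over all samples and the cross-entropy restricted to the important class, followed by a standard adversarial training step. The function names (`class_aware_pgd`, `train_step`), the hyperparameters, and the exact form of the weighting are assumptions made here for illustration.

```python
import torch
import torch.nn.functional as F

def class_aware_pgd(model, x, y, important_class, lam=0.5,
                    eps=8/255, alpha=2/255, steps=10):
    """Generate adversarial examples by maximizing a weighted combination of
    the usual cross-entropy over the whole batch and an extra cross-entropy
    term restricted to the important class (assumed objective, for illustration)."""
    mask = (y == important_class)
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        loss = (1 - lam) * F.cross_entropy(logits, y)
        if mask.any():
            # Up-weight the attack objective on samples from the important class.
            loss = loss + lam * F.cross_entropy(logits[mask], y[mask])
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-sign step, then project back into the L-infinity ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def train_step(model, optimizer, x, y, important_class, lam=0.5):
    """One adversarial training step on the class-aware adversarial examples."""
    model.eval()
    x_adv = class_aware_pgd(model, x, y, important_class, lam)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Setting `lam=0` recovers standard PGD-based adversarial training, while larger values bias the inner maximization toward the important class; the actual CART weighting scheme may differ from this sketch.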