Paper ID | MLSP-34.3
Paper Title | ON THE ADVERSARIAL ROBUSTNESS OF PRINCIPAL COMPONENT ANALYSIS
Authors | Ying Li, Tongji University, China; Fuwei Li, Lifeng Lai, University of California, Davis, United States; Jun Wu, Fudan University, China
Session | MLSP-34: Subspace Learning and Applications
Location | Gather.Town
Session Time | Thursday, 10 June, 15:30 - 16:15
Presentation Time | Thursday, 10 June, 15:30 - 16:15
Presentation | Poster
Topic | Machine Learning for Signal Processing: [MLR-SBML] Subspace and manifold learning
Abstract | In this paper, we investigate the adversarial robustness of principal component analysis (PCA) algorithms. In the considered setup, a powerful adversary can add a carefully designed data point to the original data matrix, with the goal of maximizing the distance between the subspace learned from the original data and the subspace obtained from the modified data. Unlike most existing research, which uses the Asimov distance to measure this distance, we leverage a more precise and refined measure, the chordal distance, which enables a more comprehensive analysis of the influence of an outlier on PCA. Our analysis shows that an outlier can completely change the first principal angle, while the second principal angle changes very little. We also demonstrate the performance of our strategy with experimental results on synthetic and real data.
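The abstract contrasts the Asimov distance (the largest principal angle between two subspaces) with the chordal distance, which aggregates all principal angles. A minimal sketch, using NumPy, of how the chordal distance between the original and poisoned principal subspaces can be computed; the large random outlier below is a hypothetical placeholder, not the paper's optimized attack:

```python
import numpy as np

def top_k_subspace(X, k):
    """Orthonormal basis (features x k) of the top-k principal subspace of X (samples x features)."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centered data are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T

def chordal_distance(U, V):
    """Chordal distance sqrt(sum_i sin^2(theta_i)) from the principal angles theta_i."""
    # Singular values of U^T V are the cosines of the principal angles.
    cosines = np.linalg.svd(U.T @ V, compute_uv=False)
    theta = np.arccos(np.clip(cosines, -1.0, 1.0))
    return np.sqrt(np.sum(np.sin(theta) ** 2))

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
U = top_k_subspace(X, 2)

# Append one adversarial point: here just a large random outlier as a
# stand-in for the carefully designed point described in the paper.
X_mod = np.vstack([X, 50.0 * rng.standard_normal(5)])
V = top_k_subspace(X_mod, 2)

print(chordal_distance(U, V))
```

For k-dimensional subspaces the chordal distance lies in [0, sqrt(k)], so aggregating all principal angles captures changes that the largest angle alone would miss.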