Paper ID | MLSP-12.3
Paper Title | A QUANTITATIVE METRIC FOR PRIVACY LEAKAGE IN FEDERATED LEARNING
Authors | Yong Liu, National University of Singapore, Singapore; Xinghua Zhu, Jianzong Wang, Jing Xiao, Ping An Technology (Shenzhen) Co., Ltd., China
Session | MLSP-12: Federated Learning 1
Location | Gather.Town
Session Time | Wednesday, 09 June, 13:00 - 13:45
Presentation Time | Wednesday, 09 June, 13:00 - 13:45
Presentation | Poster
Topic | Machine Learning for Signal Processing: [MLR-DFED] Distributed/Federated learning
Abstract | In a federated learning system, parameter gradients are shared among the participants and the central server, while the original data never leave their protected source domain. However, the gradients themselves may carry enough information for precise inference of the original data: by reporting their parameter gradients to the central server, clients expose their datasets to inference attacks from adversaries. In this paper, we propose a quantitative metric based on mutual information with which clients can evaluate the potential risk of information leakage in their gradients. Mutual information has received increasing attention in the machine learning and data mining communities over the past few years, but existing estimation methods cannot handle high-dimensional variables. We therefore propose a novel method to approximate the mutual information between the high-dimensional gradients and the batched input data. Experimental results show that the proposed metric reliably reflects the extent of information leakage in federated learning. In addition, using the proposed metric, we investigate the factors that influence the risk level, and show that the risk of information leakage is related to the status of the task model as well as the inherent data distribution.
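The abstract does not spell out the authors' estimator, so the following is only a rough sketch of one standard way to lower-bound the mutual information between flattened gradients and batched inputs with a neural critic (a MINE-style Donsker-Varadhan bound, in the spirit of Belghazi et al., 2018). The names MINECritic and estimate_mi, the network sizes, and the PyTorch framing are illustrative assumptions, not the paper's method.

    import math
    import torch
    import torch.nn as nn

    class MINECritic(nn.Module):
        # Scores (gradient, data) pairs; higher scores on truly paired samples
        # than on shuffled pairs indicate mutual information between the two.
        def __init__(self, grad_dim, data_dim, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(grad_dim + data_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, g, x):
            return self.net(torch.cat([g, x], dim=-1)).squeeze(-1)

    def estimate_mi(grads, data, steps=500, lr=1e-4):
        # grads: (N, grad_dim) flattened per-batch gradients
        # data:  (N, data_dim) flattened batched inputs
        # Returns a Donsker-Varadhan lower bound on I(gradient; data).
        critic = MINECritic(grads.size(1), data.size(1))
        opt = torch.optim.Adam(critic.parameters(), lr=lr)
        for _ in range(steps):
            perm = torch.randperm(data.size(0))   # break pairing -> marginal samples
            joint = critic(grads, data).mean()
            marg = torch.logsumexp(critic(grads, data[perm]), dim=0) \
                   - math.log(data.size(0))
            mi_lb = joint - marg                  # DV bound: E_p[T] - log E_q[e^T]
            opt.zero_grad()
            (-mi_lb).backward()                   # train the critic to maximize the bound
            opt.step()
        return mi_lb.item()

A client could flatten each reported gradient and its corresponding input batch into vectors and pass a sample of such pairs to estimate_mi; a larger estimate would indicate a higher risk of leakage, which is the role the proposed metric plays in the paper.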