Open Access
ARTICLE
Differential Privacy Federated Learning Based on Adaptive Adjustment
1 The State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, 100876, China
2 The School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing, 100876, China
* Corresponding Author: Wenmin Li. Email:
Computers, Materials & Continua 2025, 82(3), 4777-4795. https://doi.org/10.32604/cmc.2025.060380
Received 30 October 2024; Accepted 19 December 2024; Issue published 06 March 2025
Abstract
Federated learning effectively alleviates the privacy and security issues raised by the development of artificial intelligence through a distributed training architecture. However, existing research has shown that attackers can still compromise user privacy and security by stealing model parameters. Differential privacy is therefore applied in federated learning to further defend against such attacks. Yet the added noise and the update clipping mechanism of differential privacy jointly limit further progress of federated learning in both privacy protection and performance optimization. We therefore propose an adaptively adjusted differential privacy federated learning method. First, a dynamic adaptive privacy budget allocation strategy is proposed, which flexibly adjusts the privacy budget within a given range based on each client's data volume and training requirements, thereby reducing both the consumption of the privacy budget and the magnitude of the model noise. Second, a longitudinal clipping differential privacy strategy is proposed, which, based on the differences among the factors that affect parameter updates, uses sparsification to trim local updates, thereby reducing the impact of the privacy clipping step on model accuracy. The two strategies work together to preserve user privacy while reducing the effect of differential privacy on model accuracy. To evaluate the effectiveness of our method, we conducted extensive experiments on benchmark datasets; the results show that the proposed method performs well in terms of both performance and privacy protection.
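The abstract only summarizes the two strategies at a high level. The following is a minimal illustrative sketch, not the authors' implementation, of how a per-client privacy budget might be scaled within a given range by data volume and how a local update could be sparsified before clipping and Gaussian noising. All function and parameter names (allocate_budget, sparsify_topk, eps_min, eps_max, keep_ratio) are assumptions introduced for illustration.

```python
import numpy as np

def allocate_budget(n_samples, n_total, eps_min=0.5, eps_max=2.0):
    """Illustrative per-client budget: scale epsilon within
    [eps_min, eps_max] by the client's share of the total data volume."""
    share = n_samples / n_total
    return eps_min + (eps_max - eps_min) * share

def sparsify_topk(update, keep_ratio=0.1):
    """Keep only the largest-magnitude coordinates of a local update,
    zeroing the rest (a simple stand-in for sparse trimming of updates)."""
    k = max(1, int(keep_ratio * update.size))
    idx = np.argpartition(np.abs(update), -k)[-k:]
    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]
    return sparse

def privatize_update(update, epsilon, delta=1e-5, clip_norm=1.0, keep_ratio=0.1):
    """Sparsify, clip to a fixed L2 norm, then add Gaussian noise whose
    scale follows the standard (epsilon, delta) Gaussian-mechanism bound."""
    sparse = sparsify_topk(update, keep_ratio)
    norm = np.linalg.norm(sparse)
    clipped = sparse * min(1.0, clip_norm / (norm + 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=clipped.shape)

# Example: a client holding 500 of 10,000 total samples privatizes its update.
eps = allocate_budget(n_samples=500, n_total=10_000)
noisy_update = privatize_update(np.random.randn(1_000), epsilon=eps)
```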
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.