Open Access

ARTICLE


Defending against Backdoor Attacks in Federated Learning by Using Differential Privacy and OOD Data Attributes

Qingyu Tan, Yan Li, Byeong-Seok Shin*

Electrical and Computer Engineering, Inha University, Incheon, 22212, Republic of Korea

* Corresponding Author: Byeong-Seok Shin. Email: email

(This article belongs to the Special Issue: Machine learning and Blockchain for AIoT: Robustness, Privacy, Trust and Security)

Computer Modeling in Engineering & Sciences 2025, 143(2), 2417-2428. https://doi.org/10.32604/cmes.2025.063811

Abstract

Federated Learning (FL) is a practical solution that leverages distributed data across devices without centralized data storage, enabling multiple participants to jointly train models while preserving data privacy and avoiding direct data sharing. Despite its privacy-preserving advantages, FL remains vulnerable to backdoor attacks, in which malicious participants implant backdoors into local models that then propagate to the global model through the aggregation process. While existing differential privacy defenses have demonstrated effectiveness against backdoor attacks in FL, they often significantly degrade the aggregated model's performance on benign tasks. To address this limitation, we propose a novel backdoor defense mechanism based on differential privacy. Our approach first exploits the inherent out-of-distribution characteristics of backdoor samples to identify and exclude malicious model updates that deviate significantly from benign models. By filtering out clearly backdoor-infected models before applying differential privacy, our method reduces the noise level required for differential privacy, thereby enhancing model robustness while preserving performance. Experimental evaluations on the CIFAR10 and FEMNIST datasets demonstrate that our method limits backdoor accuracy to below 15% across various backdoor scenarios while maintaining high main-task accuracy.
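The two-stage defense described above (robust filtering of deviating updates, then differentially private aggregation) can be sketched as follows. This is a minimal illustration of the general idea, not the paper's exact algorithm: the distance metric, the `z`-score threshold, and the clipping/noise parameters are all illustrative assumptions.

```python
import numpy as np

def aggregate_with_defense(updates, clip_norm=1.0, noise_std=0.05, z=2.0):
    """Sketch: filter out client updates that deviate strongly from the
    others (exploiting the OOD character of backdoored models), then apply
    clipping + Gaussian noise (differential privacy) before averaging.

    updates: list of 1-D numpy arrays (flattened client model updates).
    """
    U = np.stack(updates)                       # (n_clients, dim)
    center = np.median(U, axis=0)               # robust center of the updates
    dists = np.linalg.norm(U - center, axis=1)  # each client's deviation
    # Stage 1: exclude clearly deviating (likely backdoored) updates up
    # front, so less DP noise is needed afterwards.
    keep = dists <= dists.mean() + z * dists.std()
    kept = U[keep]
    # Stage 2: standard DP-style post-processing -- clip each surviving
    # update to a norm bound, average, and add Gaussian noise.
    norms = np.linalg.norm(kept, axis=1, keepdims=True)
    clipped = kept * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    agg = clipped.mean(axis=0)
    return agg + np.random.normal(0.0, noise_std, size=agg.shape)
```

Because the obvious outliers are removed in stage 1, the noise in stage 2 only has to mask the remaining (much smaller) per-client influence, which is the intuition behind the paper's reduced noise requirement.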

Keywords

Federated learning; backdoor attacks; differential privacy; out-of-distribution data

Cite This Article

APA Style
Tan, Q., Li, Y., Shin, B. (2025). Defending against Backdoor Attacks in Federated Learning by Using Differential Privacy and OOD Data Attributes. Computer Modeling in Engineering & Sciences, 143(2), 2417–2428. https://doi.org/10.32604/cmes.2025.063811
Vancouver Style
Tan Q, Li Y, Shin B. Defending against Backdoor Attacks in Federated Learning by Using Differential Privacy and OOD Data Attributes. Comput Model Eng Sci. 2025;143(2):2417–2428. https://doi.org/10.32604/cmes.2025.063811
IEEE Style
Q. Tan, Y. Li, and B. Shin, “Defending against Backdoor Attacks in Federated Learning by Using Differential Privacy and OOD Data Attributes,” Comput. Model. Eng. Sci., vol. 143, no. 2, pp. 2417–2428, 2025. https://doi.org/10.32604/cmes.2025.063811



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.