Open Access
ARTICLE
How Robust Are Language Models against Backdoors in Federated Learning?
1 Department of Information and Communication Engineering, Chosun University, Gwangju, 61467, Republic of Korea
2 Department of Artificial Intelligence and Software Engineering, Chosun University, Gwangju, 61467, Republic of Korea
3 Department of Artificial Intelligence, Kongju National University, Cheonan, 31080, Republic of Korea
* Corresponding Author: Hyunil Kim. Email:
# These authors contributed equally to this work
Computer Modeling in Engineering & Sciences 2025, 145(2), 2617-2630. https://doi.org/10.32604/cmes.2025.071190
Received 01 August 2025; Accepted 23 October 2025; Issue published 26 November 2025
Abstract
Federated Learning (FL) enables privacy-preserving training of Transformer-based language models, but remains vulnerable to backdoor attacks that compromise model reliability. This paper presents a comparative analysis of defense strategies against both classical and advanced backdoor attacks, evaluated across autoencoding and autoregressive models. Unlike prior studies, this work provides the first systematic comparison of perturbation-based, screening-based, and hybrid defenses in Transformer-based FL environments. Our results show that screening-based defenses consistently outperform perturbation-based ones, effectively neutralizing most attacks across architectures. However, this robustness comes with significant computational overhead, revealing a clear trade-off between security and efficiency. By explicitly identifying this trade-off, our study advances the understanding of defense strategies in federated learning and highlights the need for lightweight yet effective screening methods for trustworthy deployment in diverse application domains.
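To make the two defense families concrete: a perturbation-based defense modifies every client update (e.g., norm clipping plus added noise) before aggregation, while a screening-based defense inspects the set of updates and discards outliers before averaging. The sketch below illustrates one representative of each family; the function names, the MAD-based outlier rule, and the thresholds are illustrative assumptions, not the specific methods evaluated in this paper.

```python
import numpy as np

def screen_updates(updates, threshold=3.0):
    """Screening-style defense (illustrative): drop client updates whose
    distance from the coordinate-wise median deviates by more than
    `threshold` median absolute deviations, then average the rest."""
    U = np.stack(updates)                      # (n_clients, n_params)
    median = np.median(U, axis=0)
    dists = np.linalg.norm(U - median, axis=1) # per-client distance to median
    dev = np.abs(dists - np.median(dists))
    mad = np.median(dev) + 1e-12               # robust spread estimate
    keep = dev <= threshold * mad              # flag suspected backdoored updates
    return U[keep].mean(axis=0), keep

def perturb_aggregate(updates, clip=1.0, sigma=0.01, rng=None):
    """Perturbation-style defense (illustrative): clip each update's L2 norm
    and add Gaussian noise to the average, diluting any backdoor signal."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12)) for u in updates]
    agg = np.mean(clipped, axis=0)
    return agg + rng.normal(0.0, sigma, size=agg.shape)
```

The screening variant must compute pairwise or per-client statistics over full parameter vectors each round, which is the source of the computational overhead the abstract highlights; the perturbation variant is cheap but degrades all updates, benign ones included.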
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

