Open Access

ARTICLE

How Robust Are Language Models against Backdoors in Federated Learning?

Seunghan Kim1,#, Changhoon Lim2,#, Gwonsang Ryu3, Hyunil Kim2,*

1 Department of Information and Communication Engineering, Chosun University, Gwangju, 61467, Republic of Korea
2 Department of Artificial Intelligence and Software Engineering, Chosun University, Gwangju, 61467, Republic of Korea
3 Department of Artificial Intelligence, Kongju National University, Cheonan, 31080, Republic of Korea

* Corresponding Author: Hyunil Kim
# These authors contributed equally to this work

Computer Modeling in Engineering & Sciences 2025, 145(2), 2617-2630. https://doi.org/10.32604/cmes.2025.071190

Abstract

Federated learning (FL) enables privacy-preserving training of Transformer-based language models, but it remains vulnerable to backdoor attacks that compromise model reliability. This paper presents a comparative analysis of defense strategies against both classical and advanced backdoor attacks, evaluated across autoencoding and autoregressive models. Unlike prior studies, this work provides the first systematic comparison of perturbation-based, screening-based, and hybrid defenses in Transformer-based FL environments. Our results show that screening-based defenses consistently outperform perturbation-based ones, effectively neutralizing most attacks across architectures. However, this robustness comes with significant computational overhead, revealing a clear trade-off between security and efficiency. By explicitly identifying this trade-off, our study advances the understanding of defense strategies in federated learning and highlights the need for lightweight yet effective screening methods for trustworthy deployment across diverse application domains.
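To make the two defense families named in the abstract concrete, below is a minimal NumPy sketch, not the paper's implementation: a generic perturbation-based defense (norm clipping plus Gaussian noising of the aggregate) and a generic screening-based defense (cosine-similarity outlier filtering before averaging), run on simulated client updates. All function names, thresholds, and the toy data are illustrative assumptions.

```python
import numpy as np

def perturbation_defense(updates, clip_norm=1.0, noise_std=0.01):
    """Generic perturbation defense: clip each client update to a norm
    bound, average, then add Gaussian noise to the aggregate."""
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in updates]
    agg = np.mean(clipped, axis=0)
    return agg + np.random.normal(0.0, noise_std, size=agg.shape)

def screening_defense(updates, keep_fraction=0.8):
    """Generic screening defense: score each update by its mean cosine
    similarity to all other updates and average only the most mutually
    consistent ones, filtering out outliers before aggregation."""
    U = np.stack(updates)
    unit = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    sims = unit @ unit.T
    np.fill_diagonal(sims, 0.0)          # ignore self-similarity
    scores = sims.mean(axis=1)
    k = max(1, int(len(updates) * keep_fraction))
    keep = np.argsort(scores)[-k:]       # indices of most typical clients
    return U[keep].mean(axis=0)

# Toy round: 8 benign clients share a direction, 2 backdoored clients
# (hypothetical data) push the opposite way.
rng = np.random.default_rng(0)
true_grad = rng.normal(0, 1, 100)
benign = [true_grad + rng.normal(0, 0.3, 100) for _ in range(8)]
poisoned = [-true_grad + rng.normal(0, 0.3, 100) for _ in range(2)]
updates = benign + poisoned

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("no defense  :", cos(np.mean(updates, axis=0), true_grad))
print("perturbation:", cos(perturbation_defense(updates), true_grad))
print("screening   :", cos(screening_defense(updates), true_grad))
```

On this toy data the screening defense recovers the benign direction almost exactly, while the clipped-and-noised average is still dragged off course by the colluding clients; the price is an extra O(n²) pairwise-similarity pass per round, mirroring the security-versus-efficiency trade-off the abstract describes.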

Keywords

Backdoor attack; federated learning; transformer-based language model; system robustness

Cite This Article

APA Style
Kim, S., Lim, C., Ryu, G., & Kim, H. (2025). How Robust Are Language Models against Backdoors in Federated Learning? Computer Modeling in Engineering & Sciences, 145(2), 2617–2630. https://doi.org/10.32604/cmes.2025.071190
Vancouver Style
Kim S, Lim C, Ryu G, Kim H. How Robust Are Language Models against Backdoors in Federated Learning? Comput Model Eng Sci. 2025;145(2):2617–2630. https://doi.org/10.32604/cmes.2025.071190
IEEE Style
S. Kim, C. Lim, G. Ryu, and H. Kim, “How Robust Are Language Models against Backdoors in Federated Learning?,” Comput. Model. Eng. Sci., vol. 145, no. 2, pp. 2617–2630, 2025. https://doi.org/10.32604/cmes.2025.071190



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.