Open Access

ARTICLE

Privacy-Preserving Large-Scale AI Models for Intelligent Railway Transportation Systems: Hierarchical Poisoning Attacks and Defenses in Federated Learning

Yongsheng Zhu1,2,*, Chong Liu3,4, Chunlei Chen5, Xiaoting Lyu3,4, Zheng Chen3,4, Bin Wang6, Fuqiang Hu3,4, Hanxi Li3,4, Jiao Dai3,4, Baigen Cai1, Wei Wang3,4

1 School of Automation and Intelligence, Beijing Jiaotong University, Beijing, 100044, China
2 Institute of Computing Technologies, China Academy of Railway Sciences Corporation Limited, Beijing, 100081, China
3 School of Computer Science and Technology, Beijing Jiaotong University, Beijing, 100044, China
4 Beijing Key Laboratory of Security and Privacy in Intelligent Transportation, Beijing Jiaotong University, Beijing, 100044, China
5 Institute of Infrastructure Inspection, China Academy of Railway Sciences Corporation Limited, Beijing, 100081, China
6 Zhejiang Key Laboratory of Multi-Dimensional Perception Technology, Application and Cybersecurity, Hangzhou, 310053, China

* Corresponding Author: Yongsheng Zhu.

(This article belongs to the Special Issue: Privacy-Preserving Technologies for Large-scale Artificial Intelligence)

Computer Modeling in Engineering & Sciences 2024, 141(2), 1305-1325. https://doi.org/10.32604/cmes.2024.054820

Abstract

The development of Intelligent Railway Transportation Systems necessitates incorporating privacy-preserving mechanisms into AI models to protect sensitive information and enhance system efficiency. Federated learning offers a promising solution by allowing multiple clients to train models collaboratively without sharing private data. Despite its privacy benefits, however, federated learning is vulnerable to poisoning attacks, in which adversaries alter local model parameters on compromised clients and send malicious updates to the server, potentially degrading the global model's accuracy. In this study, we introduce PMM (Perturbation coefficient Multiplied by Maximum value), a new poisoning attack that perturbs model updates layer by layer, demonstrating the severity of the poisoning threat that federated learning faces. Extensive experiments on three distinct datasets show that PMM significantly reduces the global model's accuracy. We also propose an effective defense method, CLBL (Cluster Layer By Layer), whose effectiveness is confirmed by experimental results on the same three datasets.
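The abstract names the two mechanisms without spelling them out. As a rough illustration only, the sketches below show what a layer-wise "perturbation coefficient times maximum value" attack and a layer-wise clustering defense might look like in a federated averaging setting. Every function name, parameter (e.g., `gamma`, the two-cluster split), and filtering rule here is an assumption for exposition, not the authors' exact algorithm.

```python
# Illustrative sketch only: a PMM-style layer-wise poisoning attack,
# assuming the attacker scales each layer's maximum observed value by
# a perturbation coefficient. Names and defaults are hypothetical.
import numpy as np

def pmm_attack(benign_updates, gamma=-2.0):
    """Craft one malicious update, layer by layer.

    benign_updates: list over clients; each entry is a list of
                    per-layer numpy arrays (local model updates the
                    attacker controls or can estimate).
    gamma:          assumed perturbation coefficient.
    """
    n_layers = len(benign_updates[0])
    malicious = []
    for layer in range(n_layers):
        stacked = np.stack([u[layer] for u in benign_updates])
        # Per-coordinate maximum magnitude across the observed updates.
        max_val = np.abs(stacked).max(axis=0)
        # Scale by the coefficient to push the aggregate off course.
        malicious.append(gamma * max_val)
    return malicious
```

A defense in the spirit of CLBL could then cluster the clients' updates separately for each layer and aggregate only the majority cluster; again, the details below are assumptions, not the paper's exact rule.

```python
# Illustrative sketch only: a CLBL-style layer-wise clustering defense.
# The 2-cluster split and mean aggregation are assumed choices.
import numpy as np
from sklearn.cluster import KMeans

def clbl_aggregate(client_updates):
    n_layers = len(client_updates[0])
    aggregated = []
    for layer in range(n_layers):
        shape = client_updates[0][layer].shape
        flat = np.stack([u[layer].ravel() for u in client_updates])
        # Split this layer's updates into two clusters.
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(flat)
        # Keep the larger cluster, assumed to be the benign majority.
        majority = np.argmax(np.bincount(labels))
        kept = flat[labels == majority].mean(axis=0)
        aggregated.append(kept.reshape(shape))
    return aggregated
```

Under these assumptions, the server would call `clbl_aggregate` on the collected updates each round in place of plain federated averaging; clustering per layer, rather than on whole flattened updates, is what would let such a filter catch attacks that, like the PMM sketch above, perturb some layers far more than others.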

Keywords


Cite This Article

APA Style
Zhu, Y., Liu, C., Chen, C., Lyu, X., Chen, Z. et al. (2024). Privacy-preserving large-scale AI models for intelligent railway transportation systems: hierarchical poisoning attacks and defenses in federated learning. Computer Modeling in Engineering & Sciences, 141(2), 1305-1325. https://doi.org/10.32604/cmes.2024.054820
Vancouver Style
Zhu Y, Liu C, Chen C, Lyu X, Chen Z, Wang B, et al. Privacy-preserving large-scale AI models for intelligent railway transportation systems: hierarchical poisoning attacks and defenses in federated learning. Comput Model Eng Sci. 2024;141(2):1305-1325. https://doi.org/10.32604/cmes.2024.054820
IEEE Style
Y. Zhu et al., "Privacy-Preserving Large-Scale AI Models for Intelligent Railway Transportation Systems: Hierarchical Poisoning Attacks and Defenses in Federated Learning," Comput. Model. Eng. Sci., vol. 141, no. 2, pp. 1305-1325, 2024. https://doi.org/10.32604/cmes.2024.054820



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.