A Survey of Federated Learning: Advances in Architecture, Synchronization, and Security Threats
Faisal Mahmud1, Fahim Mahmud2, Rashedur M. Rahman1,*
1 Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka, 1229, Bangladesh
2 Department of Computer Science and Engineering, Green University of Bangladesh, Purbachal American City, Kanchon, 1460, Bangladesh
* Corresponding Author: Rashedur M. Rahman. Email: rashedur.rahman@northsouth.edu
Computers, Materials & Continua https://doi.org/10.32604/cmc.2025.073519
Received 19 September 2025; Accepted 21 November 2025; Published online 11 December 2025
Abstract
Federated Learning (FL) has become a leading decentralized approach that enables multiple clients to train a model collaboratively without directly sharing raw data, making it suitable for privacy-sensitive applications such as healthcare, finance, and smart systems. As the field continues to evolve, the research landscape has become more complex and fragmented, spanning different system designs, training methods, and privacy techniques. This survey is organized around three core challenges: how data is distributed, how model training is synchronized, and how attacks are defended against. It provides a structured and up-to-date review of FL research from 2023 to 2025, offering a unified taxonomy that categorizes works by data distribution (Horizontal FL, Vertical FL, Federated Transfer Learning, and Personalized FL), training synchronization (synchronous and asynchronous FL), optimization strategies, and threat models (data leakage and poisoning attacks). In particular, we summarize the latest contributions in Vertical FL frameworks for secure multi-party learning, communication-efficient Horizontal FL, and domain-adaptive Federated Transfer Learning. Furthermore, we examine synchronization techniques that address system heterogeneity, including straggler mitigation in synchronous FL and staleness management in asynchronous FL. The survey also covers security threats in FL, such as gradient inversion, membership inference, and poisoning attacks, together with defense strategies including privacy-preserving aggregation and anomaly detection. The paper concludes by outlining open issues and highlighting challenges in model personalization, scalability, and real-world adoption.
Keywords
Federated learning (FL); horizontal federated learning (HFL); vertical federated learning (VFL); federated transfer learning (FTL); personalized federated learning; synchronous federated learning (SFL); asynchronous federated learning (AFL); data leakage; poisoning attacks; privacy-preserving machine learning