TY - EJOU
AU - Wen, Hang
AU - Zeng, Kai
TI - FedCCM: Communication-Efficient Federated Learning via Clustered Client Momentum in Non-IID Settings
T2 - Computers, Materials & Continua
PY - 2026
VL - 86
IS - 3
SN - 1546-2226
AB - Federated learning often suffers from slow and unstable convergence caused by edge-side data heterogeneity. The problem worsens when the edge participation rate is low, because the information collected from different edge devices varies significantly. As a result, communication overhead grows, further slowing convergence. To address this challenge, we propose a simple yet effective federated learning framework that improves consistency among edge devices. The core idea is to cluster, on the cloud server, the lookahead gradients collected from edge devices and derive a personalized momentum that steers each device's local updates. In parallel, a global momentum is applied during model aggregation, enabling faster convergence while preserving personalization. This strategy propagates the estimated global update direction efficiently to all participating edge devices and keeps local training aligned, without introducing extra memory or communication overhead. We conduct extensive experiments on benchmark datasets including CIFAR-100 and Tiny-ImageNet, and the results confirm the effectiveness of our framework. On CIFAR-100, our method reaches 55% accuracy in 37 fewer rounds and achieves a competitive final accuracy of 65.46%. Even under extreme non-IID scenarios, it delivers significant improvements in both accuracy and communication efficiency. The implementation is publicly available at https://github.com/sjmp525/CollaborativeComputing/tree/FedCCM (accessed on 20 October 2025).
KW - Federated learning
KW - distributed computation
KW - communication efficient
KW - momentum clustering
KW - non-independent and identically distributed (non-IID)
DO - 10.32604/cmc.2025.072909
ER -