Open Access

ARTICLE

FedCCM: Communication-Efficient Federated Learning via Clustered Client Momentum in Non-IID Settings

Hang Wen1,2, Kai Zeng1,2,*

1 Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, 650500, China
2 Yunnan Key Laboratory of Computer Technologies Application, Kunming, 650500, China

* Corresponding Author: Kai Zeng. Email: email

(This article belongs to the Special Issue: Omnipresent AI in the Cloud Era Reshaping Distributed Computation and Adaptive Systems for Modern Applications)

Computers, Materials & Continua 2026, 86(3), 72. https://doi.org/10.32604/cmc.2025.072909

Abstract

Federated learning often suffers from slow and unstable convergence due to edge-side data heterogeneity. The problem becomes more severe when the edge participation rate is low, because the information collected from different edge devices varies significantly. As a result, communication overhead grows, which further slows convergence. To address this challenge, we propose a simple yet effective federated learning framework that improves consistency among edge devices. The core idea is to cluster the lookahead gradients collected from edge devices on the cloud server to obtain personalized momentum for steering local updates. In parallel, a global momentum is applied during model aggregation, enabling faster convergence while preserving personalization. This strategy propagates the estimated global update direction efficiently to all participating edge devices and keeps local training aligned, without introducing extra memory or communication overhead. We conduct extensive experiments on benchmark datasets, including CIFAR-100 and Tiny-ImageNet, and the results confirm the effectiveness of our framework. On CIFAR-100, our method reaches 55% accuracy using 37 fewer rounds and achieves a competitive final accuracy of 65.46%. Even under extreme non-IID scenarios, it delivers significant improvements in both accuracy and communication efficiency. The implementation is publicly available at https://github.com/sjmp525/CollaborativeComputing/tree/FedCCM (accessed on 20 October 2025).
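To make the mechanism concrete, the following is a minimal Python sketch (using NumPy and scikit-learn) of the server-side round the abstract describes: clustering the clients' lookahead gradients, maintaining one momentum buffer per cluster to steer local updates, and applying a separate global momentum during aggregation. The class layout and all hyperparameter names (beta_c, beta_g, n_clusters, lr_g) are illustrative assumptions, not the authors' exact implementation; the authoritative code is in the linked repository.

# Minimal sketch of FedCCM's server-side round as described in the abstract:
# (1) cluster the lookahead gradients uploaded by the edge devices, (2) keep one
# momentum buffer per cluster that is sent back to steer local updates, and
# (3) apply a separate global momentum during aggregation.
# All names and hyperparameters here are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

class FedCCMServerSketch:
    def __init__(self, dim, n_clusters=3, beta_c=0.9, beta_g=0.9, lr_g=1.0):
        self.global_model = np.zeros(dim)          # flattened model parameters
        self.n_clusters = n_clusters
        self.beta_c = beta_c                       # per-cluster momentum factor
        self.beta_g = beta_g                       # global momentum factor
        self.lr_g = lr_g                           # server-side learning rate
        self.cluster_momenta = np.zeros((n_clusters, dim))
        self.global_momentum = np.zeros(dim)

    def run_round(self, lookahead_grads):
        # lookahead_grads: (num_clients, dim), one update direction per client;
        # num_clients must be >= n_clusters for k-means to succeed.
        grads = np.asarray(lookahead_grads, dtype=float)
        # (1) Cluster the collected lookahead gradients on the server.
        labels = KMeans(n_clusters=self.n_clusters, n_init=10).fit_predict(grads)
        # (2) Refresh each cluster's momentum with the mean gradient of its
        #     members; these buffers act as the "personalized momentum"
        #     steering each cluster's local updates.
        for k in range(self.n_clusters):
            members = grads[labels == k]
            if len(members) > 0:
                self.cluster_momenta[k] = (self.beta_c * self.cluster_momenta[k]
                                           + (1.0 - self.beta_c) * members.mean(axis=0))
        # (3) Aggregate with global momentum for faster, more stable convergence.
        self.global_momentum = self.beta_g * self.global_momentum + grads.mean(axis=0)
        self.global_model -= self.lr_g * self.global_momentum
        # Each client would next receive the global model plus its cluster's momentum.
        return labels, self.cluster_momenta

# Toy usage: 8 simulated clients, 10-dimensional model.
if __name__ == "__main__":
    server = FedCCMServerSketch(dim=10, n_clusters=2)
    fake_grads = np.random.randn(8, 10)
    labels, momenta = server.run_round(fake_grads)
    print("cluster assignments:", labels)

In a real deployment, the clustering metric, the momentum schedules, and how the cluster momentum enters each client's local optimizer would follow the paper and the repository above.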

Graphic Abstract

[Graphic abstract: FedCCM, Communication-Efficient Federated Learning via Clustered Client Momentum in Non-IID Settings]

Keywords

Federated learning; distributed computation; communication efficient; momentum clustering; non-independent and identically distributed (non-IID)

Cite This Article

APA Style
Wen, H., & Zeng, K. (2026). FedCCM: Communication-efficient federated learning via clustered client momentum in non-IID settings. Computers, Materials & Continua, 86(3), 72. https://doi.org/10.32604/cmc.2025.072909
Vancouver Style
Wen H, Zeng K. FedCCM: Communication-Efficient Federated Learning via Clustered Client Momentum in Non-IID Settings. Comput Mater Contin. 2026;86(3):72. https://doi.org/10.32604/cmc.2025.072909
IEEE Style
H. Wen and K. Zeng, “FedCCM: Communication-Efficient Federated Learning via Clustered Client Momentum in Non-IID Settings,” Comput. Mater. Contin., vol. 86, no. 3, Art. no. 72, 2026. https://doi.org/10.32604/cmc.2025.072909



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.