Open Access

ARTICLE

Unlocking Edge Fine-Tuning: A Sample-Efficient Language-Empowered Split Fine-Tuning Framework

Zuyi Huang1, Yue Wang1, Jia Liu2, Haodong Yi1, Lejun Ai1, Min Chen1,3,*, Salman A. AlQahtani4
1 School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China
2 School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China
3 Pazhou Laboratory, Guangzhou, 510640, China
4 New Emerging Technologies and 5G Network and Beyond Research Chair, Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, 11574, Saudi Arabia
* Corresponding Author: Min Chen. Email: email
(This article belongs to the Special Issue: Advancing Network Intelligence: Communication, Sensing and Computation)

Computers, Materials & Continua https://doi.org/10.32604/cmc.2025.074034

Received 30 September 2025; Accepted 05 December 2025; Published online 29 December 2025

Abstract

The personalized fine-tuning of large language models (LLMs) on edge devices is severely constrained by limited computational resources. Although split federated learning alleviates on-device burdens, its effectiveness diminishes in few-shot reasoning scenarios due to the low data efficiency of conventional supervised fine-tuning, which leads to excessive communication overhead. To address this, we propose Language-Empowered Split Fine-Tuning (LESFT), a framework that integrates split architectures with a contrastive-inspired fine-tuning paradigm. LESFT simultaneously learns from multiple logically equivalent but linguistically diverse reasoning chains, providing richer supervisory signals and improving data efficiency. This process-oriented training enables more effective reasoning adaptation with fewer samples. Extensive experiments demonstrate that LESFT consistently outperforms strong baselines such as SplitLoRA in task accuracy on GSM8K, CommonsenseQA, and AQUA_RAT, with the largest gains observed on Qwen2.5-3B. These results indicate that LESFT can effectively adapt large language models for reasoning tasks under the computational and communication constraints of edge environments.
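The full LESFT objective and its split placement are detailed in the paper; purely as a rough illustration of the "multiple logically equivalent reasoning chains per sample" idea described above, the sketch below averages a token-level cross-entropy over several chain-of-thought completions for the same question, so every chain contributes supervision. This is a minimal toy approximation, not the authors' loss, and all names (e.g., MultiChainLoss) are hypothetical.

```python
import torch
import torch.nn as nn

class MultiChainLoss(nn.Module):
    """Average token-level cross-entropy over several reasoning chains that
    answer the same question, so one sample yields supervision from every
    linguistically distinct (but logically equivalent) chain."""

    def __init__(self):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()

    def forward(self, chain_logits, chain_targets):
        # chain_logits:  list of [seq_len, vocab] tensors, one per chain
        # chain_targets: list of [seq_len] token-id tensors, one per chain
        losses = [self.ce(logits, targets)
                  for logits, targets in zip(chain_logits, chain_targets)]
        return torch.stack(losses).mean()

# Toy usage: random tensors stand in for model outputs on two chains.
vocab = 32
loss_fn = MultiChainLoss()
logits = [torch.randn(10, vocab), torch.randn(12, vocab)]
targets = [torch.randint(0, vocab, (10,)), torch.randint(0, vocab, (12,))]
print(loss_fn(logits, targets))
```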

Keywords

Large language models; edge computing; efficient fine-tuning; few-shot fine-tuning; split federated learning