Open Access

ARTICLE

Unlocking Edge Fine-Tuning: A Sample-Efficient Language-Empowered Split Fine-Tuning Framework

Zuyi Huang1, Yue Wang1, Jia Liu2, Haodong Yi1, Lejun Ai1, Min Chen1,3,*, Salman A. AlQahtani4

1 School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China
2 School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China
3 Pazhou Laboratory, Guangzhou, 510640, China
4 New Emerging Technologies and 5G Network and Beyond Research Chair, Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, 11574, Saudi Arabia

* Corresponding Author: Min Chen.

(This article belongs to the Special Issue: Advancing Network Intelligence: Communication, Sensing and Computation)

Computers, Materials & Continua 2026, 87(1), 66. https://doi.org/10.32604/cmc.2025.074034

Abstract

The personalized fine-tuning of large language models (LLMs) on edge devices is severely constrained by limited computational resources. Although split federated learning alleviates on-device burdens, its effectiveness diminishes in few-shot reasoning scenarios due to the low data efficiency of conventional supervised fine-tuning, which leads to excessive communication overhead. To address this, we propose Language-Empowered Split Fine-Tuning (LESFT), a framework that integrates split architectures with a contrastive-inspired fine-tuning paradigm. LESFT simultaneously learns from multiple logically equivalent but linguistically diverse reasoning chains, providing richer supervisory signals and improving data efficiency. This process-oriented training enables more effective reasoning adaptation with fewer samples. Extensive experiments demonstrate that LESFT consistently outperforms strong baselines such as SplitLoRA in task accuracy on GSM8K, CommonsenseQA, and AQUA_RAT, with the largest gains observed on Qwen2.5-3B. These results indicate that LESFT can effectively adapt large language models for reasoning tasks under the computational and communication constraints of edge environments.
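To make the multi-chain supervision idea concrete, the sketch below shows one plausible reading of it: each training question is paired with several logically equivalent but linguistically diverse reasoning chains, and the per-chain causal language-modeling losses are averaged so a single sample contributes multiple supervisory signals. This is a minimal illustration, not the paper's implementation: the averaged-loss objective, the model and tokenizer names, and the example data are all assumptions, and the split architecture and any parameter-efficient adapters used in LESFT are omitted.

```python
# Hypothetical sketch of multi-chain supervision (NOT the authors' exact method).
# Assumptions: the contrastive-inspired objective is approximated as an average
# causal-LM loss over K equivalent reasoning chains; the model is a tiny
# placeholder (LESFT itself targets models such as Qwen2.5-3B).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sshleifer/tiny-gpt2"  # placeholder model for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

question = "If Tom has 3 apples and buys 2 more, how many does he have?"
# Multiple logically equivalent but linguistically diverse reasoning chains.
chains = [
    "Tom starts with 3 apples. Buying 2 more gives 3 + 2 = 5. Answer: 5.",
    "Add the purchase to the initial count: 2 + 3 = 5 apples. Answer: 5.",
]

model.train()
optimizer.zero_grad()
total_loss = 0.0
for chain in chains:
    # Each chain supplies a full language-modeling supervisory signal
    # for the same underlying sample.
    batch = tokenizer(question + " " + chain, return_tensors="pt")
    out = model(**batch, labels=batch["input_ids"])
    total_loss = total_loss + out.loss
# Averaging yields a richer per-sample gradient than single-chain SFT.
(total_loss / len(chains)).backward()
optimizer.step()
```

Under this reading, data efficiency improves because one annotated question amortizes into several distinct training trajectories, which is what would reduce the number of client-server exchanges in the split setting.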

Keywords

Large language models; edge computing; efficient fine-tuning; few-shot fine-tuning; split federated learning

Cite This Article

APA Style
Huang, Z., Wang, Y., Liu, J., Yi, H., Ai, L. et al. (2026). Unlocking Edge Fine-Tuning: A Sample-Efficient Language-Empowered Split Fine-Tuning Framework. Computers, Materials & Continua, 87(1), 66. https://doi.org/10.32604/cmc.2025.074034
Vancouver Style
Huang Z, Wang Y, Liu J, Yi H, Ai L, Chen M, et al. Unlocking Edge Fine-Tuning: A Sample-Efficient Language-Empowered Split Fine-Tuning Framework. Comput Mater Contin. 2026;87(1):66. https://doi.org/10.32604/cmc.2025.074034
IEEE Style
Z. Huang et al., “Unlocking Edge Fine-Tuning: A Sample-Efficient Language-Empowered Split Fine-Tuning Framework,” Comput. Mater. Contin., vol. 87, no. 1, Art. no. 66, 2026. https://doi.org/10.32604/cmc.2025.074034



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.