Open Access

ARTICLE


AdaptForever: Elastic and Mutual Learning for Continuous NLP Task Mastery

Ke Chen1,2, Cheng Peng1,2, Xinyang He1,2, Jiakang Sun1,2, Xu Liu1,2, Xiaolin Qin1,2,*, Yong Zhong1,2,*

1 Chengdu Institute of Computer Applications, Chinese Academy of Sciences, Chengdu, 610213, China
2 University of Chinese Academy of Sciences, Beijing, 101408, China

* Corresponding Authors: Xiaolin Qin; Yong Zhong

(This article belongs to the Special Issue: Advancements in Natural Language Processing (NLP) and Fuzzy Logic)

Computers, Materials & Continua 2025, 82(3), 4003-4019. https://doi.org/10.32604/cmc.2025.057443

Abstract

In natural language processing (NLP), managing multiple downstream tasks through fine-tuning pre-trained models often requires maintaining separate task-specific models, leading to practical inefficiencies. To address this challenge, we introduce AdaptForever, a novel approach that enables continuous mastery of NLP tasks through the integration of elastic and mutual learning strategies with a stochastic expert mechanism. Our method freezes the pre-trained model weights while incorporating adapters enhanced with mutual learning capabilities, facilitating effective knowledge transfer from previous tasks to new ones. By combining Elastic Weight Consolidation (EWC) for knowledge preservation with specialized regularization terms, AdaptForever successfully maintains performance on earlier tasks while acquiring new capabilities. Experimental results demonstrate that AdaptForever achieves superior performance across a continuous sequence of NLP tasks compared to existing parameter-efficient methods, while effectively preventing catastrophic forgetting and enabling positive knowledge transfer between tasks.
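For context, the "elastic" component referenced in the abstract builds on the standard Elastic Weight Consolidation penalty of Kirkpatrick et al. (2017). A minimal sketch of that loss is given below; here F_i denotes the diagonal Fisher information for parameter i and θ*_{A,i} the value learned on the previous task A. This is the generic EWC formulation, not necessarily the paper's exact adapter-specific regularization.

\mathcal{L}(\theta) = \mathcal{L}_{B}(\theta) + \sum_{i} \frac{\lambda}{2}\, F_{i}\, \bigl(\theta_{i} - \theta^{*}_{A,i}\bigr)^{2}

The hyperparameter λ trades off plasticity on the new task B against retention of the previous task A.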

Keywords

Adapter-tuning; large language model; pre-trained language model; parameter-efficient fine-tuning; continual learning; mutual learning; mixture of experts

Cite This Article

APA Style
Chen, K., Peng, C., He, X., Sun, J., Liu, X. et al. (2025). AdaptForever: elastic and mutual learning for continuous NLP task mastery. Computers, Materials & Continua, 82(3), 4003–4019. https://doi.org/10.32604/cmc.2025.057443
Vancouver Style
Chen K, Peng C, He X, Sun J, Liu X, Qin X, et al. AdaptForever: elastic and mutual learning for continuous NLP task mastery. Comput Mater Contin. 2025;82(3):4003–4019. https://doi.org/10.32604/cmc.2025.057443
IEEE Style
K. Chen et al., “AdaptForever: Elastic and Mutual Learning for Continuous NLP Task Mastery,” Comput. Mater. Contin., vol. 82, no. 3, pp. 4003–4019, 2025. https://doi.org/10.32604/cmc.2025.057443



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.