
Open Access

ARTICLE

Support Vector–Guided Class-Incremental Learning: Discriminative Replay with Dual-Alignment Distillation

Moyi Zhang, Yixin Wang*, Yu Cheng
College of Computer and Artificial Intelligence, Lanzhou University of Technology, Lanzhou, 730050, China
* Corresponding Author: Yixin Wang

Computers, Materials & Continua. https://doi.org/10.32604/cmc.2025.071021

Received 29 July 2025; Accepted 12 November 2025; Published online 03 December 2025

Abstract

Modern intelligent systems, such as autonomous vehicles and face recognition systems, must continuously adapt to new scenarios while preserving their ability to handle previously encountered situations. However, when neural networks learn new classes sequentially, they suffer from catastrophic forgetting: the tendency to lose knowledge of earlier classes. This challenge lies at the core of class-incremental learning and severely limits the deployment of continual learning systems in real-world applications with streaming data. Existing approaches, including rehearsal-based methods and knowledge distillation techniques, attempt to address this issue but often struggle to preserve decision boundaries and discriminative features under tight memory budgets. To overcome these limitations, we propose a support vector–guided framework for class-incremental learning. The framework couples an enhanced feature extractor with a Support Vector Machine classifier, whose boundary-critical support vectors guide both replay and distillation. Building on this architecture, we design a joint feature retention strategy that combines boundary proximity with feature diversity, and a Support Vector Distillation Loss that enforces dual alignment in the decision and semantic spaces. In addition, triple attention modules are incorporated into the feature extractor to strengthen its representational power. Extensive experiments on CIFAR-100 and Tiny-ImageNet demonstrate consistent improvements: with 5 tasks, our method achieves 71.68% and 58.61% average accuracy on the two benchmarks, outperforming strong baselines by 3.34% and 2.05%, respectively. These gains hold across different task splits, highlighting the robustness and generalization of the proposed approach. Beyond benchmark evaluation, the framework also shows promise for few-shot and resource-constrained applications such as edge computing and mobile robotics.
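For illustration, the sketch below shows one plausible reading of the two mechanisms summarized above: exemplar selection that scores samples by closeness to the SVM decision boundary and by feature diversity, and a distillation loss that aligns a student with a teacher in both the decision space (softened logits) and the semantic space (features). The function names (select_exemplars, sv_distillation_loss) and the weighting parameters alpha, beta, and temperature are assumptions made for exposition, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code) of support vector-guided
# replay selection and dual-alignment distillation.
import numpy as np
import torch.nn.functional as F
from sklearn.svm import LinearSVC


def select_exemplars(features: np.ndarray, labels: np.ndarray,
                     per_class: int, alpha: float = 0.5) -> np.ndarray:
    """Return exemplar indices per class, scored by boundary proximity
    (small |SVM margin|) plus diversity (distance from the class mean).
    alpha trades off the two terms; its value here is an assumption."""
    svm = LinearSVC().fit(features, labels)
    margins = np.abs(svm.decision_function(features))
    if margins.ndim == 1:                       # binary case: one column
        margins = margins[:, None]
    keep = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        col = min(int(np.where(svm.classes_ == c)[0][0]), margins.shape[1] - 1)
        proximity = -margins[idx, col]          # closer to boundary -> higher score
        diversity = np.linalg.norm(features[idx] - features[idx].mean(0), axis=1)
        score = alpha * proximity + (1 - alpha) * diversity
        keep.extend(idx[np.argsort(-score)[:per_class]])
    return np.array(keep)


def sv_distillation_loss(student_feat, teacher_feat,
                         student_logits, teacher_logits,
                         temperature: float = 2.0, beta: float = 0.5):
    """Dual alignment: (a) decision space via KL between softened logits,
    (b) semantic space via cosine alignment of feature vectors."""
    decision = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean") * temperature ** 2
    semantic = 1 - F.cosine_similarity(student_feat, teacher_feat, dim=1).mean()
    return beta * decision + (1 - beta) * semantic
```

In this reading, samples near the old decision boundaries are preferentially retained so that replay preserves class margins, while the diversity term keeps the small memory buffer representative of each class; the loss then penalizes drift in both the classifier's decisions and the underlying feature geometry.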

Keywords

Class-incremental learning; catastrophic forgetting; support vector machine; knowledge distillation