Open Access

ARTICLE


An Effective Adversarial Defense Framework: From Robust Feature Perspective

Baolin Li1, Tao Hu1,2,3,*, Xinlei Liu1, Jichao Xie1, Peng Yi1,2,3

1 Information Engineering University, Zhengzhou, 450000, China
2 Key Laboratory of Cyberspace Security, Ministry of Education of China, Zhengzhou, 450000, China
3 National Key Laboratory of Advanced Communication Networks, Zhengzhou, 450000, China

* Corresponding Author: Tao Hu.

Computers, Materials & Continua 2025, 85(1), 2141-2155. https://doi.org/10.32604/cmc.2025.066370

Abstract

Deep neural networks are known to be vulnerable to adversarial attacks. Unfortunately, the underlying mechanisms remain insufficiently understood, leading to empirical defenses that often fail against new attacks. In this paper, we explain adversarial attacks from the perspective of robust features and propose a novel Generative Adversarial Network (GAN)-based Robust Feature Disentanglement framework (GRFD) for adversarial defense. The core of GRFD is an adversarial disentanglement structure comprising a generator and a discriminator. For the generator, we introduce a novel Latent Variable Constrained Variational Auto-Encoder (LVCVAE), which enhances the typical beta-VAE with a constrained rectification module to enforce explicit clustering of latent variables. To supervise the disentanglement of robust features, we design a Robust Supervisory Model (RSM) as the discriminator, which shares its architecture with the target model. The key innovation of RSM is our proposed Feature Robustness Metric (FRM), which serves as part of the training loss and combines the classification ability of features with their resistance to perturbations. Extensive experiments on three benchmark datasets demonstrate the superiority of GRFD: it achieves 93.69% adversarial accuracy on MNIST, 77.21% on CIFAR10, and 58.91% on CIFAR100 with minimal degradation in clean accuracy. Code is available at: (accessed on 23 July 2025).
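The abstract describes a generator trained with a beta-VAE-style objective plus a feature-robustness term supervised by a discriminator. The sketch below is a minimal, hypothetical PyTorch illustration of that combination only; the names LVCVAE and feature_robustness_metric, the FGSM-style perturbation inside the metric, and all hyperparameter values are our assumptions for illustration, not the authors' implementation (which is linked above).

# Minimal sketch of the ideas in the abstract, not the authors' released code.
# LVCVAE, feature_robustness_metric, beta, and eps are illustrative placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LVCVAE(nn.Module):
    """Toy beta-VAE-style generator: encode, sample a latent code, reconstruct."""
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def feature_robustness_metric(rsm, x_rec, y, eps=0.1):
    """Illustrative FRM stand-in: classification loss on the reconstruction plus
    the drift of the discriminator's prediction under a small FGSM-style perturbation."""
    logits = rsm(x_rec)
    cls_loss = F.cross_entropy(logits, y)
    grad, = torch.autograd.grad(cls_loss, x_rec, retain_graph=True)
    x_adv = (x_rec + eps * grad.sign()).clamp(0, 1)
    drift = F.kl_div(F.log_softmax(rsm(x_adv), dim=1),
                     F.softmax(logits, dim=1).detach(), reduction="batchmean")
    return cls_loss + drift

if __name__ == "__main__":
    gen = LVCVAE()
    rsm = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))  # toy discriminator
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    x = torch.rand(16, 784)                      # stand-in for flattened MNIST-sized inputs
    y = torch.randint(0, 10, (16,))
    x_rec, mu, logvar = gen(x)
    recon = F.binary_cross_entropy(x_rec, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    beta = 4.0                                   # beta-VAE weight, illustrative value
    loss = recon + beta * kld + feature_robustness_metric(rsm, x_rec, y)
    opt.zero_grad(); loss.backward(); opt.step()

In the paper, the RSM shares its architecture with the target model and is trained adversarially against the generator; the toy example above only shows how an FRM-like term can be added to a beta-VAE training loss.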

Keywords

Adversarial defense; robust features; disentanglement; VAE; GAN

Cite This Article

APA Style
Li, B., Hu, T., Liu, X., Xie, J., & Yi, P. (2025). An Effective Adversarial Defense Framework: From Robust Feature Perspective. Computers, Materials & Continua, 85(1), 2141–2155. https://doi.org/10.32604/cmc.2025.066370
Vancouver Style
Li B, Hu T, Liu X, Xie J, Yi P. An Effective Adversarial Defense Framework: From Robust Feature Perspective. Comput Mater Contin. 2025;85(1):2141–2155. https://doi.org/10.32604/cmc.2025.066370
IEEE Style
B. Li, T. Hu, X. Liu, J. Xie, and P. Yi, “An Effective Adversarial Defense Framework: From Robust Feature Perspective,” Comput. Mater. Contin., vol. 85, no. 1, pp. 2141–2155, 2025. https://doi.org/10.32604/cmc.2025.066370



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.