Open Access

ARTICLE


SAMI-FGSM: Towards Transferable Attacks with Stochastic Gradient Accumulation

Haolang Feng1,2, Yuling Chen1,2,*, Yang Huang1,2, Xuewei Wang3, Haiwei Sang4

1 State Key Laboratory of Public Big Data, Guizhou University, Guiyang, 550025, China
2 College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China
3 Computer College, Weifang University of Science and Technology, Weifang, 262700, China
4 School of Mathematics and Big Data, Guizhou Education University, Guiyang, 550018, China

* Corresponding Author: Yuling Chen. Email: email

Computers, Materials & Continua 2025, 84(3), 4469-4490. https://doi.org/10.32604/cmc.2025.064896

Abstract

Deep neural networks remain susceptible to adversarial examples: inputs carrying small perturbations crafted to mislead the model without being easily detected. Although many attack methods produce adversarial examples that achieve high success rates in the white-box setting, these examples transfer poorly to black-box models. To improve transferability along the gradient-based attack baseline, we present a novel Stochastic Gradient Accumulation Momentum Iterative Attack (SAMI-FGSM). At each iteration, gradient information is estimated by normally sampling random points around the current example, which raises the probability of capturing adversarial features. The gradient information accumulated from previous iterations' samples is then used to correct the current update, changing the raw gradient attack direction so that the update direction is more stable. Comprehensive experiments on the ImageNet dataset show that our method outperforms existing state-of-the-art gradient-based attacks, improving transferability by 10.2% on average.
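The iterative scheme described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' released code: the function names, hyperparameter values (`sigma`, `n_samples`, `mu`), and the toy `grad_fn` interface are assumptions, and the momentum update follows the standard MI-FGSM form with the paper's normal-sampling step layered on top.

```python
import numpy as np

def sami_fgsm_attack(x, grad_fn, eps=0.1, steps=10, mu=1.0,
                     n_samples=5, sigma=0.05, rng=None):
    """Sketch of a stochastic-sampling momentum iterative attack.

    grad_fn(z) returns the loss gradient at z. Gaussian samples around the
    current iterate stand in for the paper's normal-sampling step, and a
    momentum term accumulates sampled gradients across iterations.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    alpha = eps / steps           # per-iteration step within the eps budget
    g = np.zeros_like(x)          # accumulated (momentum) gradient
    x_adv = x.copy()
    for _ in range(steps):
        # Average gradients over Gaussian samples drawn around the iterate.
        sampled = np.mean(
            [grad_fn(x_adv + sigma * rng.standard_normal(x.shape))
             for _ in range(n_samples)], axis=0)
        # Momentum accumulation with an L1-normalised gradient.
        g = mu * g + sampled / (np.sum(np.abs(sampled)) + 1e-12)
        # Signed ascent step, projected back into the L-infinity eps-ball.
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv
```

For instance, with a toy loss L(z) = ||z||² (so `grad_fn = lambda z: 2.0 * z`), the returned example stays within the eps-ball of the input while its loss strictly increases, which is the behaviour the method's white-box step guarantees.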

Keywords

Adversarial examples; normal sampling; gradient accumulation; adversarial transferability

Cite This Article

APA Style
Feng, H., Chen, Y., Huang, Y., Wang, X., & Sang, H. (2025). SAMI-FGSM: Towards Transferable Attacks with Stochastic Gradient Accumulation. Computers, Materials & Continua, 84(3), 4469–4490. https://doi.org/10.32604/cmc.2025.064896
Vancouver Style
Feng H, Chen Y, Huang Y, Wang X, Sang H. SAMI-FGSM: Towards Transferable Attacks with Stochastic Gradient Accumulation. Comput Mater Contin. 2025;84(3):4469–4490. https://doi.org/10.32604/cmc.2025.064896
IEEE Style
H. Feng, Y. Chen, Y. Huang, X. Wang, and H. Sang, “SAMI-FGSM: Towards Transferable Attacks with Stochastic Gradient Accumulation,” Comput. Mater. Contin., vol. 84, no. 3, pp. 4469–4490, 2025. https://doi.org/10.32604/cmc.2025.064896



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.