Search Results (4)
  • Open Access

    ARTICLE

    SAMI-FGSM: Towards Transferable Attacks with Stochastic Gradient Accumulation

    Haolang Feng, Yuling Chen*, Yang Huang, Xuewei Wang, Haiwei Sang

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 4469-4490, 2025, DOI:10.32604/cmc.2025.064896 - 30 July 2025

    Abstract: Deep neural networks remain susceptible to adversarial examples: an adversarial attack introduces small perturbations into the original examples to mislead the model without being easily detected. Although many attack methods produce adversarial examples that achieve strong results in the white-box setting, these examples exhibit low transferability in the black-box setting. To improve transferability along the gradient-based line of attack techniques, this study presents a novel Stochastic Gradient Accumulation Momentum Iterative Attack (SAMI-FGSM). In particular, during each iteration, the gradient information…
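
    The abstract is cut off before the accumulation rule, but the MI-FGSM family it extends is well documented. Below is a minimal PyTorch sketch of a momentum iterative attack that accumulates gradients from randomly sampled neighbours in each iteration; the sampling scheme, function name, and hyperparameters are assumptions for illustration, not the paper's exact method.

        import torch

        def sami_fgsm_sketch(model, x, y, eps=8/255, steps=10, mu=1.0,
                             n_samples=5, radius=8/255):
            """Momentum iterative attack with stochastic gradient accumulation.

            Hypothetical sketch: the abstract above is truncated, so the
            neighbourhood sampling and accumulation rule here are assumptions
            in the spirit of MI-FGSM.
            """
            loss_fn = torch.nn.CrossEntropyLoss()
            alpha = eps / steps                      # per-step budget
            x_adv = x.clone().detach()
            g = torch.zeros_like(x)                  # momentum accumulator
            for _ in range(steps):
                grad_acc = torch.zeros_like(x)
                for _ in range(n_samples):           # stochastic neighbourhood samples
                    x_near = x_adv + torch.empty_like(x).uniform_(-radius, radius)
                    x_near = x_near.detach().requires_grad_(True)
                    loss = loss_fn(model(x_near), y)
                    grad_acc += torch.autograd.grad(loss, x_near)[0]
                grad_acc /= n_samples
                g = mu * g + grad_acc / grad_acc.abs().mean()   # MI-FGSM-style normalisation
                x_adv = (x_adv + alpha * g.sign()).clamp(x - eps, x + eps).clamp(0, 1).detach()
            return x_adv

    Averaging gradients over several perturbed copies before the momentum update smooths the loss landscape the attack follows, which is the usual rationale for accumulation schemes of this kind.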

  • Open Access

    ARTICLE

    Enhancing Adversarial Example Transferability via Regularized Constrained Feature Layer

    Xiaoyin Yi, Long Chen*, Jiacheng Huang, Ning Yu, Qian Huang

    CMC-Computers, Materials & Continua, Vol.83, No.1, pp. 157-175, 2025, DOI:10.32604/cmc.2025.059863 - 26 March 2025

    Abstract: Transfer-based Adversarial Attacks (TAAs) can deceive a victim model even without prior knowledge of it. They exploit the transferability of adversarial examples: examples generated from a surrogate model often remain adversarial when applied to other models. However, adversarial examples frequently overfit, as they are tailored to the particular architecture and feature representation of the source model, so their effectiveness drops in black-box transfer attacks against different target models. To address this problem, this study proposes an approach based on a Regularized Constrained Feature…
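
    As a hedged illustration of attacking at a feature layer with a regularised objective, the PyTorch sketch below perturbs the input to push an intermediate layer's features away from their clean values, while an L2 term penalises perturbation energy to curb overfitting to the source model. The specific loss, layer choice, and names are assumptions; the paper's exact regularised constraint lies behind the truncated abstract.

        import torch

        def feature_layer_attack_sketch(model, layer, x, eps=8/255, steps=10, lam=0.1):
            """Feature-space transfer attack with a regularised objective.

            Hypothetical sketch: an assumed reading of "regularized
            constrained feature layer", not the paper's method.
            """
            feats = {}
            handle = layer.register_forward_hook(lambda m, i, o: feats.update(out=o))
            with torch.no_grad():
                model(x)
                clean_feat = feats["out"].detach()   # reference features of the clean input
            alpha = eps / steps
            x_adv = x.clone().detach()
            for _ in range(steps):
                x_adv.requires_grad_(True)
                model(x_adv)
                shift = feats["out"] - clean_feat
                # push features away from the clean ones; the L2 term
                # regularises the perturbation (assumed form of the constraint)
                loss = shift.pow(2).mean() - lam * (x_adv - x).pow(2).mean()
                grad = torch.autograd.grad(loss, x_adv)[0]
                x_adv = (x_adv + alpha * grad.sign()).clamp(x - eps, x + eps).clamp(0, 1).detach()
            handle.remove()
            return x_adv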

  • Open Access

    ARTICLE

    Local Adaptive Gradient Variance Attack for Deep Fake Fingerprint Detection

    Chengsheng Yuan, Baojie Cui, Zhili Zhou, Xinting Li*, Qingming Jonathan Wu

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 899-914, 2024, DOI:10.32604/cmc.2023.045854 - 30 January 2024

    Abstract: In recent years, deep learning has been the mainstream technology for fingerprint liveness detection (FLD) because of its remarkable performance. However, recent studies have shown that deep fake fingerprint detection (DFFD) models are not resistant to adversarial examples, which introduce subtle perturbations into the fingerprint image and cause the model to make false judgments. Most existing adversarial example generation methods are based on gradient optimization, which easily falls into local optima, resulting in poor transferability of the attacks. In addition, the perturbation added…
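
    The abstract stops before the method details, but "local gradient variance" suggests a variance-tuning correction in the spirit of the published VMI-FGSM recipe, which targets exactly the local-optimum problem named above. The PyTorch sketch below estimates gradient variance from random neighbours and adds it to the current gradient before the momentum update; treating this as the paper's rule is an assumption.

        import torch

        def variance_tuned_attack_sketch(model, x, y, eps=8/255, steps=10,
                                         mu=1.0, n=5, beta=1.5):
            """Gradient attack with a local variance correction (VMI-FGSM style).

            Hypothetical stand-in for the paper's "local adaptive gradient
            variance", which the truncated abstract does not show.
            """
            loss_fn = torch.nn.CrossEntropyLoss()
            alpha = eps / steps
            x_adv = x.clone().detach()
            g = torch.zeros_like(x)                  # momentum
            v = torch.zeros_like(x)                  # variance estimate
            for _ in range(steps):
                x_adv.requires_grad_(True)
                grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
                corrected = grad + v                 # nudge the raw gradient out of local optima
                g = mu * g + corrected / corrected.abs().mean()
                # re-estimate variance from gradients of random neighbours
                nbr = torch.zeros_like(x)
                for _ in range(n):
                    x_n = x_adv.detach() + torch.empty_like(x).uniform_(-beta * eps, beta * eps)
                    x_n.requires_grad_(True)
                    nbr += torch.autograd.grad(loss_fn(model(x_n), y), x_n)[0]
                v = nbr / n - grad.detach()
                x_adv = (x_adv.detach() + alpha * g.sign()).clamp(x - eps, x + eps).clamp(0, 1).detach()
            return x_adv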

  • Open Access

    ARTICLE

    Enhancing the Adversarial Transferability with Channel Decomposition

    Bin Lin, Fei Gao, Wenli Zeng*, Jixin Chen, Cong Zhang, Qinsheng Zhu, Yong Zhou, Desheng Zheng, Qian Qiu, Shan Yang

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 3075-3085, 2023, DOI:10.32604/csse.2023.034268 - 03 April 2023

    Abstract: Current adversarial attacks against deep learning models have achieved remarkable success in the white-box scenario. However, they often exhibit weak transferability in the black-box scenario, especially when attacking models equipped with defense mechanisms. In this work, we propose a new transfer-based black-box attack called the channel decomposition attack method (CDAM), which can attack multiple black-box models by enhancing the transferability of adversarial examples. On the one hand, it tunes the gradient and stabilizes the update direction by decomposing the channels of the input example and computing an aggregate gradient. On the other hand, it…
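
    The two mechanisms this abstract does describe, channel decomposition with an aggregate gradient and a stabilised update direction, can be sketched directly. In the hypothetical PyTorch code below, each variant suppresses one input channel before the gradients are averaged and fed to a momentum update; the zero-out decomposition itself is an assumption, since CDAM's exact operation is not shown.

        import torch

        def channel_decomposition_attack_sketch(model, x, y, eps=8/255, steps=10, mu=1.0):
            """Aggregate-gradient attack over channel-decomposed inputs.

            Hypothetical sketch: only the "aggregate gradient + stabilised
            update" structure comes from the text; the decomposition is assumed.
            """
            loss_fn = torch.nn.CrossEntropyLoss()
            alpha = eps / steps
            x_adv = x.clone().detach()
            g = torch.zeros_like(x)
            for _ in range(steps):
                agg = torch.zeros_like(x)
                for c in range(x.shape[1]):          # one variant per input channel
                    x_c = x_adv.clone().detach()
                    x_c[:, c] = 0.0                  # assumed channel decomposition
                    x_c.requires_grad_(True)
                    agg += torch.autograd.grad(loss_fn(model(x_c), y), x_c)[0]
                agg /= x.shape[1]                    # aggregate gradient over the variants
                g = mu * g + agg / agg.abs().mean()  # stabilised update direction
                x_adv = (x_adv + alpha * g.sign()).clamp(x - eps, x + eps).clamp(0, 1).detach()
            return x_adv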
