Search Results (20)
  • Open Access

    ARTICLE

    SAMI-FGSM: Towards Transferable Attacks with Stochastic Gradient Accumulation

    Haolang Feng1,2, Yuling Chen1,2,*, Yang Huang1,2, Xuewei Wang3, Haiwei Sang4

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 4469-4490, 2025, DOI:10.32604/cmc.2025.064896 - 30 July 2025

    Abstract Deep neural networks remain susceptible to adversarial examples, where the goal of an adversarial attack is to introduce small perturbations into the original examples so as to confuse the model without being easily detected. Although many adversarial attack methods produce adversarial examples that achieve strong results in the white-box setting, they exhibit low transferability in the black-box setting. To improve transferability along the gradient-based line of attack techniques, we present a novel Stochastic Gradient Accumulation Momentum Iterative Attack (SAMI-FGSM) in this study. In particular, during each iteration, the gradient information …
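
    For context, the following is a minimal sketch of the momentum-iterative FGSM (MI-FGSM) baseline that gradient-accumulation attacks in this family build on; it is not the authors' SAMI-FGSM implementation, and the model, data, and hyperparameters below are illustrative placeholders.

    import torch
    import torch.nn as nn

    def mi_fgsm(model, x, y, eps=8 / 255, steps=10, mu=1.0):
        """Momentum-iterative FGSM: accumulate normalized gradients across
        iterations and step along the sign of the accumulated direction."""
        alpha = eps / steps
        x_adv = x.clone().detach()
        g = torch.zeros_like(x)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = loss_fn(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            # Normalize by the mean absolute gradient (proportional to the L1 norm)
            # before folding it into the momentum term.
            g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
            x_adv = x_adv.detach() + alpha * g.sign()
            # Project back into the eps-ball around x and the valid pixel range.
            x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0.0, 1.0)
        return x_adv.detach()

    if __name__ == "__main__":
        # Placeholder model and random data purely so the sketch runs end to end.
        model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
        x = torch.rand(4, 3, 32, 32)
        y = torch.randint(0, 10, (4,))
        x_adv = mi_fgsm(model, x, y)
        print((x_adv - x).abs().max())  # perturbation stays within eps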

  • Open Access

    ARTICLE

    Enhancing Adversarial Example Transferability via Regularized Constrained Feature Layer

    Xiaoyin Yi1,2, Long Chen1,3,4,*, Jiacheng Huang1, Ning Yu1, Qian Huang5

    CMC-Computers, Materials & Continua, Vol.83, No.1, pp. 157-175, 2025, DOI:10.32604/cmc.2025.059863 - 26 March 2025

    Abstract Transfer-based Adversarial Attacks (TAAs) can deceive a victim model even without prior knowledge of it. This is achieved by exploiting a property of adversarial examples: when generated from a surrogate model, they retain their effectiveness when applied to other models thanks to their good transferability. However, adversarial examples often exhibit overfitting, as they are tailored to exploit the particular architecture and feature representation of the source model. Consequently, their effectiveness is reduced when attempting black-box transfer attacks on different target models. To solve this problem, this study proposes an approach based on a Regularized Constrained Feature …

  • Open Access

    ARTICLE

    Exploratory Research on Defense against Natural Adversarial Examples in Image Classification

    Yaoxuan Zhu, Hua Yang, Bin Zhu*

    CMC-Computers, Materials & Continua, Vol.82, No.2, pp. 1947-1968, 2025, DOI:10.32604/cmc.2024.057866 - 17 February 2025

    Abstract The emergence of adversarial examples has revealed the inadequacies in the robustness of image classification models based on Convolutional Neural Networks (CNNs). Particularly in recent years, the discovery of natural adversarial examples has posed significant challenges, as traditional defense methods against adversarial attacks have proven to be largely ineffective against these natural adversarial examples. This paper explores defenses against these natural adversarial examples from three perspectives: adversarial examples, model architecture, and dataset. First, it employs Class Activation Mapping (CAM) to visualize how models classify natural adversarial examples, identifying several typical attack patterns. Next, various common…
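
    As a rough illustration of the visualization step mentioned above, here is a generic Grad-CAM-style heatmap computation (a widely used CAM variant); the tiny convolutional trunk and classifier head are placeholders, and this is not the authors' analysis pipeline.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def grad_cam(trunk, head, x, target_class):
        """Grad-CAM: weight the last convolutional feature maps by the spatial
        mean of the class-score gradients, then keep the positive part."""
        feats = trunk(x)                      # (N, C, H, W), kept in the graph
        feats.retain_grad()
        scores = head(feats)                  # (N, num_classes)
        scores[:, target_class].sum().backward()
        weights = feats.grad.mean(dim=(2, 3), keepdim=True)   # (N, C, 1, 1)
        cam = F.relu((weights * feats).sum(dim=1))            # (N, H, W)
        # Normalize each heatmap to [0, 1] for visualization.
        return (cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)).detach()

    if __name__ == "__main__":
        # Placeholder network split into a convolutional trunk and a classifier head.
        trunk = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
        heatmap = grad_cam(trunk, head, torch.rand(1, 3, 32, 32), target_class=3)
        print(heatmap.shape)  # torch.Size([1, 32, 32])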

  • Open Access

    ARTICLE

    An Empirical Study on the Effectiveness of Adversarial Examples in Malware Detection

    Younghoon Ban, Myeonghyun Kim, Haehyun Cho*

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.3, pp. 3535-3563, 2024, DOI:10.32604/cmes.2023.046658 - 11 March 2024

    Abstract Antivirus vendors and the research community employ Machine Learning (ML) or Deep Learning (DL)-based static analysis techniques for efficient identification of new threats, given the continual emergence of novel malware variants. On the other hand, numerous researchers have reported that Adversarial Examples (AEs), generated by manipulating previously detected malware, can successfully evade ML/DL-based classifiers. Commercial antivirus systems, in particular, have been identified as vulnerable to such AEs. This paper first focuses on conducting black-box attacks to circumvent ML/DL-based malware classifiers. Our attack method utilizes seven different perturbations, including Overlay Append, Section Append, and Break Checksum,…
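
    By way of illustration, one of the simplest byte-level perturbations named above, an overlay append, can be sketched as follows; the file names and payload are hypothetical, and this is only a toy rendering of that single perturbation, not the authors' black-box attack framework.

    import os

    def overlay_append(in_path, out_path, payload=b"\x00" * 1024):
        """Append bytes after the end of a PE file (the 'overlay'); execution is
        unchanged, but the byte-level features seen by a static classifier shift."""
        with open(in_path, "rb") as f:
            data = f.read()
        with open(out_path, "wb") as f:
            f.write(data + payload)
        return os.path.getsize(out_path) - len(data)

    if __name__ == "__main__":
        # Write a small dummy file so the sketch runs; a real attack would start
        # from an actual malware sample.
        with open("sample.bin", "wb") as f:
            f.write(b"MZ" + b"\x90" * 64)
        print(f"appended {overlay_append('sample.bin', 'sample_overlay.bin')} bytes")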

  • Open Access

    ARTICLE

    Local Adaptive Gradient Variance Attack for Deep Fake Fingerprint Detection

    Chengsheng Yuan1,2, Baojie Cui1,2, Zhili Zhou3, Xinting Li4,*, Qingming Jonathan Wu5

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 899-914, 2024, DOI:10.32604/cmc.2023.045854 - 30 January 2024

    Abstract In recent years, deep learning has been the mainstream technology for fingerprint liveness detection (FLD) tasks because of its remarkable performance. However, recent studies have shown that these deep fake fingerprint detection (DFFD) models are not resistant to attacks by adversarial examples, which are generated by introducing subtle perturbations into the fingerprint image, causing the model to make false judgments. Most of the existing adversarial example generation methods are based on gradient optimization, which easily falls into local optima, resulting in poor transferability of adversarial attacks. In addition, the perturbation added…

  • Open Access

    ARTICLE

    An Intelligent Secure Adversarial Examples Detection Scheme in Heterogeneous Complex Environments

    Weizheng Wang1,3, Xiangqi Wang2,*, Xianmin Pan1, Xingxing Gong3, Jian Liang3, Pradip Kumar Sharma4, Osama Alfarraj5, Wael Said6

    CMC-Computers, Materials & Continua, Vol.76, No.3, pp. 3859-3876, 2023, DOI:10.32604/cmc.2023.041346 - 08 October 2023

    Abstract Image-denoising techniques are widely used to defend against Adversarial Examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations. The remaining perturbations tend to amplify as they propagate through deeper layers of the network, leading to misclassifications. Moreover, image denoising compromises the classification accuracy of original examples. To address these challenges in AE defense through image denoising, this paper proposes a novel AE detection technique. The proposed technique combines multiple traditional image-denoising algorithms and Convolutional Neural Network (CNN) structures. The detector model integrates the classification results of different models as the input to…
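
    The following is a minimal sketch of how such a detector could be wired together, assuming a classifier, two simple traditional denoisers, and a small meta-detector that consumes the concatenated class probabilities; every component here is a placeholder rather than the paper's architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def median_denoise(x, k=3):
        """3x3 median filter via unfold, standing in for a traditional denoiser."""
        pad = k // 2
        patches = F.unfold(F.pad(x, (pad,) * 4, mode="reflect"), kernel_size=k)
        med = patches.view(x.size(0), x.size(1), k * k, -1).median(dim=2).values
        return med.view_as(x)

    def mean_denoise(x):
        """3x3 box (mean) filter as a second simple denoiser."""
        return F.avg_pool2d(x, 3, stride=1, padding=1)

    def detector_features(classifier, x, denoisers):
        """Classify the raw input and each denoised version, then concatenate the
        softmax outputs; a small detector network takes this vector as input."""
        views = [x] + [d(x) for d in denoisers]
        return torch.cat([F.softmax(classifier(v), dim=1) for v in views], dim=1)

    if __name__ == "__main__":
        classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
        detector = nn.Linear(30, 2)  # adversarial vs. benign
        feats = detector_features(classifier, torch.rand(2, 3, 32, 32),
                                  [median_denoise, mean_denoise])
        print(detector(feats).shape)  # torch.Size([2, 2])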

  • Open Access

    ARTICLE

    Instance Reweighting Adversarial Training Based on Confused Label

    Zhicong Qiu1,2, Xianmin Wang1,*, Huawei Ma1, Songcao Hou1, Jing Li1,2,*, Zuoyong Li2

    Intelligent Automation & Soft Computing, Vol.37, No.2, pp. 1243-1256, 2023, DOI:10.32604/iasc.2023.038241 - 21 June 2023

    Abstract Reweighting adversarial examples during training plays an essential role in improving the robustness of neural networks, since examples closer to the decision boundary are much more vulnerable to attack and should be given larger weights. The probability margin (PM) method is a promising approach to continuously and path-independently measuring such closeness between an example and the decision boundary. However, the performance of PM is limited because it fails to effectively distinguish examples with only one misclassified category from those with multiple misclassified categories, where…
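
    To make the probability-margin idea concrete, here is a minimal sketch of computing per-example probability margins and using them to reweight a training loss; it follows the generic PM formulation (true-class probability minus the largest other-class probability) with one common softmax weighting scheme, not the authors' confused-label variant.

    import torch
    import torch.nn.functional as F

    def probability_margin(logits, y):
        """PM(x, y) = p_y - max_{k != y} p_k: positive when correctly classified,
        negative once the example crosses the decision boundary."""
        probs = F.softmax(logits, dim=1)
        p_true = probs.gather(1, y.unsqueeze(1)).squeeze(1)
        others = probs.clone()
        others.scatter_(1, y.unsqueeze(1), float("-inf"))
        return p_true - others.max(dim=1).values

    def reweighted_loss(logits, y, temperature=2.0):
        """Give larger weights to examples with smaller margins, i.e. those
        closer to (or beyond) the decision boundary."""
        pm = probability_margin(logits, y).detach()
        weights = F.softmax(-temperature * pm, dim=0) * y.numel()  # mean weight ~ 1
        return (weights * F.cross_entropy(logits, y, reduction="none")).mean()

    if __name__ == "__main__":
        logits, y = torch.randn(8, 10), torch.randint(0, 10, (8,))
        print(probability_margin(logits, y))
        print(reweighted_loss(logits, y))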

  • Open Access

    ARTICLE

    Adversarial Examples Protect Your Privacy on Speech Enhancement System

    Mingyu Dong, Diqun Yan*, Rangding Wang

    Computer Systems Science and Engineering, Vol.46, No.1, pp. 1-12, 2023, DOI:10.32604/csse.2023.034568 - 20 January 2023

    Abstract Speech can be leaked easily and imperceptibly. When people use their phones, the personal voice assistant is constantly listening and waiting to be activated. Private content in speech may be maliciously extracted through automatic speech recognition (ASR) technology by some applications on phone devices. To guarantee that the recognized speech content is accurate, speech enhancement technology is used to denoise the input speech. Speech enhancement technology has developed rapidly along with deep neural networks (DNNs), but adversarial examples can cause DNNs to fail. Considering that this vulnerability of DNNs can be used to protect the privacy in…

  • Open Access

    ARTICLE

    Defending Adversarial Examples by a Clipped Residual U-Net Model

    Kazim Ali1,*, Adnan N. Qureshi1, Muhammad Shahid Bhatti2, Abid Sohail2, Mohammad Hijji3

    Intelligent Automation & Soft Computing, Vol.35, No.2, pp. 2237-2256, 2023, DOI:10.32604/iasc.2023.028810 - 19 July 2022

    Abstract Deep learning-based systems have succeeded in many computer vision tasks. However, recent studies indicate that these systems are at risk in the presence of adversarial attacks. These attacks can easily fool deep learning models, such as the various convolutional neural networks (CNNs) used in computer vision tasks from image classification to object detection. The adversarial examples are carefully designed by injecting a slight perturbation into the clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as MagNet defense, Generative Adversarial Network Defense, Deep Regret Analytic Generative…

  • Open Access

    ARTICLE

    An Overview of Adversarial Attacks and Defenses

    Kai Chen*, Jinwei Wang, Jiawei Zhang

    Journal of Information Hiding and Privacy Protection, Vol.4, No.1, pp. 15-24, 2022, DOI:10.32604/jihpp.2022.029006 - 17 June 2022

    Abstract In recent years, machine learning has become more and more popular, and the continuous development of deep learning technology in particular has brought great changes to many fields. In tasks such as image classification, natural language processing, information hiding, and multimedia synthesis, the performance of deep learning has far exceeded that of traditional algorithms. However, researchers have found that although deep learning can train an accurate model on a large amount of data to complete various tasks, the model is vulnerable to examples that have been artificially modified. This technique is called an adversarial attack, while the…

Displaying results 1-10 of 20.