Search Results (39)
  • Open Access

    ARTICLE

    Gradient-Guided Assembly Instruction Relocation for Adversarial Attacks Against Binary Code Similarity Detection

    Ran Wei*, Hui Shu

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-23, 2026, DOI:10.32604/cmc.2025.069562 - 10 November 2025

    Abstract Transformer-based models have significantly advanced binary code similarity detection (BCSD) by leveraging their semantic encoding capabilities for efficient function matching across diverse compilation settings. Although adversarial examples can strategically undermine the accuracy of BCSD models and protect critical code, existing techniques predominantly depend on inserting artificial instructions, which incur high computational costs and offer limited diversity of perturbations. To address these limitations, we propose AIMA, a novel gradient-guided assembly instruction relocation method. Our method decouples the detection model into tokenization, embedding, and encoding layers to enable efficient gradient computation. Since token IDs of instructions are… More >
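
The abstract describes decoupling a transformer-based BCSD model into tokenization, embedding, and encoding layers so that gradients can be computed efficiently. A minimal, hypothetical PyTorch sketch of that general idea follows, ranking instruction positions by the gradient of a similarity score with respect to their embeddings; the toy encoder, pooling choice, and tensor shapes are assumptions, not the paper's AIMA implementation.

```python
# Hypothetical sketch: rank instruction positions by the gradient of a
# similarity score with respect to their embeddings. The toy embedding,
# encoder, and mean pooling are assumptions, not the AIMA implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, dim, seq_len = 1000, 64, 32
embedding = nn.Embedding(vocab_size, dim)                 # decoupled embedding layer
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    num_layers=2,
)                                                         # decoupled encoding layer

def rank_positions(token_ids, target_vec):
    """Rank instruction tokens by how strongly they influence the similarity
    between the pooled function embedding and a target function vector."""
    emb = embedding(token_ids).detach().requires_grad_(True)     # (seq, dim)
    pooled = encoder(emb.unsqueeze(0)).mean(dim=1).squeeze(0)    # pooled function vector
    F.cosine_similarity(pooled, target_vec, dim=0).backward()
    scores = emb.grad.norm(dim=-1)                               # per-position gradient size
    return torch.argsort(scores, descending=True)

token_ids = torch.randint(0, vocab_size, (seq_len,))
target = torch.randn(dim)
print(rank_positions(token_ids, target)[:5])   # candidate positions to relocate
```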

  • Open Access

    ARTICLE

    A Novel Unsupervised Structural Attack and Defense for Graph Classification

    Yadong Wang1, Zhiwei Zhang1,*, Pengpeng Qiao2, Ye Yuan1, Guoren Wang1

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-22, 2026, DOI:10.32604/cmc.2025.068590 - 10 November 2025

    Abstract Graph Neural Networks (GNNs) have proven highly effective for graph classification across diverse fields such as social networks, bioinformatics, and finance, due to their capability to learn complex graph structures. However, despite their success, GNNs remain vulnerable to adversarial attacks that can significantly degrade their classification accuracy. Existing adversarial attack strategies primarily rely on label information to guide the attacks, which limits their applicability in scenarios where such information is scarce or unavailable. This paper introduces an innovative unsupervised attack method for graph classification, which operates without relying on label information, thereby enhancing its applicability… More >
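
The abstract states only that the attack operates without label information. The sketch below illustrates one generic label-free objective, flipping the edge that most shifts a surrogate GNN's graph embedding; the surrogate model and the greedy edge search are assumptions for illustration, not the paper's method.

```python
# Hypothetical, label-free structural attack: greedily flip the edge that most
# shifts a surrogate GNN's graph embedding. Surrogate and objective are
# illustrative assumptions, not the paper's unsupervised attack.
import torch

torch.manual_seed(0)
n, d, h = 8, 5, 16
X = torch.randn(n, d)                              # node features
A = (torch.rand(n, n) < 0.3).float()
A = torch.triu(A, 1)
A = A + A.T                                        # symmetric adjacency, no self-loops
W = torch.randn(d, h)                              # frozen surrogate GCN weight

def graph_embedding(A, X):
    A_hat = A + torch.eye(A.size(0))                       # add self-loops
    deg_inv = torch.diag(1.0 / A_hat.sum(1))
    return torch.relu(deg_inv @ A_hat @ X @ W).mean(0)     # mean-pooled graph vector

z0 = graph_embedding(A, X)
best, best_shift = None, -1.0
for i in range(n):
    for j in range(i + 1, n):
        A2 = A.clone()
        A2[i, j] = A2[j, i] = 1.0 - A2[i, j]               # flip edge (i, j)
        shift = torch.norm(graph_embedding(A2, X) - z0).item()
        if shift > best_shift:
            best, best_shift = (i, j), shift
print("edge flip with the largest embedding shift:", best)
```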

  • Open Access

    ARTICLE

    AMA: Adaptive Multimodal Adversarial Attack with Dynamic Perturbation Optimization

    Yufei Shi, Ziwen He*, Teng Jin, Haochen Tong, Zhangjie Fu

    CMES-Computer Modeling in Engineering & Sciences, Vol.144, No.2, pp. 1831-1848, 2025, DOI:10.32604/cmes.2025.067658 - 31 August 2025

    Abstract This article proposes an innovative adversarial attack method, AMA (Adaptive Multimodal Attack), which introduces an adaptive feedback mechanism by dynamically adjusting the perturbation strength. Specifically, AMA adjusts perturbation amplitude based on task complexity and optimizes the perturbation direction based on the gradient direction in real time to enhance attack efficiency. Experimental results demonstrate that AMA elevates attack success rates from approximately 78.95% to 89.56% on visual question answering and from 78.82% to 84.96% on visual reasoning tasks across representative vision-language benchmarks. These findings demonstrate AMA’s superior attack efficiency and reveal the vulnerability of current visual More >
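
The abstract describes taking the perturbation direction from the gradient while adaptively adjusting the perturbation strength. A generic sketch of such an adaptive-step gradient attack on a toy classifier follows; the step-size schedule, iteration budget, and toy model are assumptions, not the AMA algorithm itself.

```python
# Generic adaptive-step gradient attack: the direction follows the loss
# gradient; the step size grows while the loss keeps rising and shrinks
# otherwise. Toy model and schedule are assumptions, not AMA.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))   # toy stand-in classifier
x, y = torch.rand(1, 3, 32, 32), torch.tensor([3])
eps, alpha, prev_loss = 8 / 255, 1 / 255, -float("inf")
x_adv = x.clone()

for _ in range(20):
    x_adv.requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    # Adaptive feedback: enlarge the step while the loss still improves.
    alpha = min(alpha * 1.5, eps) if loss.item() > prev_loss else max(alpha * 0.5, 0.5 / 255)
    prev_loss = loss.item()
    with torch.no_grad():                          # step along the gradient, stay in the eps-ball
        x_adv = (x + (x_adv + alpha * grad.sign() - x).clamp(-eps, eps)).clamp(0, 1)
```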

  • Open Access

    ARTICLE

    DSGNN: Dual-Shield Defense for Robust Graph Neural Networks

    Xiaohan Chen1, Yuanfang Chen1,*, Gyu Myoung Lee2, Noel Crespi3, Pierluigi Siano4

    CMC-Computers, Materials & Continua, Vol.85, No.1, pp. 1733-1750, 2025, DOI:10.32604/cmc.2025.067284 - 29 August 2025

    Abstract Graph Neural Networks (GNNs) have demonstrated outstanding capabilities in processing graph-structured data and are increasingly being integrated into large-scale pre-trained models, such as Large Language Models (LLMs), to enhance structural reasoning, knowledge retrieval, and memory management. The expansion of their application scope imposes higher requirements on the robustness of GNNs. However, as GNNs are applied to more dynamic and heterogeneous environments, they become increasingly vulnerable to real-world perturbations. In particular, graph data frequently encounters joint adversarial perturbations that simultaneously affect both structures and features, which are significantly more challenging than isolated attacks. These disruptions, caused… More >

  • Open Access

    ARTICLE

    A Black-Box Speech Adversarial Attack Method Based on Enhanced Neural Predictors in Industrial IoT

    Yun Zhang, Zhenhua Yu*, Xufei Hu, Xuya Cong, Ou Ye

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 5403-5426, 2025, DOI:10.32604/cmc.2025.067120 - 30 July 2025

    Abstract Devices in the Industrial Internet of Things are vulnerable to voice adversarial attacks. Studying adversarial speech samples is crucial for enhancing the security of automatic speech recognition systems in Industrial Internet of Things devices. Current black-box attack methods often face challenges such as complex search processes and excessive perturbation generation. To address these issues, this paper proposes a black-box voice adversarial attack method based on enhanced neural predictors. This method searches for minimal perturbations in the perturbation space, employing an optimization process guided by a self-attention neural predictor to identify the optimal perturbation direction. This direction… More >
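
The abstract describes searching for minimal perturbations with an optimization process guided by a self-attention neural predictor. The sketch below shows one hypothetical arrangement: candidate perturbations are scored by a small self-attention predictor, only the top candidate is sent to a (here simulated) black-box model, and the predictor is refit on the observed queries. All components are illustrative assumptions, not the paper's enhanced neural predictor.

```python
# Hypothetical predictor-guided black-box search: a small self-attention model
# scores candidate audio perturbations, only the best candidate is sent to the
# (here simulated) black-box model, and the predictor is refit on the replies.
import torch
import torch.nn as nn

torch.manual_seed(0)
T = 64                                       # toy waveform length
audio = torch.randn(T)

def black_box_score(wav):                    # stand-in for querying the real ASR system
    return torch.sigmoid(wav.mean()).item()

attn = nn.MultiheadAttention(embed_dim=8, num_heads=2, batch_first=True)
proj_in, proj_out = nn.Linear(1, 8), nn.Linear(8, 1)
opt = torch.optim.Adam(
    list(attn.parameters()) + list(proj_in.parameters()) + list(proj_out.parameters()), lr=1e-2)

def predict(delta):                          # predictor: perturbation -> predicted score
    h = proj_in(delta.view(1, T, 1))
    h, _ = attn(h, h, h)
    return proj_out(h.mean(dim=1)).squeeze()

history = []
for step in range(10):
    cands = [0.01 * torch.randn(T) for _ in range(16)]          # candidate perturbations
    with torch.no_grad():
        best = min(cands, key=lambda d: predict(d).item())      # predictor chooses one query
    history.append((best, black_box_score(audio + best)))       # single black-box query
    for delta, s in history:                                    # refit the predictor
        opt.zero_grad()
        ((predict(delta) - s) ** 2).backward()
        opt.step()
```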

  • Open Access

    ARTICLE

    Mitigating Adversarial Attack through Randomization Techniques and Image Smoothing

    Hyeong-Gyeong Kim1, Sang-Min Choi2, Hyeon Seo2, Suwon Lee2,*

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 4381-4397, 2025, DOI:10.32604/cmc.2025.067024 - 30 July 2025

    Abstract Adversarial attacks pose a significant threat to artificial intelligence systems by exposing them to vulnerabilities in deep learning models. Existing defense mechanisms often suffer drawbacks, such as the need for model retraining, significant inference time overhead, and limited effectiveness against specific attack types. Achieving perfect defense against adversarial attacks remains elusive, emphasizing the importance of mitigation strategies. In this study, we propose a defense mechanism that applies random cropping and Gaussian filtering to input images to mitigate the impact of adversarial attacks. First, the image was randomly cropped to vary its dimensions and then placed… More >
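
The defense described here reduces to two concrete input transformations: a random crop that varies the image dimensions, followed by Gaussian smoothing. A minimal sketch using torchvision follows; the crop fraction, kernel size, and sigma are assumed values, not the paper's settings.

```python
# Minimal sketch of the described preprocessing defense: randomly crop the
# image, resize it back, then apply Gaussian smoothing before classification.
# The crop fraction, kernel size, and sigma are assumed values.
import torch
import torchvision.transforms.functional as TF

def crop_and_blur_defense(img, crop_frac=0.9, sigma=1.0):
    """img: (C, H, W) float tensor in [0, 1]; returns the transformed input."""
    c, h, w = img.shape
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    top = torch.randint(0, h - ch + 1, (1,)).item()
    left = torch.randint(0, w - cw + 1, (1,)).item()
    cropped = TF.resized_crop(img, top, left, ch, cw, [h, w])      # random crop, resize back
    return TF.gaussian_blur(cropped, kernel_size=5, sigma=sigma)   # smooth residual perturbations

x = torch.rand(3, 224, 224)
x_def = crop_and_blur_defense(x)   # feed x_def to the classifier instead of x
```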

  • Open Access

    ARTICLE

    DEMGAN: A Machine Learning-Based Intrusion Detection System Evasion Scheme

    Dawei Xu1,2,3, Yue Lv1, Min Wang1, Baokun Zheng4,*, Jian Zhao1,3, Jiaxuan Yu5

    CMC-Computers, Materials & Continua, Vol.84, No.1, pp. 1731-1746, 2025, DOI:10.32604/cmc.2025.064833 - 09 June 2025

    Abstract Network intrusion detection systems (IDS) are a prevalent method for safeguarding network traffic against attacks. However, existing IDS primarily depend on machine learning (ML) models, which are vulnerable to evasion through adversarial examples. In recent years, the Wasserstein Generative Adversarial Network (WGAN), based on Wasserstein distance, has been extensively utilized to generate adversarial examples. Nevertheless, several challenges persist: (1) WGAN experiences the mode collapse problem when generating multi-category network traffic data, leading to subpar quality and insufficient diversity in the generated data; (2) Due to unstable training processes, the authenticity of the data produced by… More >
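
For background on the WGAN machinery the abstract refers to, the sketch below shows a standard Wasserstein GAN training step (weight-clipping variant) on tabular traffic-style feature vectors. The feature width and hyperparameters are assumptions, and this is plain WGAN, not the DEMGAN scheme itself.

```python
# Standard Wasserstein GAN training step (weight-clipping variant) on tabular
# traffic-style feature vectors; background for the WGAN machinery mentioned
# above, not the DEMGAN scheme. Feature width and hyperparameters are assumed.
import torch
import torch.nn as nn

torch.manual_seed(0)
feat_dim, z_dim = 41, 16                       # assumed NSL-KDD-style feature width
G = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))
D = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))     # critic
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)
real = torch.rand(128, feat_dim)               # stand-in for real traffic features

for step in range(5):
    for _ in range(5):                         # several critic updates per generator update
        z = torch.randn(128, z_dim)
        d_loss = -(D(real).mean() - D(G(z).detach()).mean())   # maximize the Wasserstein estimate
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()
        for p in D.parameters():               # clipping keeps the critic roughly 1-Lipschitz
            p.data.clamp_(-0.01, 0.01)
    z = torch.randn(128, z_dim)
    g_loss = -D(G(z)).mean()                   # generator raises the critic's score on fakes
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```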

  • Open Access

    ARTICLE

    Improving Security-Sensitive Deep Learning Models through Adversarial Training and Hybrid Defense Mechanisms

    Xuezhi Wen1, Eric Danso2,*, Solomon Danso2

    Journal of Cyber Security, Vol.7, pp. 45-69, 2025, DOI:10.32604/jcs.2025.063606 - 08 May 2025

    Abstract Deep learning models have achieved remarkable success in healthcare, finance, and autonomous systems, yet their security vulnerabilities to adversarial attacks remain a critical challenge. This paper presents a novel dual-phase defense framework that combines progressive adversarial training with dynamic runtime protection to address evolving threats. Our approach introduces three key innovations: multi-stage adversarial training with TRADES (Tradeoff-inspired Adversarial Defense via Surrogate-loss minimization) loss that progressively scales perturbation strength, maintaining 85.10% clean accuracy on CIFAR-10 (Canadian Institute for Advanced Research 10-class dataset) while improving robustness; a hybrid runtime defense integrating feature manipulation, statistical anomaly detection, and… More >
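
The TRADES objective the abstract names combines clean cross-entropy with a KL robustness term whose inner maximization is solved by PGD. The sketch below adds the progressive perturbation scaling mentioned in the abstract by growing epsilon with the epoch; the schedule and hyperparameters are assumptions, not the paper's exact settings.

```python
# TRADES-style training step with progressive perturbation scaling: epsilon
# grows with the epoch while the loss balances clean cross-entropy against a
# KL robustness term. The schedule and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def trades_step(model, x, y, epoch, max_epochs, eps_max=8 / 255,
                beta=6.0, step_size=2 / 255, pgd_steps=10):
    eps = eps_max * min(1.0, (epoch + 1) / (0.5 * max_epochs))     # progressive epsilon schedule
    p_clean = F.softmax(model(x), dim=1).detach()
    x_adv = x + 0.001 * torch.randn_like(x)
    for _ in range(pgd_steps):                                     # inner maximization (PGD on KL)
        x_adv = x_adv.detach().requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_clean, reduction="batchmean")
        grad, = torch.autograd.grad(kl, x_adv)
        x_adv = (x + (x_adv + step_size * grad.sign() - x).clamp(-eps, eps)).clamp(0, 1)
    x_adv = x_adv.detach()
    logits = model(x)                                              # outer minimization
    return F.cross_entropy(logits, y) + beta * F.kl_div(
        F.log_softmax(model(x_adv), dim=1), F.softmax(logits, dim=1), reduction="batchmean")

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
loss = trades_step(model, x, y, epoch=0, max_epochs=100)
loss.backward()   # followed by the usual optimizer step
```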

  • Open Access

    ARTICLE

    Enhancing Adversarial Example Transferability via Regularized Constrained Feature Layer

    Xiaoyin Yi1,2, Long Chen1,3,4,*, Jiacheng Huang1, Ning Yu1, Qian Huang5

    CMC-Computers, Materials & Continua, Vol.83, No.1, pp. 157-175, 2025, DOI:10.32604/cmc.2025.059863 - 26 March 2025

    Abstract Transfer-based Adversarial Attacks (TAAs) can deceive a victim model even without prior knowledge. This is achieved by leveraging the property of adversarial examples. That is, when generated from a surrogate model, they retain their features if applied to other models due to their good transferability. However, adversarial examples often exhibit overfitting, as they are tailored to exploit the particular architecture and feature representation of source models. Consequently, when attempting black-box transfer attacks on different target models, their effectiveness is decreased. To solve this problem, this study proposes an approach based on a Regularized Constrained Feature More >
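
The abstract describes constraining and regularizing a feature layer to reduce overfitting to the surrogate model. The sketch below shows a generic feature-layer transfer attack: the input is perturbed to push an intermediate feature map away from its clean value, with an L2 regularizer on the perturbation. The chosen layer, weights, and attack loop are assumptions, not the paper's regularized constrained feature layer.

```python
# Generic feature-layer transfer attack: perturb the input to push an
# intermediate feature map away from its clean value, with an L2 regularizer
# on the perturbation. Layer choice, weights, and loop are assumptions, not
# the paper's regularized constrained feature layer.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
feats = {}
model.layer3.register_forward_hook(lambda m, i, o: feats.update(out=o))   # assumed mid layer

def feature_transfer_attack(x, eps=8 / 255, steps=10, lam=0.1):
    with torch.no_grad():
        model(x)
        clean_feat = feats["out"].detach()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        model((x + delta).clamp(0, 1))
        # Maximize feature distortion, minus a regularizer on the perturbation.
        obj = (feats["out"] - clean_feat).pow(2).mean() - lam * delta.pow(2).mean()
        grad, = torch.autograd.grad(obj, delta)
        delta = (delta + (2 / 255) * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

x_adv = feature_transfer_attack(torch.rand(1, 3, 224, 224))
```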

  • Open Access

    ARTICLE

    Practical Adversarial Attacks Imperceptible to Humans in Visual Recognition

    Donghyeok Park1, Sumin Yeon2, Hyeon Seo2, Seok-Jun Buu2, Suwon Lee2,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.142, No.3, pp. 2725-2737, 2025, DOI:10.32604/cmes.2025.061732 - 03 March 2025

    Abstract Recent research on adversarial attacks has primarily focused on white-box attack techniques, with limited exploration of black-box attack methods. Furthermore, in many black-box research scenarios, it is assumed that the output label and probability distribution can be observed without imposing any constraints on the number of attack attempts. Unfortunately, this disregard for the real-world practicality of attacks, particularly their potential for human detectability, has left a gap in the research landscape. Considering these limitations, our study focuses on using a similar color attack method, assuming access only to the output label, limiting the number of More >
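
The abstract describes a label-only attack that applies perceptually similar color changes under a limited query budget. A generic sketch of that setting follows: small per-channel color shifts are tried at random and the first one that flips the predicted label is kept, stopping at a fixed budget. The shift model and budget are assumptions, not the paper's exact method.

```python
# Label-only, query-limited sketch: try small, perceptually similar color
# shifts at random and keep the first one that flips the predicted label,
# stopping at a fixed query budget. Shift model and budget are assumptions.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()

def predicted_label(x):                        # only the output label is observed
    with torch.no_grad():
        return model(x).argmax(dim=1).item()

def similar_color_attack(x, budget=50, max_shift=0.05):
    y0 = predicted_label(x)
    for _ in range(budget):                    # hard cap on the number of queries
        shift = (torch.rand(1, 3, 1, 1) - 0.5) * 2 * max_shift   # small per-channel color offset
        cand = (x + shift).clamp(0, 1)
        if predicted_label(cand) != y0:        # success: the label flipped
            return cand
    return None                                # no success within the budget

adv = similar_color_attack(torch.rand(1, 3, 224, 224))
```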

Displaying results 1-10 of 39.