Search Results (45)
  • Open Access

    ARTICLE

    Adversarial Attack Defense in Graph Neural Networks via Multiview Learning and Attention-Guided Topology Filtering

    Cheng Yang, Xianghong Tang*, Jianguang Lu, Chaobin Wang

    CMC-Computers, Materials & Continua, Vol.87, No.3, 2026, DOI:10.32604/cmc.2026.076126 - 09 April 2026

    Abstract Graph neural networks (GNNs) have demonstrated impressive capabilities in processing graph-structured data, yet their vulnerability to adversarial perturbations poses serious challenges to real-world applications. Existing defense methods often fail to handle diverse attack types or to adapt to dynamic adversarial strategies, because they typically rely on static defense mechanisms or focus narrowly on a single robustness dimension. To address these limitations, we propose the adversarial attention-based robustness strategy (AARS), a unified framework designed to enhance the robustness of GNNs against structural and feature perturbations. AARS operates in two stages: the first stage employs…
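    The rest of the pipeline is behind the truncation, but the attention-guided topology filtering named in the title admits a minimal sketch: score each edge by the feature similarity of its endpoints and prune the lowest-scoring edges, which structural perturbations tend to produce. The cosine-similarity scoring rule, the keep_ratio parameter, and the function name below are illustrative assumptions, not the paper's design; a real model would learn the attention weights jointly with the classifier.

    ```python
    # Illustrative sketch of attention-guided topology filtering (assumed
    # design, not the AARS implementation): edges whose endpoints have
    # dissimilar features receive low scores and are pruned before training.
    import numpy as np

    def filter_topology(adj: np.ndarray, feats: np.ndarray, keep_ratio: float = 0.9):
        """Return a copy of `adj` with the lowest-attention edges removed."""
        # Cosine similarity between endpoint features stands in for a
        # learned attention score.
        unit = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
        sim = unit @ unit.T
        rows, cols = np.triu(adj, k=1).nonzero()      # undirected edge list
        keep = np.argsort(sim[rows, cols])[::-1][: int(len(rows) * keep_ratio)]
        filtered = np.zeros_like(adj)
        filtered[rows[keep], cols[keep]] = 1
        return filtered + filtered.T                  # restore symmetry

    adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
    feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])  # node 2 is an outlier
    print(filter_topology(adj, feats, keep_ratio=0.66))     # drops node 2's edges
    ```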

  • Open Access

    REVIEW

    Cybersecurity Opportunities and Risks of Artificial Intelligence in Industrial Control Systems: A Survey

    Ka-Kyung Kim, Joon-Seok Kim, Dong-Hyuk Shin, Ieck-Chae Euom*

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.2, 2026, DOI:10.32604/cmes.2026.077315 - 26 February 2026

    Abstract As attack techniques evolve and data volumes increase, the integration of artificial intelligence-based security solutions into industrial control systems has become increasingly essential. Artificial intelligence holds significant potential to improve the operational efficiency and cybersecurity of these systems. However, its dependence on cyber-based infrastructures expands the attack surface and introduces the risk that adversarial manipulations of artificial intelligence models may cause physical harm. To address these concerns, this study presents a comprehensive review of artificial intelligence-driven threat detection methods and adversarial attacks targeting artificial intelligence within industrial control environments, examining both their benefits and associated…

  • Open Access

    ARTICLE

    AdvYOLO: An Improved Cross-Conv-Block Feature Fusion-Based YOLO Network for Transferable Adversarial Attacks on ORSIs Object Detection

    Leyu Dai1,2,3, Jindong Wang1,2,3, Ming Zhou1,2,3, Song Guo1,2,3, Hengwei Zhang1,2,3,*

    CMC-Computers, Materials & Continua, Vol.87, No.1, 2026, DOI:10.32604/cmc.2025.072449 - 10 February 2026

    Abstract In recent years, with the rapid advancement of artificial intelligence, object detection algorithms have made significant strides in accuracy and computational efficiency. Notably, research and applications of Anchor-Free models have opened new avenues for real-time target detection in optical remote sensing images (ORSIs). However, in the realm of adversarial attacks, developing adversarial techniques tailored to Anchor-Free models remains challenging. Adversarial examples generated based on Anchor-Based models often exhibit poor transferability to these new model architectures. Furthermore, the growing diversity of Anchor-Free models poses additional hurdles to achieving robust transferability of adversarial attacks. This study presents…

  • Open Access

    ARTICLE

    Secured-FL: Blockchain-Based Defense against Adversarial Attacks on Federated Learning Models

    Bello Musa Yakubu1,*, Nor Shahida Mohd Jamail2, Rabia Latif2, Seemab Latif3

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.072426 - 12 January 2026

    Abstract Federated Learning (FL) enables joint training over distributed devices without data exchange but is highly vulnerable to adversarial model poisoning and malicious update injection. This work proposes Secured-FL, a blockchain-based defensive framework that combines smart contract–based authentication, clustering-driven outlier elimination, and dynamic threshold adjustment to defend against adversarial attacks. The framework was implemented on a private Ethereum network with a Proof-of-Authority consensus algorithm to ensure tamper-resistant and auditable model updates. Large-scale simulation on the Cyber Data dataset, with up to 50% malicious clients, demonstrates that Secured-FL achieves 6%–12% higher accuracy…
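    The blockchain and smart-contract layers do not fit in a snippet, but the clustering-driven outlier elimination with a dynamic threshold can be sketched under assumptions: the median-plus-MAD distance rule below stands in for the paper's clustering step, and every name and constant is hypothetical.

    ```python
    # Sketch of outlier elimination for FL updates (assumed mechanics;
    # Secured-FL's actual clustering and threshold rule may differ).
    import numpy as np

    def robust_aggregate(updates: np.ndarray, z_max: float = 2.5):
        """updates: (n_clients, n_params). Drop outliers, average the rest."""
        center = np.median(updates, axis=0)            # robust center
        dists = np.linalg.norm(updates - center, axis=1)
        # Dynamic threshold: the cutoff scales with this round's natural
        # update spread, so it adapts as training progresses.
        mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
        keep = dists <= np.median(dists) + z_max * mad
        return updates[keep].mean(axis=0), keep

    rng = np.random.default_rng(0)
    honest = rng.normal(0.0, 0.01, size=(9, 4))
    poisoned = rng.normal(5.0, 0.01, size=(1, 4))      # one malicious client
    agg, kept = robust_aggregate(np.vstack([honest, poisoned]))
    print(kept)  # the poisoned client (last row) is flagged False
    ```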

  • Open Access

    REVIEW

    From Identification to Obfuscation: A Survey of Cross-Network Mapping and Anti-Mapping Methods

    Shaojie Min1, Yaxiao Luo1, Kebing Liu1, Qingyuan Gong2, Yang Chen1,*

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-23, 2026, DOI:10.32604/cmc.2025.073175 - 09 December 2025

    Abstract User identity linkage (UIL) across online social networks seeks to match accounts belonging to the same real-world individual. This cross-platform mapping enables accurate user modeling but also raises serious privacy risks. Over the past decade, the research community has developed a wide range of UIL methods, from structural embeddings to multimodal fusion architectures. However, corresponding adversarial and defensive approaches remain fragmented and comparatively understudied. In this survey, we provide a unified overview of both mapping and anti-mapping methods for UIL. We categorize representative mapping models by learning paradigm and data modality, and systematically compare them…

  • Open Access

    ARTICLE

    X-MalNet: A CNN-Based Malware Detection Model with Visual and Structural Interpretability

    Kirubavathi Ganapathiyappan1, Heba G. Mohamed2, Abhishek Yadav1, Guru Akshya Chinnaswamy1, Ateeq Ur Rehman3,*, Habib Hamam4,5,6,7

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-18, 2026, DOI:10.32604/cmc.2025.069951 - 09 December 2025

    Abstract The escalating complexity of modern malware continues to undermine the effectiveness of traditional signature-based detection techniques, which are often unable to adapt to rapidly evolving attack patterns. To address these challenges, this study proposes X-MalNet, a lightweight Convolutional Neural Network (CNN) framework designed for static malware classification through image-based representations of binary executables. By converting malware binaries into grayscale images, the model extracts distinctive structural and texture-level features that signify malicious intent, thereby eliminating the dependence on manual feature engineering or dynamic behavioral analysis. Built upon a modified AlexNet architecture, X-MalNet employs transfer learning to…
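    The binary-to-grayscale conversion described here is a well-established step: each byte of the executable becomes one pixel intensity. A minimal sketch follows; the fixed width of 256 and the zero-padding policy are assumptions, since X-MalNet's exact width schedule and resizing are behind the truncation.

    ```python
    # Sketch of the binary-to-image step (standard Malimg-style conversion;
    # X-MalNet's exact preprocessing may differ).
    import numpy as np

    def binary_to_grayscale(payload: bytes, width: int = 256) -> np.ndarray:
        """Interpret raw bytes as pixel intensities in a fixed-width image."""
        buf = np.frombuffer(payload, dtype=np.uint8)
        height = -(-len(buf) // width)                     # ceiling division
        padded = np.zeros(height * width, dtype=np.uint8)  # zero-pad last row
        padded[: len(buf)] = buf
        return padded.reshape(height, width)               # feed this to the CNN

    demo = bytes(range(256)) * 16      # stands in for a real executable's bytes
    print(binary_to_grayscale(demo).shape)                 # (16, 256)
    ```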

  • Open Access

    ARTICLE

    Gradient-Guided Assembly Instruction Relocation for Adversarial Attacks Against Binary Code Similarity Detection

    Ran Wei*, Hui Shu

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-23, 2026, DOI:10.32604/cmc.2025.069562 - 10 November 2025

    Abstract Transformer-based models have significantly advanced binary code similarity detection (BCSD) by leveraging their semantic encoding capabilities for efficient function matching across diverse compilation settings. Although adversarial examples can strategically undermine the accuracy of BCSD models and protect critical code, existing techniques predominantly depend on inserting artificial instructions, which incur high computational costs and offer limited diversity of perturbations. To address these limitations, we propose AIMA, a novel gradient-guided assembly instruction relocation method. Our method decouples the detection model into tokenization, embedding, and encoding layers to enable efficient gradient computation. Since token IDs of instructions are…
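    A hedged reading of the decoupling described here: because discrete token IDs are not differentiable, gradients are taken at the embedding layer and used to rank instructions as relocation candidates. The toy encoder, the cosine objective, and the scoring rule below are stand-ins, not AIMA's components.

    ```python
    # Sketch of embedding-level gradient scoring (assumed reading of the
    # tokenization/embedding/encoding decoupling; AIMA's layers differ).
    import torch

    vocab, dim, seq = 100, 16, 8
    embedding = torch.nn.Embedding(vocab, dim)
    encoder = torch.nn.Linear(dim, dim)    # stand-in for the Transformer body

    tokens = torch.randint(0, vocab, (seq,))          # one instruction sequence
    emb = embedding(tokens).detach().requires_grad_(True)
    func_vec = encoder(emb).mean(dim=0)               # function embedding
    # Push the function embedding away from a fixed reference direction.
    loss = -torch.nn.functional.cosine_similarity(func_vec, torch.ones(dim), dim=0)
    loss.backward()

    # Instructions with the largest embedding-gradient norm perturb the
    # similarity score most, so they are the best relocation candidates.
    print(torch.argsort(emb.grad.norm(dim=1), descending=True))
    ```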

  • Open Access

    ARTICLE

    A Novel Unsupervised Structural Attack and Defense for Graph Classification

    Yadong Wang1, Zhiwei Zhang1,*, Pengpeng Qiao2, Ye Yuan1, Guoren Wang1

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-22, 2026, DOI:10.32604/cmc.2025.068590 - 10 November 2025

    Abstract Graph Neural Networks (GNNs) have proven highly effective for graph classification across diverse fields such as social networks, bioinformatics, and finance, due to their capability to learn complex graph structures. However, despite their success, GNNs remain vulnerable to adversarial attacks that can significantly degrade their classification accuracy. Existing adversarial attack strategies primarily rely on label information to guide the attacks, which limits their applicability in scenarios where such information is scarce or unavailable. This paper introduces an innovative unsupervised attack method for graph classification, which operates without relying on label information, thereby enhancing its applicability…
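    Since no labels are available, the attack signal must come from the model's own representations. One plausible label-free surrogate, sketched below purely as an assumption about how such an objective could look, is to flip whichever edge moves the graph embedding farthest from its clean position; the tiny untrained encoder is hypothetical.

    ```python
    # Sketch of a label-free structural attack objective (assumed surrogate,
    # not the paper's algorithm).
    import numpy as np

    def graph_embedding(adj: np.ndarray, feats: np.ndarray) -> np.ndarray:
        """One propagation step plus mean pooling: a tiny untrained GNN."""
        deg = adj.sum(axis=1, keepdims=True) + 1e-12
        return ((adj / deg) @ feats).mean(axis=0)

    def best_edge_flip(adj, feats):
        clean = graph_embedding(adj, feats)
        best, best_shift = None, -1.0
        for i in range(adj.shape[0]):
            for j in range(i + 1, adj.shape[0]):
                pert = adj.copy()
                pert[i, j] = pert[j, i] = 1 - pert[i, j]   # flip edge (i, j)
                shift = np.linalg.norm(graph_embedding(pert, feats) - clean)
                if shift > best_shift:
                    best, best_shift = (i, j), shift
        return best, best_shift

    adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
    print(best_edge_flip(adj, np.eye(3)))  # most damaging single-edge flip
    ```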

  • Open Access

    ARTICLE

    AMA: Adaptive Multimodal Adversarial Attack with Dynamic Perturbation Optimization

    Yufei Shi, Ziwen He*, Teng Jin, Haochen Tong, Zhangjie Fu

    CMES-Computer Modeling in Engineering & Sciences, Vol.144, No.2, pp. 1831-1848, 2025, DOI:10.32604/cmes.2025.067658 - 31 August 2025

    Abstract This article proposes AMA (Adaptive Multimodal Attack), an adversarial attack method that introduces an adaptive feedback mechanism through dynamic adjustment of perturbation strength. Specifically, AMA adjusts the perturbation amplitude according to task complexity and optimizes the perturbation direction in real time along the gradient to improve attack efficiency. Experimental results demonstrate that AMA raises attack success rates from approximately 78.95% to 89.56% on visual question answering and from 78.82% to 84.96% on visual reasoning tasks across representative vision-language benchmarks. These findings demonstrate AMA’s superior attack efficiency and reveal the vulnerability of current visual…
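    The adaptive feedback loop can be sketched as a projected-gradient attack whose step size reacts to the observed loss progress. Note the hedges: the abstract ties amplitude to task complexity, while the sketch uses loss progress as a simpler stand-in signal, and the toy model, objective, and schedule constants are all assumptions.

    ```python
    # Sketch of an adaptive-step-size attack loop (assumed form; AMA's
    # actual amplitude rule and multimodal objective are not shown here).
    import torch

    def adaptive_attack(model, x, steps=10, eps=8 / 255, alpha=1 / 255):
        delta = torch.zeros_like(x, requires_grad=True)
        prev_loss = None
        for _ in range(steps):
            loss = model(x + delta).norm()      # stand-in attack objective
            loss.backward()
            with torch.no_grad():
                # Feedback: shrink the step when progress stalls, grow it
                # while the objective is still climbing.
                if prev_loss is not None:
                    alpha *= 0.75 if loss <= prev_loss + 1e-4 else 1.25
                delta += alpha * delta.grad.sign()   # gradient-direction step
                delta.clamp_(-eps, eps)              # stay within the budget
                prev_loss = loss.detach()
            delta.grad.zero_()
        return (x + delta).detach()

    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 4))
    print(adaptive_attack(model, torch.rand(1, 3, 8, 8)).shape)
    ```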

  • Open Access

    ARTICLE

    DSGNN: Dual-Shield Defense for Robust Graph Neural Networks

    Xiaohan Chen1, Yuanfang Chen1,*, Gyu Myoung Lee2, Noel Crespi3, Pierluigi Siano4

    CMC-Computers, Materials & Continua, Vol.85, No.1, pp. 1733-1750, 2025, DOI:10.32604/cmc.2025.067284 - 29 August 2025

    Abstract Graph Neural Networks (GNNs) have demonstrated outstanding capabilities in processing graph-structured data and are increasingly being integrated into large-scale pre-trained models, such as Large Language Models (LLMs), to enhance structural reasoning, knowledge retrieval, and memory management. The expansion of their application scope imposes higher requirements on the robustness of GNNs. However, as GNNs are applied to more dynamic and heterogeneous environments, they become increasingly vulnerable to real-world perturbations. In particular, graph data frequently encounters joint adversarial perturbations that simultaneously affect both structures and features, which are significantly more challenging than isolated attacks. These disruptions, caused…

Displaying results 1–10 of 45 (page 1 of 5).