Search Results (28)
  • Open Access

    REVIEW

    Cybersecurity Opportunities and Risks of Artificial Intelligence in Industrial Control Systems: A Survey

    Ka-Kyung Kim, Joon-Seok Kim, Dong-Hyuk Shin, Ieck-Chae Euom*

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.2, 2026, DOI:10.32604/cmes.2026.077315 - 26 February 2026

    Abstract As attack techniques evolve and data volumes increase, the integration of artificial intelligence-based security solutions into industrial control systems has become increasingly essential. Artificial intelligence holds significant potential to improve the operational efficiency and cybersecurity of these systems. However, its dependence on cyber-based infrastructures expands the attack surface and introduces the risk that adversarial manipulations of artificial intelligence models may cause physical harm. To address these concerns, this study presents a comprehensive review of artificial intelligence-driven threat detection methods and adversarial attacks targeting artificial intelligence within industrial control environments, examining both their benefits and associated…

  • Open Access

    ARTICLE

    AdvYOLO: An Improved Cross-Conv-Block Feature Fusion-Based YOLO Network for Transferable Adversarial Attacks on ORSIs Object Detection

    Leyu Dai1,2,3, Jindong Wang1,2,3, Ming Zhou1,2,3, Song Guo1,2,3, Hengwei Zhang1,2,3,*

    CMC-Computers, Materials & Continua, Vol.87, No.1, 2026, DOI:10.32604/cmc.2025.072449 - 10 February 2026

    Abstract In recent years, with the rapid advancement of artificial intelligence, object detection algorithms have made significant strides in accuracy and computational efficiency. Notably, research and applications of Anchor-Free models have opened new avenues for real-time target detection in optical remote sensing images (ORSIs). However, in the realm of adversarial attacks, developing adversarial techniques tailored to Anchor-Free models remains challenging. Adversarial examples generated based on Anchor-Based models often exhibit poor transferability to these new model architectures. Furthermore, the growing diversity of Anchor-Free models poses additional hurdles to achieving robust transferability of adversarial attacks. This study presents…

  • Open Access

    ARTICLE

    Secured-FL: Blockchain-Based Defense against Adversarial Attacks on Federated Learning Models

    Bello Musa Yakubu1,*, Nor Shahida Mohd Jamail2, Rabia Latif2, Seemab Latif3

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.072426 - 12 January 2026

    Abstract Federated Learning (FL) enables joint training across distributed devices without data exchange but is highly vulnerable to adversarial attacks in the form of model poisoning and malicious update injection. This work proposes Secured-FL, a blockchain-based defensive framework that combines smart contract–based authentication, clustering-driven outlier elimination, and dynamic threshold adjustment to defend against adversarial attacks. The framework was implemented on a private Ethereum network with a Proof-of-Authority consensus algorithm to ensure tamper-resistant and auditable model updates. Large-scale simulation on the Cyber Data dataset, with up to 50% malicious clients, demonstrates that Secured-FL achieves 6%–12% higher accuracy…
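    The clustering-driven outlier elimination described in this abstract can be sketched in a few lines: cluster (here, center) the client updates, measure each update's distance from a robust center, and drop the outliers before aggregating. This is a minimal illustration under assumed names (`filter_updates`, `z_thresh`), not the authors' Secured-FL implementation, which additionally involves blockchain authentication and dynamic thresholds.

    ```python
    import numpy as np

    def filter_updates(updates, z_thresh=2.0):
        """Drop client updates whose distance from the coordinate-wise
        median update is an outlier (z-score above z_thresh), then
        average the surviving updates."""
        updates = np.asarray(updates, dtype=float)   # shape: (clients, params)
        center = np.median(updates, axis=0)          # robust central update
        dists = np.linalg.norm(updates - center, axis=1)
        z = (dists - dists.mean()) / (dists.std() + 1e-12)
        keep = z < z_thresh                          # flag far-away updates
        return updates[keep].mean(axis=0), keep      # aggregate survivors

    # Nine honest clients near [1, 1], one poisoned update far away
    honest = np.random.default_rng(0).normal([1.0, 1.0], 0.05, size=(9, 2))
    poisoned = np.array([[50.0, -50.0]])
    agg, kept = filter_updates(np.vstack([honest, poisoned]))
    ```

    In this toy run the poisoned update is flagged and excluded, so the aggregate stays close to the honest clients' direction.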

  • Open Access

    REVIEW

    From Identification to Obfuscation: A Survey of Cross-Network Mapping and Anti-Mapping Methods

    Shaojie Min1, Yaxiao Luo1, Kebing Liu1, Qingyuan Gong2, Yang Chen1,*

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-23, 2026, DOI:10.32604/cmc.2025.073175 - 09 December 2025

    Abstract User identity linkage (UIL) across online social networks seeks to match accounts belonging to the same real-world individual. This cross-platform mapping enables accurate user modeling but also raises serious privacy risks. Over the past decade, the research community has developed a wide range of UIL methods, from structural embeddings to multimodal fusion architectures. However, corresponding adversarial and defensive approaches remain fragmented and comparatively understudied. In this survey, we provide a unified overview of both mapping and anti-mapping methods for UIL. We categorize representative mapping models by learning paradigm and data modality, and systematically compare them…

  • Open Access

    ARTICLE

    Gradient-Guided Assembly Instruction Relocation for Adversarial Attacks Against Binary Code Similarity Detection

    Ran Wei*, Hui Shu

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-23, 2026, DOI:10.32604/cmc.2025.069562 - 10 November 2025

    Abstract Transformer-based models have significantly advanced binary code similarity detection (BCSD) by leveraging their semantic encoding capabilities for efficient function matching across diverse compilation settings. Although adversarial examples can strategically undermine the accuracy of BCSD models and protect critical code, existing techniques predominantly depend on inserting artificial instructions, which incur high computational costs and offer limited diversity of perturbations. To address these limitations, we propose AIMA, a novel gradient-guided assembly instruction relocation method. Our method decouples the detection model into tokenization, embedding, and encoding layers to enable efficient gradient computation. Since token IDs of instructions are…

  • Open Access

    ARTICLE

    DSGNN: Dual-Shield Defense for Robust Graph Neural Networks

    Xiaohan Chen1, Yuanfang Chen1,*, Gyu Myoung Lee2, Noel Crespi3, Pierluigi Siano4

    CMC-Computers, Materials & Continua, Vol.85, No.1, pp. 1733-1750, 2025, DOI:10.32604/cmc.2025.067284 - 29 August 2025

    Abstract Graph Neural Networks (GNNs) have demonstrated outstanding capabilities in processing graph-structured data and are increasingly being integrated into large-scale pre-trained models, such as Large Language Models (LLMs), to enhance structural reasoning, knowledge retrieval, and memory management. The expansion of their application scope imposes higher requirements on the robustness of GNNs. However, as GNNs are applied to more dynamic and heterogeneous environments, they become increasingly vulnerable to real-world perturbations. In particular, graph data frequently encounters joint adversarial perturbations that simultaneously affect both structures and features, which are significantly more challenging than isolated attacks. These disruptions, caused…

  • Open Access

    ARTICLE

    Mitigating Adversarial Attack through Randomization Techniques and Image Smoothing

    Hyeong-Gyeong Kim1, Sang-Min Choi2, Hyeon Seo2, Suwon Lee2,*

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 4381-4397, 2025, DOI:10.32604/cmc.2025.067024 - 30 July 2025

    Abstract Adversarial attacks pose a significant threat to artificial intelligence systems by exploiting vulnerabilities in deep learning models. Existing defense mechanisms often suffer from drawbacks such as the need for model retraining, significant inference-time overhead, and limited effectiveness against specific attack types. Achieving perfect defense against adversarial attacks remains elusive, emphasizing the importance of mitigation strategies. In this study, we propose a defense mechanism that applies random cropping and Gaussian filtering to input images to mitigate the impact of adversarial attacks. First, the image was randomly cropped to vary its dimensions and then placed…
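    The two preprocessing steps named in this abstract, random cropping followed by Gaussian smoothing, can be sketched as below. This is a minimal single-channel illustration with assumed names (`defend`, `crop_frac`) and a hand-rolled 5x5 Gaussian kernel; it is not the authors' pipeline, which also varies image dimensions and placement.

    ```python
    import numpy as np

    def gaussian_kernel(size=5, sigma=1.0):
        """Normalized 2D Gaussian kernel."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return k / k.sum()

    def defend(image, rng, crop_frac=0.9, sigma=1.0):
        """Randomly crop the image, then apply Gaussian smoothing.
        Both steps aim to disrupt pixel-aligned adversarial noise."""
        h, w = image.shape
        ch, cw = int(h * crop_frac), int(w * crop_frac)
        top = rng.integers(0, h - ch + 1)
        left = rng.integers(0, w - cw + 1)
        crop = image[top:top + ch, left:left + cw]
        k = gaussian_kernel(5, sigma)
        pad = np.pad(crop, 2, mode="edge")          # replicate borders
        out = np.zeros_like(crop, dtype=float)
        for i in range(crop.shape[0]):              # naive 2D convolution
            for j in range(crop.shape[1]):
                out[i, j] = (pad[i:i + 5, j:j + 5] * k).sum()
        return out

    rng = np.random.default_rng(1)
    img = rng.random((32, 32))                      # stand-in for an input image
    smoothed = defend(img, rng)
    ```

    Because the crop offset is drawn at random per input, an attacker cannot rely on a fixed pixel alignment, and the smoothing attenuates high-frequency perturbations.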

  • Open Access

    ARTICLE

    DEMGAN: A Machine Learning-Based Intrusion Detection System Evasion Scheme

    Dawei Xu1,2,3, Yue Lv1, Min Wang1, Baokun Zheng4,*, Jian Zhao1,3, Jiaxuan Yu5

    CMC-Computers, Materials & Continua, Vol.84, No.1, pp. 1731-1746, 2025, DOI:10.32604/cmc.2025.064833 - 09 June 2025

    Abstract Network intrusion detection systems (IDS) are a prevalent method for safeguarding network traffic against attacks. However, existing IDS primarily depend on machine learning (ML) models, which are vulnerable to evasion through adversarial examples. In recent years, the Wasserstein Generative Adversarial Network (WGAN), based on Wasserstein distance, has been extensively utilized to generate adversarial examples. Nevertheless, several challenges persist: (1) WGAN experiences the mode collapse problem when generating multi-category network traffic data, leading to subpar quality and insufficient diversity in the generated data; (2) Due to unstable training processes, the authenticity of the data produced by…

  • Open Access

    ARTICLE

    Improving Security-Sensitive Deep Learning Models through Adversarial Training and Hybrid Defense Mechanisms

    Xuezhi Wen1, Eric Danso2,*, Solomon Danso2

    Journal of Cyber Security, Vol.7, pp. 45-69, 2025, DOI:10.32604/jcs.2025.063606 - 08 May 2025

    Abstract Deep learning models have achieved remarkable success in healthcare, finance, and autonomous systems, yet their security vulnerabilities to adversarial attacks remain a critical challenge. This paper presents a novel dual-phase defense framework that combines progressive adversarial training with dynamic runtime protection to address evolving threats. Our approach introduces three key innovations: multi-stage adversarial training with TRADES (Tradeoff-inspired Adversarial Defense via Surrogate-loss minimization) loss that progressively scales perturbation strength, maintaining 85.10% clean accuracy on CIFAR-10 (Canadian Institute for Advanced Research 10-class dataset) while improving robustness; a hybrid runtime defense integrating feature manipulation, statistical anomaly detection, and…
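    The "progressively scales perturbation strength" idea in this abstract can be sketched as an epsilon schedule paired with a one-step attack used to craft training examples. The schedule shape, function names (`epsilon_schedule`, `fgsm`), and parameters are assumptions for illustration; the paper's multi-stage TRADES training is more involved.

    ```python
    import numpy as np

    def epsilon_schedule(epoch, total_epochs, eps_max=8 / 255, warmup_frac=0.5):
        """Linearly ramp the perturbation budget over the first half of
        training, then hold it at eps_max (progressive adversarial training)."""
        warmup = int(total_epochs * warmup_frac)
        if epoch >= warmup:
            return eps_max
        return eps_max * (epoch + 1) / warmup

    def fgsm(x, grad, eps):
        """One-step FGSM perturbation: move each pixel eps along the sign
        of the loss gradient, clipped to the valid [0, 1] pixel range."""
        return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

    # Budget per epoch: small perturbations early, full strength later
    eps_by_epoch = [epsilon_schedule(e, 100) for e in range(100)]
    ```

    Starting with weak perturbations and growing them lets the model keep clean accuracy early while robustness is built up gradually.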

  • Open Access

    ARTICLE

    Enhancing Adversarial Example Transferability via Regularized Constrained Feature Layer

    Xiaoyin Yi1,2, Long Chen1,3,4,*, Jiacheng Huang1, Ning Yu1, Qian Huang5

    CMC-Computers, Materials & Continua, Vol.83, No.1, pp. 157-175, 2025, DOI:10.32604/cmc.2025.059863 - 26 March 2025

    Abstract Transfer-based Adversarial Attacks (TAAs) can deceive a victim model even without prior knowledge of it. This is achieved by exploiting a property of adversarial examples: when generated on a surrogate model, they often remain effective against other models owing to their transferability. However, adversarial examples often exhibit overfitting, as they are tailored to exploit the particular architecture and feature representation of source models. Consequently, when attempting black-box transfer attacks on different target models, their effectiveness is decreased. To solve this problem, this study proposes an approach based on a Regularized Constrained Feature…

Displaying results 1–10 of 28 (page 1).