Search Results (20)
  • Open Access

    ARTICLE

    From Hardening to Understanding: Adversarial Training vs. CF-Aug for Explainable Cyber-Threat Detection System

    Malik Al-Essa, Mohammad Qatawneh, Ahmad Sami Al-Shamayleh, Orieb Abualghanam, Wesam Almobaideen

    CMC-Computers, Materials & Continua, Vol.87, No.3, 2026, DOI:10.32604/cmc.2026.076608 - 09 April 2026

    Abstract Machine Learning (ML) intrusion detection systems (IDS) are vulnerable to manipulation: small, protocol-valid perturbations can push samples across brittle decision boundaries. We study two complementary remedies that reshape the learner in distinct ways. Adversarial Training (AT) exposes the model to worst-case, in-threat perturbations during learning to thicken local margins; Counterfactual Augmentation (CF-Aug) adds near-boundary exemplars that are explicitly constrained to be feasible, causally consistent, and operationally meaningful for defenders. The main goal of this work is to investigate and compare how AT and CF-Aug can reshape the decision surface of the IDS. eXplainable Artificial Intelligence…
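The inner maximization that Adversarial Training performs can be illustrated with a minimal Projected Gradient Descent (PGD) sketch. Everything below (the linear model, its weights, and the budget `eps`) is made up for illustration and is not taken from the paper:

```python
import numpy as np

# Hypothetical linear binary classifier: score = w . x + b, label = sign(score).
# Weights and data are illustrative only.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def pgd_perturb(x, y, eps=0.3, alpha=0.1, steps=5):
    """Projected Gradient Descent inside an L-inf ball of radius eps.

    For a linear model the gradient of the hinge-style loss w.r.t. x is
    constant (-y * w), so each step ascends the loss and is then clipped
    back into the eps-ball around the original x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        grad = -y * w                               # d(loss)/dx for max(0, 1 - y*(w.x+b))
        x_adv = x_adv + alpha * np.sign(grad)       # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project into the L-inf ball
    return x_adv

x = np.array([0.5, 0.2, -0.1])
y = 1  # true label in {-1, +1}
x_adv = pgd_perturb(x, y)
margin = lambda v: y * (w @ v + b)
# The perturbed point stays within eps of the clean one but has a worse margin.
print(margin(x), margin(x_adv))
```

Adversarial training then fits the model on `x_adv` rather than `x`, which is what "thickens" the local margin around training points.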

  • Open Access

    ARTICLE

    Adversarial Attack Defense in Graph Neural Networks via Multiview Learning and Attention-Guided Topology Filtering

    Cheng Yang, Xianghong Tang, Jianguang Lu, Chaobin Wang

    CMC-Computers, Materials & Continua, Vol.87, No.3, 2026, DOI:10.32604/cmc.2026.076126 - 09 April 2026

    Abstract Graph neural networks (GNNs) have demonstrated impressive capabilities in processing graph-structured data, yet their vulnerability to adversarial perturbations poses serious challenges to real-world applications. Existing defense methods often fail to handle diverse types of attacks and adapt to dynamic adversarial strategies because they typically rely on static defense mechanisms or focus narrowly on a single robustness dimension. To address these limitations, we propose an adversarial attention-based robustness strategy (AARS), which is a unified framework designed to enhance the robustness of GNNs against structural and feature perturbations. AARS operates in two stages: the first stage employs…

  • Open Access

    REVIEW

    A State-of-the-Art Survey of Adversarial Reinforcement Learning for IoT Intrusion Detection

    Qasem Abu Al-Haija, Shahad Al Tamimi

    CMC-Computers, Materials & Continua, Vol.87, No.1, 2026, DOI:10.32604/cmc.2025.073540 - 10 February 2026

    Abstract Adversarial Reinforcement Learning (ARL) models for intelligent devices and Network Intrusion Detection Systems (NIDS) improve system resilience against sophisticated cyber-attacks. As a core component of ARL, Adversarial Training (AT) enables NIDS agents to discover and prevent new attack paths by exposing them to competing examples, thereby increasing detection accuracy, reducing False Positives (FPs), and enhancing network security. To develop robust decision-making capabilities for real-world network disruptions and hostile activity, NIDS agents are trained in adversarial scenarios to monitor the current state and notify management of any abnormal or malicious activity. The accuracy and timeliness of…

  • Open Access

    ARTICLE

    Robust Recommendation Adversarial Training Based on Self-Purification Data Sanitization

    Haiyan Long, Gang Chen, Hai Chen

    CMC-Computers, Materials & Continua, Vol.87, No.1, 2026, DOI:10.32604/cmc.2025.073243 - 10 February 2026

    Abstract The performance of deep recommendation models degrades significantly under data poisoning attacks. While adversarial training methods such as Vulnerability-Aware Training (VAT) enhance robustness by injecting perturbations into embeddings, they remain limited by coarse-grained noise and a static defense strategy, leaving models susceptible to adaptive attacks. This study proposes a novel framework, Self-Purification Data Sanitization (SPD), which integrates vulnerability-aware adversarial training with dynamic label correction. Specifically, SPD first identifies high-risk users through a fragility scoring mechanism, then applies self-purification by replacing suspicious interactions with model-predicted high-confidence labels during training. This closed-loop process continuously sanitizes the training…

  • Open Access

    ARTICLE

    Mitigating Attribute Inference in Split Learning via Channel Pruning and Adversarial Training

    Afnan Alhindi, Saad Al-Ahmadi, Mohamed Maher Ben Ismail

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.072625 - 12 January 2026

    Abstract Split Learning (SL) has been promoted as a promising collaborative machine learning technique designed to address data privacy and resource efficiency. Specifically, neural networks are divided into client and server sub-networks in order to mitigate the exposure of sensitive data and reduce the overhead on client devices, thereby making SL particularly suitable for resource-constrained devices. Although SL prevents the direct transmission of raw data, it does not entirely eliminate the risk of privacy breaches. In fact, the intermediate data transmitted to the server sub-model may include patterns or information that could reveal sensitive data. Moreover,…

  • Open Access

    ARTICLE

    HI-XDR: Hybrid Intelligent Framework for Adversarial-Resilient Anomaly Detection and Adaptive Cyber Response

    Abd Rahman Wahid

    Journal of Cyber Security, Vol.7, pp. 589-614, 2025, DOI:10.32604/jcs.2025.071622 - 11 December 2025

    Abstract The rapid increase in cyber attacks requires accurate, adaptive, and interpretable detection and response mechanisms. Conventional security solutions remain fragmented, leaving gaps that attackers can exploit. This study introduces the HI-XDR (Hybrid Intelligent Extended Detection and Response) framework, which combines network-based Suricata rules and endpoint-based Wazuh rules into a unified dataset containing 45,705 entries encoded into 1058 features. A semantic-aware autoencoder-based anomaly detection module is trained and strengthened through adversarial learning using Projected Gradient Descent, achieving a minimum mean squared error of 0.0015 and detecting 458 anomaly rules at the 99th percentile threshold. A comparative…
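The percentile-thresholded reconstruction-error rule this abstract describes can be sketched with a toy linear "autoencoder" (PCA components acting as encoder/decoder) in place of the paper's semantic-aware model; the data, dimensions, and latent size below are synthetic assumptions, not the HI-XDR dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: "normal" feature vectors lie near a low-dimensional
# subspace; anomalies do not. (Synthetic data for illustration only.)
basis = rng.normal(size=(3, 20))                                  # 3 latent directions in 20-d
normal = rng.normal(size=(500, 3)) @ basis + 0.01 * rng.normal(size=(500, 20))
anomaly = rng.normal(size=(5, 20))                                # off-subspace points

# "Train" a linear autoencoder: top principal components encode/decode.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:3]                                               # keep 3 components

def reconstruction_mse(x):
    z = (x - mean) @ components.T          # encode
    x_hat = z @ components + mean          # decode
    return np.mean((x - x_hat) ** 2, axis=-1)

# Threshold at the 99th percentile of training reconstruction error,
# mirroring the percentile rule described in the abstract.
threshold = np.percentile(reconstruction_mse(normal), 99)
flags = reconstruction_mse(anomaly) > threshold
print(flags)   # off-subspace points should exceed the threshold
```

The same pattern applies with a nonlinear autoencoder: only the encode/decode step changes, while the percentile thresholding of reconstruction error stays identical.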

  • Open Access

    ARTICLE

    Domain-Specific NER for Fluorinated Materials: A Hybrid Approach with Adversarial Training and Dynamic Contextual Embeddings

    Jiming Lan, Hongwei Fu, Yadong Wu, Yaxian Liu, Jianhua Dong, Wei Liu, Huaqiang Chen

    CMC-Computers, Materials & Continua, Vol.85, No.3, pp. 4645-4665, 2025, DOI:10.32604/cmc.2025.067289 - 23 October 2025

    Abstract In the research and production of fluorinated materials, large volumes of unstructured textual data are generated, characterized by high heterogeneity and fragmentation. These issues hinder systematic knowledge integration and efficient utilization. Constructing a knowledge graph for fluorinated materials processing is essential for enabling structured knowledge management and intelligent applications. Among its core components, Named Entity Recognition (NER) plays an essential role, as its accuracy directly impacts relation extraction and semantic modeling, which ultimately affects the knowledge graph construction for fluorinated materials. However, NER in this domain faces challenges such as fuzzy entity boundaries, inconsistent terminology,…

  • Open Access

    ARTICLE

    Deepfake Detection Using Adversarial Neural Network

    Priyadharsini Selvaraj, Senthil Kumar Jagatheesaperumal, Karthiga Marimuthu, Oviya Saravanan, Bader Fahad Alkhamees, Mohammad Mehedi Hassan

    CMES-Computer Modeling in Engineering & Sciences, Vol.143, No.2, pp. 1575-1594, 2025, DOI:10.32604/cmes.2025.064138 - 30 May 2025

    Abstract With expeditious advancements in AI-driven facial manipulation techniques, particularly deepfake technology, there is growing concern over its potential misuse. Deepfakes pose a significant threat to society, particularly by infringing on individuals’ privacy. Amid significant endeavors to fabricate systems for identifying deepfake fabrications, existing methodologies often face hurdles in adjusting to innovative forgery techniques and demonstrate increased vulnerability to image and video clarity variations, thereby hindering their broad applicability to images and videos produced by unfamiliar technologies. In this manuscript, we endorse resilient training tactics to amplify generalization capabilities. In adversarial training, models are trained using…

  • Open Access

    ARTICLE

    Improving Security-Sensitive Deep Learning Models through Adversarial Training and Hybrid Defense Mechanisms

    Xuezhi Wen, Eric Danso, Solomon Danso

    Journal of Cyber Security, Vol.7, pp. 45-69, 2025, DOI:10.32604/jcs.2025.063606 - 08 May 2025

    Abstract Deep learning models have achieved remarkable success in healthcare, finance, and autonomous systems, yet their security vulnerabilities to adversarial attacks remain a critical challenge. This paper presents a novel dual-phase defense framework that combines progressive adversarial training with dynamic runtime protection to address evolving threats. Our approach introduces three key innovations: multi-stage adversarial training with TRADES (Tradeoff-inspired Adversarial Defense via Surrogate-loss minimization) loss that progressively scales perturbation strength, maintaining 85.10% clean accuracy on CIFAR-10 (Canadian Institute for Advanced Research 10-class dataset) while improving robustness; a hybrid runtime defense integrating feature manipulation, statistical anomaly detection, and…
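The TRADES loss named in this abstract trades clean accuracy against robustness by adding a KL-divergence term between the clean and adversarial predictive distributions. A toy sketch for a linear softmax classifier, with made-up weights, inputs, and beta (not the paper's settings):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative 2-class linear model on 2-d input; weights are arbitrary.
W = np.array([[1.0, -1.0],
              [-0.5, 2.0]])

def trades_loss(x, x_adv, y, beta=6.0):
    """Natural cross-entropy plus a beta-weighted KL divergence between
    the clean and adversarial predictive distributions (the TRADES surrogate)."""
    p_clean = softmax(W @ x)
    p_adv = softmax(W @ x_adv)
    ce = -np.log(p_clean[y])                          # accuracy term
    kl = np.sum(p_clean * np.log(p_clean / p_adv))    # robustness term
    return ce + beta * kl

x = np.array([0.8, -0.3])
loss_clean = trades_loss(x, x, y=0)        # no perturbation: the KL term vanishes
loss_adv = trades_loss(x, x + 0.2, y=0)    # a perturbed copy activates the KL term
print(loss_clean, loss_adv)
```

"Progressively scaling perturbation strength", as the abstract describes, would correspond to growing the budget used to produce `x_adv` over the course of training while keeping this objective fixed.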

  • Open Access

    ARTICLE

    Hybrid Memory-Enhanced Autoencoder with Adversarial Training for Anomaly Detection in Virtual Power Plants

    Yuqiao Liu, Chen Pan, YeonJae Oh, Chang Gyoon Lim

    CMC-Computers, Materials & Continua, Vol.82, No.3, pp. 4593-4629, 2025, DOI:10.32604/cmc.2025.061196 - 06 March 2025

    Abstract Virtual Power Plants (VPPs) are integral to modern energy systems, providing stability and reliability in the face of the inherent complexities and fluctuations of solar power data. Traditional anomaly detection methodologies often fail to adequately handle these fluctuations from solar radiation and ambient temperature variations. We introduce the Memory-Enhanced Autoencoder with Adversarial Training (MemAAE) model to overcome these limitations, designed explicitly for robust anomaly detection in VPP environments. The MemAAE model integrates three principal components: an LSTM-based autoencoder that effectively captures temporal dynamics to distinguish between normal and anomalous behaviors, an adversarial training module that…

Displaying results 1-10 of 20 (page 1 of 2).