Search Results (18)
  • Open Access

    REVIEW

    A State-of-the-Art Survey of Adversarial Reinforcement Learning for IoT Intrusion Detection

    Qasem Abu Al-Haija1,*, Shahad Al Tamimi2

    CMC-Computers, Materials & Continua, Vol.87, No.1, 2026, DOI:10.32604/cmc.2025.073540 - 10 February 2026

    Abstract Adversarial Reinforcement Learning (ARL) models for intelligent devices and Network Intrusion Detection Systems (NIDS) improve system resilience against sophisticated cyber-attacks. As a core component of ARL, Adversarial Training (AT) enables NIDS agents to discover and prevent new attack paths by exposing them to adversarial examples, thereby increasing detection accuracy, reducing False Positives (FPs), and enhancing network security. To develop robust decision-making capabilities for real-world network disruptions and hostile activity, NIDS agents are trained in adversarial scenarios to monitor the current state and notify management of any abnormal or malicious activity. The accuracy and timeliness of…

  • Open Access

    ARTICLE

    Robust Recommendation Adversarial Training Based on Self-Purification Data Sanitization

    Haiyan Long1, Gang Chen2,*, Hai Chen3,*

    CMC-Computers, Materials & Continua, Vol.87, No.1, 2026, DOI:10.32604/cmc.2025.073243 - 10 February 2026

    Abstract The performance of deep recommendation models degrades significantly under data poisoning attacks. While adversarial training methods such as Vulnerability-Aware Training (VAT) enhance robustness by injecting perturbations into embeddings, they remain limited by coarse-grained noise and a static defense strategy, leaving models susceptible to adaptive attacks. This study proposes a novel framework, Self-Purification Data Sanitization (SPD), which integrates vulnerability-aware adversarial training with dynamic label correction. Specifically, SPD first identifies high-risk users through a fragility scoring mechanism, then applies self-purification by replacing suspicious interactions with model-predicted high-confidence labels during training. This closed-loop process continuously sanitizes the training…
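The self-purification step described in this abstract can be sketched in a few lines. This is a hedged illustration of the general idea only: the function name `self_purify`, the confidence threshold, and the plain probability matrix are assumptions, and the paper's fragility-scoring mechanism for selecting high-risk users is omitted.

```python
import numpy as np

def self_purify(labels, probs, conf_threshold=0.9):
    """Replace suspicious labels with model-predicted high-confidence ones.

    labels: observed interaction labels, shape (n,)
    probs:  model-predicted class probabilities, shape (n, k)
    An entry is 'suspicious' when the model confidently predicts a
    different class than the observed label; only those are corrected.
    """
    pred = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= conf_threshold
    suspicious = confident & (pred != labels)
    purified = labels.copy()
    purified[suspicious] = pred[suspicious]
    return purified

# Toy example: three interactions, two classes.
labels = np.array([0, 1, 0])
probs = np.array([
    [0.95, 0.05],   # agrees with label 0 -> kept
    [0.97, 0.03],   # confidently predicts 0, label is 1 -> corrected
    [0.60, 0.40],   # disagrees but not confident -> kept
])
purified = self_purify(labels, probs)
```

Running this loop each epoch, on predictions from the partially trained model, is what makes the process "closed-loop": the model sanitizes the data it is trained on.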

  • Open Access

    ARTICLE

    Mitigating Attribute Inference in Split Learning via Channel Pruning and Adversarial Training

    Afnan Alhindi*, Saad Al-Ahmadi, Mohamed Maher Ben Ismail

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.072625 - 12 January 2026

    Abstract Split Learning (SL) has been promoted as a promising collaborative machine learning technique designed to address data privacy and resource efficiency. Specifically, neural networks are divided into client and server sub-networks in order to mitigate the exposure of sensitive data and reduce the overhead on client devices, thereby making SL particularly suitable for resource-constrained devices. Although SL prevents the direct transmission of raw data, it does not entirely eliminate the risk of privacy breaches. In fact, the intermediate data transmitted to the server sub-model may include patterns or information that could reveal sensitive data. Moreover,…

  • Open Access

    ARTICLE

    HI-XDR: Hybrid Intelligent Framework for Adversarial-Resilient Anomaly Detection and Adaptive Cyber Response

    Abd Rahman Wahid*

    Journal of Cyber Security, Vol.7, pp. 589-614, 2025, DOI:10.32604/jcs.2025.071622 - 11 December 2025

    Abstract The rapid increase in cyber attacks requires accurate, adaptive, and interpretable detection and response mechanisms. Conventional security solutions remain fragmented, leaving gaps that attackers can exploit. This study introduces the HI-XDR (Hybrid Intelligent Extended Detection and Response) framework, which combines network-based Suricata rules and endpoint-based Wazuh rules into a unified dataset containing 45,705 entries encoded into 1058 features. A semantic-aware autoencoder-based anomaly detection module is trained and strengthened through adversarial learning using Projected Gradient Descent, achieving a minimum mean squared error of 0.0015 and detecting 458 anomaly rules at the 99th percentile threshold. A comparative…
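The PGD-based adversarial hardening mentioned in this abstract can be illustrated on a toy model. The linear autoencoder, dimensions, and step sizes below are assumptions for the sketch (the paper's semantic-aware autoencoder is far larger), but the attack loop is the standard Projected Gradient Descent recipe: repeatedly step along the sign of the input gradient, then project back into an l-infinity ball around the clean input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear autoencoder standing in for the paper's model (assumption):
# encode 4-d inputs to 2-d and decode back.
W_enc = rng.normal(scale=0.5, size=(2, 4))
W_dec = rng.normal(scale=0.5, size=(4, 2))

def recon_loss(x):
    """Mean squared reconstruction error."""
    x_hat = W_dec @ (W_enc @ x)
    return float(np.mean((x - x_hat) ** 2))

def recon_grad(x):
    """Analytic input gradient of the MSE loss (A = I - decoder @ encoder)."""
    A = np.eye(4) - W_dec @ W_enc
    return (2.0 / 4.0) * A.T @ (A @ x)

def pgd_attack(x, eps=0.1, alpha=0.02, steps=10):
    """PGD: ascend the loss with signed gradient steps, project back into
    the eps-ball around x, and keep the strongest iterate found."""
    x_adv = x.copy()
    best, best_loss = x.copy(), recon_loss(x)
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(recon_grad(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # l_inf projection
        if recon_loss(x_adv) > best_loss:
            best, best_loss = x_adv.copy(), recon_loss(x_adv)
    return best

x = rng.normal(size=4)
x_adv = pgd_attack(x)
```

Adversarial learning in this setting means minimizing `recon_loss(pgd_attack(x))` rather than `recon_loss(x)`, so the model is trained on its own worst-case inputs.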

  • Open Access

    ARTICLE

    Domain-Specific NER for Fluorinated Materials: A Hybrid Approach with Adversarial Training and Dynamic Contextual Embeddings

    Jiming Lan1, Hongwei Fu1,*, Yadong Wu1,2, Yaxian Liu1,3, Jianhua Dong1,2, Wei Liu1,2, Huaqiang Chen1,2

    CMC-Computers, Materials & Continua, Vol.85, No.3, pp. 4645-4665, 2025, DOI:10.32604/cmc.2025.067289 - 23 October 2025

    Abstract In the research and production of fluorinated materials, large volumes of unstructured textual data are generated, characterized by high heterogeneity and fragmentation. These issues hinder systematic knowledge integration and efficient utilization. Constructing a knowledge graph for fluorinated materials processing is essential for enabling structured knowledge management and intelligent applications. Among its core components, Named Entity Recognition (NER) plays an essential role, as its accuracy directly impacts relation extraction and semantic modeling, which ultimately affects the knowledge graph construction for fluorinated materials. However, NER in this domain faces challenges such as fuzzy entity boundaries, inconsistent terminology,…

  • Open Access

    ARTICLE

    Deepfake Detection Using Adversarial Neural Network

    Priyadharsini Selvaraj1,*, Senthil Kumar Jagatheesaperumal2, Karthiga Marimuthu1, Oviya Saravanan1, Bader Fahad Alkhamees3, Mohammad Mehedi Hassan3,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.143, No.2, pp. 1575-1594, 2025, DOI:10.32604/cmes.2025.064138 - 30 May 2025

    Abstract With expeditious advancements in AI-driven facial manipulation techniques, particularly deepfake technology, there is growing concern over its potential misuse. Deepfakes pose a significant threat to society, particularly by infringing on individuals’ privacy. Amid significant endeavors to fabricate systems for identifying deepfake fabrications, existing methodologies often face hurdles in adjusting to innovative forgery techniques and demonstrate increased vulnerability to image and video clarity variations, thereby hindering their broad applicability to images and videos produced by unfamiliar technologies. In this manuscript, we endorse resilient training tactics to amplify generalization capabilities. In adversarial training, models are trained using…

  • Open Access

    ARTICLE

    Improving Security-Sensitive Deep Learning Models through Adversarial Training and Hybrid Defense Mechanisms

    Xuezhi Wen1, Eric Danso2,*, Solomon Danso2

    Journal of Cyber Security, Vol.7, pp. 45-69, 2025, DOI:10.32604/jcs.2025.063606 - 08 May 2025

    Abstract Deep learning models have achieved remarkable success in healthcare, finance, and autonomous systems, yet their security vulnerabilities to adversarial attacks remain a critical challenge. This paper presents a novel dual-phase defense framework that combines progressive adversarial training with dynamic runtime protection to address evolving threats. Our approach introduces three key innovations: multi-stage adversarial training with TRADES (Tradeoff-inspired Adversarial Defense via Surrogate-loss minimization) loss that progressively scales perturbation strength, maintaining 85.10% clean accuracy on CIFAR-10 (Canadian Institute for Advanced Research 10-class dataset) while improving robustness; a hybrid runtime defense integrating feature manipulation, statistical anomaly detection, and…
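The TRADES objective named in this abstract has a compact closed form: the natural cross-entropy plus a KL term that penalizes disagreement between clean and adversarial predictions. The sketch below shows the loss itself, with an assumed linear warm-up for the "progressively scales perturbation strength" part; the inner attack that produces the adversarial logits is omitted.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, y, tiny=1e-12):
    p = softmax(logits)
    return -float(np.mean(np.log(p[np.arange(len(y)), y] + tiny)))

def kl_div(p, q, tiny=1e-12):
    """Batch-mean KL(p || q) between two probability matrices."""
    return float(np.mean(np.sum(p * (np.log(p + tiny) - np.log(q + tiny)), axis=-1)))

def trades_loss(logits_clean, logits_adv, y, beta=6.0):
    """TRADES: natural loss + beta * KL(clean || adversarial).
    beta trades clean accuracy against robustness."""
    return cross_entropy(logits_clean, y) + beta * kl_div(
        softmax(logits_clean), softmax(logits_adv))

def eps_schedule(step, eps_max=8 / 255, warmup=10):
    """Assumed linear warm-up: the perturbation budget grows to eps_max,
    one plausible reading of 'progressively scales perturbation strength'."""
    return eps_max * min(1.0, (step + 1) / warmup)
```

With identical clean and adversarial logits the KL term vanishes and TRADES reduces to plain cross-entropy; any disagreement adds a penalty scaled by beta.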

  • Open Access

    ARTICLE

    Hybrid Memory-Enhanced Autoencoder with Adversarial Training for Anomaly Detection in Virtual Power Plants

    Yuqiao Liu1, Chen Pan1, YeonJae Oh2,*, Chang Gyoon Lim1,*

    CMC-Computers, Materials & Continua, Vol.82, No.3, pp. 4593-4629, 2025, DOI:10.32604/cmc.2025.061196 - 06 March 2025

    Abstract Virtual Power Plants (VPPs) are integral to modern energy systems, providing stability and reliability in the face of the inherent complexities and fluctuations of solar power data. Traditional anomaly detection methodologies often fail to adequately handle these fluctuations arising from solar radiation and ambient temperature variations. We introduce the Memory-Enhanced Autoencoder with Adversarial Training (MemAAE) model to overcome these limitations, designed explicitly for robust anomaly detection in VPP environments. The MemAAE model integrates three principal components: an LSTM-based autoencoder that effectively captures temporal dynamics to distinguish between normal and anomalous behaviors, an adversarial training module that…
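The memory component of a memory-enhanced autoencoder can be sketched independently of the LSTM encoder. This is a hedged illustration of the general mechanism found in memory-augmented autoencoders, not the paper's exact module: the latent code is re-expressed as an attention-weighted mixture of stored "normal" prototypes, so inputs unlike anything in memory reconstruct poorly and surface as anomalies.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def memory_readout(z, memory):
    """Re-express latent code z as a convex combination of memory slots.

    z:      latent code, shape (d,)
    memory: learned bank of prototype codes, shape (slots, d)
    Returns the reconstructed code and the addressing weights.
    """
    w = softmax(memory @ z)       # similarity-based addressing weights
    return w @ memory, w

# Toy example: two well-separated prototypes of "normal" behavior.
memory = np.array([[1.0, 0.0], [0.0, 1.0]])
z_normal = np.array([5.0, 0.0])        # strongly resembles the first prototype
z_hat, w = memory_readout(z_normal, memory)
```

Because the decoder only ever sees combinations of stored prototypes, anomalous latent codes cannot be reproduced faithfully, which inflates the reconstruction error used as the anomaly score.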

  • Open Access

    ARTICLE

    Improving Robustness for Tag Recommendation via Self-Paced Adversarial Metric Learning

    Zhengshun Fei1,*, Jianxin Chen1, Gui Chen2, Xinjian Xiang1,*

    CMC-Computers, Materials & Continua, Vol.82, No.3, pp. 4237-4261, 2025, DOI:10.32604/cmc.2025.059262 - 06 March 2025

    Abstract Tag recommendation systems can significantly improve the accuracy of information retrieval by recommending relevant tag sets that align with user preferences and resource characteristics. However, metric learning methods often suffer from high sensitivity, leading to unstable recommendation results when facing adversarial samples generated through malicious user behavior. Adversarial training is considered to be an effective method for improving the robustness of tag recommendation systems and addressing adversarial samples. However, it still faces the challenge of overfitting. Although curriculum learning-based adversarial training somewhat mitigates this issue, challenges still exist, such as the lack of a quantitative…

  • Open Access

    ARTICLE

    Mathematical Named Entity Recognition Based on Adversarial Training and Self-Attention

    Qiuyu Lai1,2, Wang Kang3, Lei Yang1,2, Chun Yang1,2,*, Delin Zhang2,*

    Intelligent Automation & Soft Computing, Vol.39, No.4, pp. 649-664, 2024, DOI:10.32604/iasc.2024.051724 - 06 September 2024

    Abstract Mathematical named entity recognition (MNER) is one of the fundamental tasks in the analysis of mathematical texts. To address the problems of local instability, fuzzy entity boundaries, and long-distance dependencies between entities that current neural networks exhibit in the Chinese mathematical entity recognition task, we propose a series of optimization methods and construct an Adversarial Training and Bidirectional long short-term memory-Self-attention Conditional random field (AT-BSAC) model. In our model, the mathematical text is vectorized by the word embedding technique, and small perturbations are added to the word vectors to generate adversarial samples, while…
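The perturbation step described here, adding a small gradient-based perturbation to word vectors, is what embedding-space adversarial training methods such as FGM do. The sketch below assumes the loss gradient with respect to the embeddings is already available from a backward pass; the normalization constant eps is an assumption.

```python
import numpy as np

def fgm_perturb(embeddings, grad, eps=0.5):
    """Fast-Gradient-Method-style perturbation of word embeddings:
    scale the loss gradient to norm eps and add it, producing an
    adversarial version of the same sentence representation."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return embeddings.copy()   # degenerate case: nothing to perturb
    return embeddings + eps * grad / norm

# Toy 3-token sentence with 4-d embeddings and a mock loss gradient.
rng = np.random.default_rng(1)
emb = rng.normal(size=(3, 4))
grad = rng.normal(size=(3, 4))
emb_adv = fgm_perturb(emb, grad)
```

Training then runs a second forward/backward pass on the perturbed embeddings and combines both losses, the standard loop for this style of adversarial training.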

Displaying results 1-10 of 18 (page 1).