Search Results (13)
  • Open Access

    ARTICLE

    Deepfake Detection Using Adversarial Neural Network

    Priyadharsini Selvaraj1,*, Senthil Kumar Jagatheesaperumal2, Karthiga Marimuthu1, Oviya Saravanan1, Bader Fahad Alkhamees3, Mohammad Mehedi Hassan3,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.143, No.2, pp. 1575-1594, 2025, DOI:10.32604/cmes.2025.064138 - 30 May 2025

    Abstract With expeditious advancements in AI-driven facial manipulation techniques, particularly deepfake technology, there is growing concern over its potential misuse. Deepfakes pose a significant threat to society, particularly by infringing on individuals’ privacy. Amid significant endeavors to fabricate systems for identifying deepfake fabrications, existing methodologies often face hurdles in adjusting to innovative forgery techniques and demonstrate increased vulnerability to image and video clarity variations, thereby hindering their broad applicability to images and videos produced by unfamiliar technologies. In this manuscript, we endorse resilient training tactics to amplify generalization capabilities. In adversarial training, models are trained using …
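The adversarial-training idea this abstract summarises can be sketched in a few lines. The snippet below is a minimal illustration on a hypothetical logistic-regression model with FGSM perturbations, not the authors' implementation; all function names are assumptions for the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """FGSM: step the input along the sign of the loss gradient.
    For logistic regression p = sigmoid(w @ x), dL/dx = (p - y) * w."""
    p = sigmoid(w @ x)
    return x + eps * np.sign((p - y) * w)

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200, seed=0):
    """Toy adversarial training loop: each step fits both the clean
    sample and its worst-case (FGSM) neighbour."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            x_adv = fgsm_perturb(xi, yi, w, eps)  # adversarial view of xi
            for x in (xi, x_adv):                 # train on clean + adversarial
                w -= lr * (sigmoid(w @ x) - yi) * x
    return w
```

Training on both views trades a little clean accuracy for robustness inside the eps-ball, which is the tension the paper's generalization-focused tactics address.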

  • Open Access

    ARTICLE

    Improving Security-Sensitive Deep Learning Models through Adversarial Training and Hybrid Defense Mechanisms

    Xuezhi Wen1, Eric Danso2,*, Solomon Danso2

    Journal of Cyber Security, Vol.7, pp. 45-69, 2025, DOI:10.32604/jcs.2025.063606 - 08 May 2025

    Abstract Deep learning models have achieved remarkable success in healthcare, finance, and autonomous systems, yet their security vulnerabilities to adversarial attacks remain a critical challenge. This paper presents a novel dual-phase defense framework that combines progressive adversarial training with dynamic runtime protection to address evolving threats. Our approach introduces three key innovations: multi-stage adversarial training with TRADES (Tradeoff-inspired Adversarial Defense via Surrogate-loss minimization) loss that progressively scales perturbation strength, maintaining 85.10% clean accuracy on CIFAR-10 (Canadian Institute for Advanced Research 10-class dataset) while improving robustness; a hybrid runtime defense integrating feature manipulation, statistical anomaly detection, and…
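For readers unfamiliar with the TRADES loss the abstract mentions: it adds a KL-divergence term that penalises disagreement between clean and adversarial predictions, weighted by a robustness/accuracy trade-off factor beta. A minimal single-example sketch (per-sample, numpy; not the paper's batched implementation):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def trades_loss(logits_clean, logits_adv, y, beta=6.0):
    """TRADES surrogate: cross-entropy on the clean example plus a
    beta-weighted KL term penalising clean/adversarial disagreement."""
    p = softmax(logits_clean)
    q = softmax(logits_adv)
    ce = -np.log(p[y])                  # clean-accuracy term
    kl = np.sum(p * np.log(p / q))      # robustness term KL(p || q)
    return float(ce + beta * kl)
```

When the adversarial logits match the clean logits the KL term vanishes and the loss reduces to plain cross-entropy; "progressively scaling perturbation strength", as in the paper, amounts to growing the attack radius used to produce `logits_adv` across training stages.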

  • Open Access

    ARTICLE

    Hybrid Memory-Enhanced Autoencoder with Adversarial Training for Anomaly Detection in Virtual Power Plants

    Yuqiao Liu1, Chen Pan1, YeonJae Oh2,*, Chang Gyoon Lim1,*

    CMC-Computers, Materials & Continua, Vol.82, No.3, pp. 4593-4629, 2025, DOI:10.32604/cmc.2025.061196 - 06 March 2025

    Abstract Virtual Power Plants (VPPs) are integral to modern energy systems, providing stability and reliability in the face of the inherent complexities and fluctuations of solar power data. Traditional anomaly detection methodologies often need to adequately handle these fluctuations from solar radiation and ambient temperature variations. We introduce the Memory-Enhanced Autoencoder with Adversarial Training (MemAAE) model to overcome these limitations, designed explicitly for robust anomaly detection in VPP environments. The MemAAE model integrates three principal components: an LSTM-based autoencoder that effectively captures temporal dynamics to distinguish between normal and anomalous behaviors, an adversarial training module that…
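The autoencoder half of this design rests on a simple mechanism: samples the model reconstructs poorly are scored as anomalous. A minimal sketch with a placeholder encode/decode pair (the paper's model is an LSTM autoencoder; the helpers here are hypothetical):

```python
import numpy as np

def reconstruction_scores(X, encode, decode):
    """Anomaly score = mean squared reconstruction error per sample."""
    return np.array([np.mean((x - decode(encode(x))) ** 2) for x in X])

def flag_anomalies(scores, quantile=0.9):
    """Flag samples whose error exceeds the chosen score quantile."""
    return scores > np.quantile(scores, quantile)
```

The memory-enhancement in MemAAE targets the failure mode the abstract names: an overly general autoencoder also reconstructs anomalies well, so restricting reconstructions to stored normal patterns keeps anomaly errors large.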

  • Open Access

    ARTICLE

    Improving Robustness for Tag Recommendation via Self-Paced Adversarial Metric Learning

    Zhengshun Fei1,*, Jianxin Chen1, Gui Chen2, Xinjian Xiang1,*

    CMC-Computers, Materials & Continua, Vol.82, No.3, pp. 4237-4261, 2025, DOI:10.32604/cmc.2025.059262 - 06 March 2025

    Abstract Tag recommendation systems can significantly improve the accuracy of information retrieval by recommending relevant tag sets that align with user preferences and resource characteristics. However, metric learning methods often suffer from high sensitivity, leading to unstable recommendation results when facing adversarial samples generated through malicious user behavior. Adversarial training is considered to be an effective method for improving the robustness of tag recommendation systems and addressing adversarial samples. However, it still faces the challenge of overfitting. Although curriculum learning-based adversarial training somewhat mitigates this issue, challenges still exist, such as the lack of a quantitative…

  • Open Access

    ARTICLE

    Mathematical Named Entity Recognition Based on Adversarial Training and Self-Attention

    Qiuyu Lai1,2, Wang Kang3, Lei Yang1,2, Chun Yang1,2,*, Delin Zhang2,*

    Intelligent Automation & Soft Computing, Vol.39, No.4, pp. 649-664, 2024, DOI:10.32604/iasc.2024.051724 - 06 September 2024

    Abstract Mathematical named entity recognition (MNER) is one of the fundamental tasks in the analysis of mathematical texts. To solve the existing problems of the current neural network that has local instability, fuzzy entity boundary, and long-distance dependence between entities in Chinese mathematical entity recognition tasks, we propose a series of optimization processing methods and construct an Adversarial Training and Bidirectional long short-term memory-Self-attention Conditional random field (AT-BSAC) model. In our model, the mathematical text was vectorized by the word embedding technique, and small perturbations were added to the word vector to generate adversarial samples, while …
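Perturbing word vectors to create adversarial samples, as described here, is commonly done with the Fast Gradient Method: the embedding gradient is L2-normalised and scaled to a small radius. A minimal sketch (the exact perturbation scheme in the paper may differ; these helpers are illustrative):

```python
import numpy as np

def fgm_perturbation(grad, eps=1.0):
    """Return the gradient rescaled to L2 norm eps; a zero gradient
    yields no perturbation. This is the FGM step used to attack
    continuous inputs such as word embeddings."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    return eps * grad / norm

def perturb_embeddings(emb, grad, eps=1.0):
    """Adversarial view of a word-embedding matrix for one training step."""
    return emb + fgm_perturbation(grad, eps)
```

Because tokens are discrete, the perturbation is applied in embedding space rather than to the text itself, which is why this style of adversarial training suits NER models.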

  • Open Access

    ARTICLE

    Improving Diversity with Multi-Loss Adversarial Training in Personalized News Recommendation

    Ruijin Xue1,2, Shuang Feng1,2,*, Qi Wang1,2

    CMC-Computers, Materials & Continua, Vol.80, No.2, pp. 3107-3122, 2024, DOI:10.32604/cmc.2024.052600 - 15 August 2024

    Abstract Users’ interests are often diverse and multi-grained, with their underlying intents even more so. Effectively capturing users’ interests and uncovering the relationships between diverse interests are key to news recommendation. Meanwhile, diversity is an important metric for evaluating news recommendation algorithms, as users tend to reject excessive homogeneous information in their recommendation lists. However, recommendation models themselves lack diversity awareness, making it challenging to achieve a good balance between the accuracy and diversity of news recommendations. In this paper, we propose a news recommendation algorithm that achieves good performance in both accuracy and diversity. Unlike…

  • Open Access

    ARTICLE

    LDAS&ET-AD: Learnable Distillation Attack Strategies and Evolvable Teachers Adversarial Distillation

    Shuyi Li, Hongchao Hu*, Xiaohan Yang, Guozhen Cheng, Wenyan Liu, Wei Guo

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 2331-2359, 2024, DOI:10.32604/cmc.2024.047275 - 15 May 2024

    Abstract Adversarial distillation (AD) has emerged as a potential solution to tackle the challenging optimization problem of loss with hard labels in adversarial training. However, fixed sample-agnostic and student-egocentric attack strategies are unsuitable for distillation. Additionally, the reliability of guidance from static teachers diminishes as target models become more robust. This paper proposes an AD method called Learnable Distillation Attack Strategies and Evolvable Teachers Adversarial Distillation (LDAS&ET-AD). Firstly, a learnable distillation attack strategies generating mechanism is developed to automatically generate sample-dependent attack strategies tailored for distillation. A strategy model is introduced to produce attack strategies that…

  • Open Access

    ARTICLE

    Boosting Adversarial Training with Learnable Distribution

    Kai Chen1,2, Jinwei Wang3, James Msughter Adeke1,2, Guangjie Liu1,2,*, Yuewei Dai1,4

    CMC-Computers, Materials & Continua, Vol.78, No.3, pp. 3247-3265, 2024, DOI:10.32604/cmc.2024.046082 - 26 March 2024

    Abstract In recent years, various adversarial defense methods have been proposed to improve the robustness of deep neural networks. Adversarial training is one of the most potent methods to defend against adversarial attacks. However, the difference in the feature space between natural and adversarial examples hinders the accuracy and robustness of the model in adversarial training. This paper proposes a learnable distribution adversarial training method, aiming to construct the same distribution for training data utilizing the Gaussian mixture model. The distribution centroid is built to classify samples and constrain the distribution of the sample features. The…

  • Open Access

    ARTICLE

    Instance Reweighting Adversarial Training Based on Confused Label

    Zhicong Qiu1,2, Xianmin Wang1,*, Huawei Ma1, Songcao Hou1, Jing Li1,2,*, Zuoyong Li2

    Intelligent Automation & Soft Computing, Vol.37, No.2, pp. 1243-1256, 2023, DOI:10.32604/iasc.2023.038241 - 21 June 2023

    Abstract Reweighting adversarial examples during training plays an essential role in improving the robustness of neural networks, which lies in the fact that examples closer to the decision boundaries are much more vulnerable to being attacked and should be given larger weights. The probability margin (PM) method is a promising approach to continuously and path-independently measuring such closeness between the example and decision boundary. However, the performance of PM is limited due to the fact that PM fails to effectively distinguish the examples having only one misclassified category and the ones with multiple misclassified categories, where…
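The probability margin (PM) discussed in this abstract has a compact definition: the predicted probability of the true class minus the largest probability among the other classes, so a negative margin means the example is misclassified. A one-function sketch (function name is illustrative):

```python
import numpy as np

def probability_margin(probs, y):
    """PM = p(true class) - max p(other class); negative iff misclassified.
    probs: softmax output vector, y: index of the true class."""
    others = np.delete(probs, y)
    return float(probs[y] - others.max())
```

The limitation the paper targets is visible in this formula: PM only sees the single largest competing class, so an example beaten by one wrong class and one beaten by several can receive the same margin and hence the same reweighting.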

  • Open Access

    ARTICLE

    Unsupervised Anomaly Detection Approach Based on Adversarial Memory Autoencoders for Multivariate Time Series

    Tianzi Zhao1,2,3,4, Liang Jin1,2,3,*, Xiaofeng Zhou1,2,3, Shuai Li1,2,3, Shurui Liu1,2,3,4, Jiang Zhu1,2,3

    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 329-346, 2023, DOI:10.32604/cmc.2023.038595 - 08 June 2023

    Abstract The widespread usage of Cyber Physical Systems (CPSs) generates a vast volume of time series data, and precisely determining anomalies in the data is critical for practical production. Autoencoder is the mainstream method for time series anomaly detection, and the anomaly is judged by reconstruction error. However, due to the strong generalization ability of neural networks, some abnormal samples close to normal samples may be judged as normal, which fails to detect the abnormality. In addition, the dataset rarely provides sufficient anomaly labels. This research proposes an unsupervised anomaly detection approach based on adversarial memory…

Displaying results 1-10 of 13.