Open Access

ARTICLE


Improving Security-Sensitive Deep Learning Models through Adversarial Training and Hybrid Defense Mechanisms

Xuezhi Wen1, Eric Danso2,*, Solomon Danso2

1 School of Computer Science and School of Cyber Science and Engineering, Nanjing University of Information Science and Technology, Nanjing, 210044, China
2 School of Computer Science, Nanjing University of Information Science and Technology, Nanjing, 210044, China

* Corresponding Author: Eric Danso. Email: email

Journal of Cyber Security 2025, 7, 45-69. https://doi.org/10.32604/jcs.2025.063606

Abstract

Deep learning models have achieved remarkable success in healthcare, finance, and autonomous systems, yet their security vulnerabilities to adversarial attacks remain a critical challenge. This paper presents a novel dual-phase defense framework that combines progressive adversarial training with dynamic runtime protection to address evolving threats. Our approach introduces three key innovations: multi-stage adversarial training with TRADES (Tradeoff-inspired Adversarial Defense via Surrogate-loss minimization) loss that progressively scales perturbation strength, maintaining 85.10% clean accuracy on CIFAR-10 (Canadian Institute for Advanced Research 10-class dataset) while improving robustness; a hybrid runtime defense integrating feature manipulation, statistical anomaly detection, and adaptive ensemble learning; and a 40% reduction in computational costs compared to PGD (Projected Gradient Descent)-based methods. Experimental results demonstrate state-of-the-art performance, achieving 66.50% adversarial accuracy on CIFAR-10 (outperforming TRADES by 12%) and 70.50% robustness against FGSM (Fast Gradient Sign Method) attacks on GTSRB (German Traffic Sign Recognition Benchmark). Statistical validation (p < 0.05) confirms the reliability of these improvements across multiple attack scenarios. The framework’s significance lies in its practical deployability for security-sensitive applications: in autonomous systems, it prevents adversarial spoofing of traffic signs (89.20% clean accuracy on GTSRB); in biometric security, it resists authentication bypass attempts; and in financial systems, it maintains fraud detection accuracy under attack. Unlike existing defenses that trade robustness for efficiency, our method simultaneously optimizes both through its unique combination of proactive training and reactive runtime mechanisms. This work provides a foundational advancement in adversarial defense, offering a scalable solution for protecting AI systems in healthcare diagnostics, intelligent transportation, and other critical domains where model integrity is paramount. The proposed framework establishes a new paradigm for developing attack-resistant deep learning systems without compromising computational practicality.
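To make the progressive adversarial training summarized above more concrete, the sketch below illustrates the general idea of multi-stage TRADES-style training in PyTorch. It is an assumption-laden illustration, not the authors' released implementation: the function names (trades_loss, train_progressive), the epsilon schedule, and hyperparameters such as beta = 6.0 are hypothetical placeholders chosen for readability.

import torch
import torch.nn.functional as F

def trades_loss(model, x, y, eps, step_size=0.007, num_steps=10, beta=6.0):
    # TRADES objective: clean cross-entropy plus a KL term that penalizes
    # prediction drift inside an L_inf ball of radius eps.
    model.eval()
    p_clean = F.softmax(model(x), dim=1).detach()
    x_adv = x.detach() + 0.001 * torch.randn_like(x)
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_clean,
                      reduction="batchmean")
        grad = torch.autograd.grad(kl, x_adv)[0]
        # Projected gradient ascent on the KL term, clipped to the eps-ball.
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    model.train()
    clean_loss = F.cross_entropy(model(x), y)
    robust_loss = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                           F.softmax(model(x), dim=1),
                           reduction="batchmean")
    return clean_loss + beta * robust_loss

def train_progressive(model, loader, optimizer, device,
                      eps_schedule=(2 / 255, 4 / 255, 8 / 255),
                      epochs_per_stage=30):
    # Multi-stage schedule: perturbation strength grows from stage to stage,
    # mirroring the paper's progressive scaling of attack budget.
    for eps in eps_schedule:
        for _ in range(epochs_per_stage):
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                loss = trades_loss(model, x, y, eps)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

The staged epsilon schedule reflects the progressive scaling described in the abstract; the runtime components of the framework (feature manipulation, statistical anomaly detection, and adaptive ensembling) would operate on top of the trained model and are not shown in this sketch.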

Keywords

Adversarial training; hybrid defense mechanisms; deep learning robustness; security-sensitive applications; adversarial attacks mitigation

Cite This Article

APA Style
Wen, X., Danso, E., & Danso, S. (2025). Improving Security-Sensitive Deep Learning Models through Adversarial Training and Hybrid Defense Mechanisms. Journal of Cyber Security, 7(1), 45–69. https://doi.org/10.32604/jcs.2025.063606
Vancouver Style
Wen X, Danso E, Danso S. Improving Security-Sensitive Deep Learning Models through Adversarial Training and Hybrid Defense Mechanisms. J Cyber Secur. 2025;7(1):45–69. https://doi.org/10.32604/jcs.2025.063606
IEEE Style
X. Wen, E. Danso, and S. Danso, “Improving Security-Sensitive Deep Learning Models through Adversarial Training and Hybrid Defense Mechanisms,” J. Cyber Secur., vol. 7, no. 1, pp. 45–69, 2025. https://doi.org/10.32604/jcs.2025.063606



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.