Open Access
ARTICLE
Deepfake Detection Using Adversarial Neural Network
1 Department of Computer Science and Engineering, Mepco Schlenk Engineering College, Sivakasi, 626005, India
2 Department of Electronics and Communication Engineering, Mepco Schlenk Engineering College, Sivakasi, 626005, India
3 Department of Information Systems, College of Computer and Information Sciences, King Saud University, Riyadh, 11451, Saudi Arabia
* Corresponding Authors: Priyadharsini Selvaraj. Email: ; Mohammad Mehedi Hassan. Email:
Computer Modeling in Engineering & Sciences 2025, 143(2), 1575-1594. https://doi.org/10.32604/cmes.2025.064138
Received 06 February 2025; Accepted 22 April 2025; Issue published 30 May 2025
Abstract
With rapid advances in AI-driven facial manipulation, particularly deepfake technology, there is growing concern over its potential misuse. Deepfakes pose a significant threat to society, particularly by infringing on individuals' privacy. Despite considerable efforts to build deepfake detection systems, existing methods often struggle to adapt to novel forgery techniques and are highly sensitive to variations in image and video quality, which limits their applicability to content produced by unseen generation methods. In this paper, we advocate robust training strategies to improve generalization. In adversarial training, the model is trained on deliberately crafted samples designed to deceive the classifier, which substantially improves its ability to generalize. To address this challenge, we propose a hybrid adversarial training framework that integrates Virtual Adversarial Training (VAT) with Two-Generated Blurred Adversarial Training. The combined framework strengthens the model's ability to detect deepfakes produced by unfamiliar deep learning techniques by encouraging it to learn more transferable features. Experimental results show that our model achieves higher accuracy than existing models.
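To make the training idea in the abstract concrete, the sketch below illustrates, in PyTorch, one VAT power-iteration step combined with a Gaussian-blurred adversarial view of the input. It is a minimal illustration only, not the authors' released code: the abstract does not specify the exact form of "Two-Generated Blurred Adversarial Training", so the blurred branch, loss weights (alpha, beta), and hyperparameters (xi, eps, kernel size) are assumptions for exposition.

```python
# Minimal sketch of a hybrid VAT + blurred adversarial loss (illustrative assumptions only).
import torch
import torch.nn.functional as F

def _l2_normalize(d):
    # Normalize each perturbation in the batch to unit L2 norm.
    return d / (d.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)

def vat_perturbation(model, x, xi=1e-6, eps=2.0):
    """One power-iteration step of Virtual Adversarial Training (Miyato et al.)."""
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)
    d = _l2_normalize(torch.randn_like(x)).requires_grad_(True)
    p_hat = F.log_softmax(model(x + xi * d), dim=1)
    adv_dist = F.kl_div(p_hat, p_clean, reduction="batchmean")
    grad = torch.autograd.grad(adv_dist, d)[0]
    return eps * _l2_normalize(grad.detach())

def gaussian_blur(x, k=5, sigma=1.5):
    # Depthwise Gaussian blur used to build a second, "blurred" adversarial view (assumed form).
    coords = torch.arange(k, dtype=x.dtype, device=x.device) - (k - 1) / 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).view(1, 1, 1, k)
    kernel = (g.transpose(2, 3) @ g).expand(x.size(1), 1, k, k)
    return F.conv2d(x, kernel, padding=k // 2, groups=x.size(1))

def hybrid_adversarial_loss(model, x, y, alpha=1.0, beta=1.0):
    """Cross-entropy on clean faces + VAT consistency + loss on a blurred adversarial view."""
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    r_adv = vat_perturbation(model, x)
    p_clean = F.softmax(logits.detach(), dim=1)
    vat = F.kl_div(F.log_softmax(model(x + r_adv), dim=1), p_clean, reduction="batchmean")
    blurred = F.cross_entropy(model(gaussian_blur(x + r_adv)), y)
    return ce + alpha * vat + beta * blurred
```

In this sketch the detector sees clean faces, VAT-perturbed faces, and blurred perturbed faces in the same loss, which is one plausible way to encourage the transferable features the abstract describes.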
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.