Open Access
ARTICLE
Robust Skin Cancer Detection through CNN-Transformer-GRU Fusion and Generative Adversarial Network Based Data Augmentation
1 Department of Computer Science and Engineering, Bharati Vidyapeeth’s College of Engineering, New Delhi, 110063, India
2 Department of Information Technology, Bharati Vidyapeeth’s College of Engineering, New Delhi, 110063, India
3 Applied College of Dhahran Aljunub, Department of Computer Science, King Khalid University, Aseer, Abha, 64261, Saudi Arabia
4 Department of Computer Science, College of Computer Science, Applied College Tanumah, King Khalid University, Abha, 61413, Saudi Arabia
5 Department of Computer Science, Technical and Engineering Specialties Unit, King Khalid University, Muhayil, 63699, Saudi Arabia
6 Galgotias Multi-Disciplinary Research & Development Cell (G-MRDC), Galgotias University, Greater Noida, 201308, UP, India
7 CSE Department, Technocrats Institute of Technology, Bhopal, 462022, India
8 Department of Environmental Health, Harvard T H Chan School of Public Health, Boston, MA 02115, USA
9 Department of Pharmacology & Toxicology, University of Arizona, Tucson, AZ 85721, USA
* Corresponding Authors: Mudassir Khan. Email: ; Saurav Mallik. Email:
Computer Modeling in Engineering & Sciences 2025, 144(2), 1767-1791. https://doi.org/10.32604/cmes.2025.067999
Received 19 May 2025; Accepted 31 July 2025; Issue published 31 August 2025
Abstract
Skin cancer remains a significant global health challenge, and early detection is crucial to improving patient outcomes. This study presents a novel deep learning framework that combines Convolutional Neural Networks (CNNs), Transformers, and Gated Recurrent Units (GRUs) for robust skin cancer classification. To address dataset imbalance, we employ StyleGAN3-based synthetic data augmentation alongside traditional techniques. The hybrid architecture effectively captures both local and global dependencies in dermoscopic images, while the GRU component models sequential patterns. Evaluated on the HAM10000 dataset, the proposed model achieves an accuracy of 90.61%, outperforming baseline architectures such as VGG16 and ResNet. Our system also demonstrates superior precision (91.11%), recall (95.28%), and AUC (0.97), highlighting its potential as a reliable diagnostic tool for melanoma detection. This work advances automated skin cancer diagnosis by addressing critical challenges related to class imbalance and limited generalization in medical imaging.
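For readers who want a concrete picture of the fusion the abstract describes, the sketch below shows one plausible way to connect a CNN feature extractor, a Transformer encoder, and a GRU aggregation head in PyTorch. The layer widths, depth, patch handling, and seven-class output (matching the HAM10000 lesion categories) are illustrative assumptions rather than the authors' published configuration, which is detailed in the full text.

```python
# Minimal sketch of a CNN-Transformer-GRU fusion classifier (assumed configuration).
import torch
import torch.nn as nn

class CNNTransformerGRU(nn.Module):
    def __init__(self, num_classes=7, embed_dim=256, n_heads=4, n_layers=2):
        super().__init__()
        # CNN backbone: extracts local texture/colour features from dermoscopic images
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.Conv2d(128, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer encoder: models global dependencies between spatial patches
        enc_layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=n_heads,
                                               batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        # GRU: aggregates the patch sequence into a single representation
        self.gru = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                       # x: (B, 3, H, W)
        feats = self.cnn(x)                     # (B, C, H', W') local features
        seq = feats.flatten(2).transpose(1, 2)  # (B, H'*W', C) patch sequence
        seq = self.transformer(seq)             # global attention over patches
        _, hidden = self.gru(seq)               # sequential aggregation of patches
        return self.classifier(hidden[-1])      # class logits

# Quick shape check on a dummy batch of two 224x224 RGB images
model = CNNTransformerGRU()
logits = model(torch.randn(2, 3, 224, 224))     # -> torch.Size([2, 7])
```

In this arrangement the CNN supplies local descriptors, the Transformer captures long-range relationships between image regions, and the GRU compresses the resulting patch sequence into a single vector for classification, mirroring the division of roles stated in the abstract.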
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

