
Open Access

ARTICLE

CANNSkin: A Convolutional Autoencoder Neural Network-Based Model for Skin Cancer Classification

Abdul Jabbar Siddiqui1,2,*, Saheed Ademola Bello2, Muhammad Liman Gambo2, Abdul Khader Jilani Saudagar3,*, Mohamad A. Alawad4, Amir Hussain5
1 SDAIA-KFUPM Joint Research Center for Artificial Intelligence, King Fahd University of Petroleum and Minerals, Dhahran, 31261, Saudi Arabia
2 Department of Computer Engineering, King Fahd University of Petroleum and Minerals, Dhahran, 31261, Saudi Arabia
3 Information Systems Department, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, 11432, Saudi Arabia
4 Department of Electrical Engineering, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, 11432, Saudi Arabia
5 School of Computing, Edinburgh Napier University, Merchiston Campus, Edinburgh, EH10 5DT, UK
* Corresponding Authors: Abdul Jabbar Siddiqui and Abdul Khader Jilani Saudagar
(This article belongs to the Special Issue: Exploring the Impact of Artificial Intelligence on Healthcare: Insights into Data Management, Integration, and Ethical Considerations)

Computer Modeling in Engineering & Sciences https://doi.org/10.32604/cmes.2026.074283

Received 07 October 2025; Accepted 31 December 2025; Published online 04 February 2026

Abstract

Visual diagnosis of skin cancer is challenging due to subtle inter-class similarities, variations in skin texture, the presence of hair, and inconsistent illumination. Deep learning models have shown promise in assisting early detection, yet their performance is often limited by the severe class imbalance present in dermoscopic datasets. This paper proposes CANNSkin, a skin cancer classification framework that integrates a convolutional autoencoder with latent-space oversampling to address this imbalance. The autoencoder is trained to reconstruct lesion images, and its latent embeddings are used as features for classification. To enhance minority-class representation, the Synthetic Minority Oversampling Technique (SMOTE) is applied directly to the latent vectors before classifier training. The encoder and classifier are first trained independently and later fine-tuned end-to-end. On the HAM10000 dataset, CANNSkin achieves an accuracy of 93.01%, a macro-F1 of 88.54%, and an ROC–AUC of 98.44%, demonstrating strong robustness across ten test subsets. Evaluation on the more complex ISIC 2019 dataset further confirms the model's effectiveness: CANNSkin achieves 94.27% accuracy, 93.95% precision, 94.09% recall, and a 99.02% F1-score, supported by high reconstruction fidelity (PSNR 35.03 dB, SSIM 0.86). These results establish the proposed latent-space balancing and fine-tuned representation learning as a strong benchmark for robust and accurate skin cancer classification across heterogeneous datasets.
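To make the pipeline described in the abstract concrete, the sketch below illustrates the general idea of latent-space oversampling with a convolutional autoencoder (encode the images, apply SMOTE to the latent vectors, then train a classifier head on the balanced embeddings). This is not the authors' implementation: the architecture, layer sizes, image resolution, and training settings are illustrative assumptions, the random arrays merely stand in for HAM10000/ISIC 2019 data loaders, and the paper's end-to-end fine-tuning stage is omitted. It assumes TensorFlow/Keras and imbalanced-learn are available.

```python
# Minimal sketch (not the authors' code): convolutional autoencoder + SMOTE in latent space.
import numpy as np
from tensorflow.keras import layers, models
from imblearn.over_sampling import SMOTE

def build_autoencoder(input_shape=(64, 64, 3), latent_dim=128):
    # Encoder: strided convolutions down to a flat latent vector.
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    latent = layers.Dense(latent_dim, name="latent")(x)

    # Decoder: reconstruct the lesion image from the latent vector.
    x = layers.Dense(16 * 16 * 64, activation="relu")(latent)
    x = layers.Reshape((16, 16, 64))(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 3, strides=2, padding="same", activation="sigmoid")(x)

    return models.Model(inp, out), models.Model(inp, latent)

# Dummy data standing in for dermoscopic images and 7-class labels;
# replace with real HAM10000 / ISIC 2019 loaders.
x_train = np.random.rand(350, 64, 64, 3).astype("float32")
y_train = np.random.randint(0, 7, size=350)

# Stage 1: train the autoencoder to reconstruct lesion images.
autoencoder, encoder = build_autoencoder()
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train, x_train, epochs=50, batch_size=64)

# Stage 2: encode the images and balance the classes in latent space with SMOTE.
z_train = encoder.predict(x_train)
z_balanced, y_balanced = SMOTE(random_state=42).fit_resample(z_train, y_train)

# Stage 3: train a lightweight classifier head on the balanced latent vectors.
# (The paper additionally fine-tunes encoder and classifier end-to-end; omitted here.)
clf = models.Sequential([
    layers.Input(shape=(z_train.shape[1],)),
    layers.Dense(64, activation="relu"),
    layers.Dense(len(np.unique(y_train)), activation="softmax"),
])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
clf.fit(z_balanced, y_balanced, epochs=30, batch_size=64)
```

A key design point in this scheme is that SMOTE interpolates between low-dimensional latent vectors rather than raw pixels, which avoids producing implausible synthetic images while still rebalancing the classes seen by the classifier.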

Keywords

Computational image processing; imbalanced classification; medical image analysis; melanoma; skin cancer classification