TY - EJOUR
AU - Dillshad, Veena
AU - Khan, Muhammad Attique
AU - Nazir, Muhammad
AU - Ahmad, Jawad
AU - AlHammadi, Dina Abdulaziz
AU - Houda, Taha
AU - Cho, Hee-Chan
AU - Chang, Byoungchol
TI - A Multi-Layers Information Fused Deep Architecture for Skin Cancer Classification in Smart Healthcare
T2 - Computers, Materials & Continua
PY - 2025
VL - 83
IS - 3
SN - 1546-2226
AB - Globally, skin cancer is a prevalent form of malignancy, and its early and accurate diagnosis is critical for patient survival. Clinical evaluation of skin lesions is essential, but challenges such as long waiting times and subjective interpretation make this task difficult. Recent advances in deep learning for healthcare have shown much success in diagnosing and classifying skin cancer and have assisted dermatologists in clinics. Deep learning improves the speed and precision of skin cancer diagnosis, enabling earlier prediction and treatment. In this work, we propose a novel deep architecture for skin cancer classification in smart healthcare. The proposed framework first performs data augmentation to resolve the class-imbalance issue in the selected dataset. The architecture is based on two customized convolutional neural network (CNN) models with small depth and filter sizes. In the first model, four residual blocks are added in a squeezed fashion with a small filter size. In the second model, five residual blocks are added with smaller depth and more useful weight information of the lesion region. Hyperparameters, including the learning rate, are selected through Bayesian optimization. After training the proposed models, deep features are extracted and fused using a novel information entropy-controlled Euclidean distance technique. The final features are passed to classifiers to obtain classification results. The trained model is also interpreted through LIME-based localization on the HAM10000 dataset. The experimental process of the proposed architecture is performed on two dermoscopic datasets, HAM10000 and ISIC2019, on which we obtained improved accuracies of 90.8% and 99.3%, respectively. The proposed architecture also achieved 91.6% for cancer localization. In conclusion, the accuracy of the proposed architecture is compared with several pre-trained and state-of-the-art (SOTA) techniques and shows improved performance.
KW - Smart health
KW - skin cancer
KW - internet of things
KW - deep learning
KW - residual blocks
KW - fusion
KW - optimization
DO - 10.32604/cmc.2025.063851
ER -