Computers, Materials & Continua
DOI:10.32604/cmc.2022.028561
Article

An Enhanced Deep Learning Method for Skin Cancer Detection and Classification

Mohamed W. Abo El-Soud1,2,*, Tarek Gaber2,3, Mohamed Tahoun2 and Abdullah Alourani1

1Department of Computer Science and Information, College of Science in Zulfi, Majmaah University, Al-Majmaah, 11952, Saudi Arabia
2Faculty of Computers and Informatics, Suez Canal University, Ismailia, 41522, Egypt
3School of Science, Engineering, and Environment, University of Salford, UK
*Corresponding Author: Mohamed W. Abo El-Soud. Email: m.wagieh@mu.edu.sa
Received: 12 February 2022; Accepted: 12 April 2022

Abstract: The prevalence of melanoma skin cancer has increased in recent decades. The greatest risk from melanoma is its ability to spread broadly throughout the body by means of lymphatic vessels and veins. Thus, the early diagnosis of melanoma is a key factor in improving the prognosis of the disease. Deep learning makes it possible to design and develop intelligent systems that can detect and classify skin lesions from visible-light images. Such systems can provide early and accurate diagnoses of melanoma and other types of skin diseases. This paper proposes a new method that can be used for both skin lesion segmentation and classification. The solution makes use of a convolutional neural network (CNN) with a two-dimensional convolutional (Conv2D) architecture operating in three phases: feature extraction, classification, and detection. The proposed method is mainly designed for skin cancer detection and diagnosis. Using the public International Skin Imaging Collaboration (ISIC) dataset, the impact of the proposed segmentation method on the classification accuracy was investigated. The obtained results showed that the proposed skin cancer detection and classification method performed well, with an accuracy of 94%, a sensitivity of 92%, and a specificity of 96%. A comparison with related work using the same dataset, i.e., ISIC, also showed that the proposed method performed better.

Keywords: Convolutional neural networks; activation function; separable convolution 2D; batch normalization; max pooling; classification

1  Introduction

Skin cancer is one of the most dangerous types of cancer. It is caused by deoxyribonucleic acid (DNA) damage and can lead to death. Cells with damaged DNA begin to grow uncontrollably and multiply rapidly. In 2021, it was estimated that 196,060 new cases of melanoma would be diagnosed in the USA alone, including 101,280 noninvasive (in situ) and 106,110 invasive cases. Thus, the demand for effective and rapid clinical examination methods is continuously growing [1]. According to statistical data from the World Health Organization (WHO), 2–3 million non-melanoma skin cancers and 132,000 melanoma skin cancers occur globally each year [2]. Therefore, modern medical science is seeking to assist dermatologists in their diagnoses without the need for special or expensive equipment. Such a system would help remote patients by providing a fast and accurate detection method for skin cancer. The early detection of skin cancer is associated with a better prognosis, allowing melanoma to be treated successfully. However, detecting the early signs of skin cancer from texture, shape, color, and size is challenging because cancerous structures have many features in common with normal skin tissue. To improve the recognition rate, computer-aided dermoscopy (CAD) has been used [3].

Because of the severity of melanoma, the significance of early diagnosis, the shortage of trained professionals in some regions, and less-than-perfect unaided classification methods, there exists a strong motivation to develop and utilize computer-aided diagnosis (CADx) systems to aid in the classification of skin lesions. Traditional computer vision algorithms are mainly used as classifiers that extract features such as shape, size, color, and texture in order to detect cancer. However, there are challenges in the detection of skin lesions, including low contrast, hair artifacts, irregular color illumination, and indistinct boundaries. Nowadays, artificial intelligence (AI) has gained the ability to address these problems. Deep learning utilizes a group of interconnected nodes and can be used effectively in the detection of melanoma. Its structure is similar to that of the human brain in terms of neural connectivity. Neural network nodes work collectively to solve specific problems by training on specific tasks [4]. The convolutional neural network (CNN) algorithm is one of the most widely recognized deep-learning algorithms [1]. In this study, we investigated such a system. Specifically, we examined an intelligent medical imaging-based skin lesion diagnosis system to assist in determining whether the skin lesion shown in a dermoscopic image is malignant or benign.

Several papers have addressed the early skin cancer detection problem using deep learning [13]. However, the results reported in these papers still do not satisfy the performance required for early melanoma detection. Therefore, the main aim of this study is to use the latest developments in deep learning to implement a classifier that is capable of examining an image containing a skin lesion and predicting an outcome (malignant or benign) with a sufficiently high degree of confidence to enhance current early melanoma detection methods. More specifically, it is desirable to have an intelligent model that can differentiate malignant skin lesions from benign ones and can also predict, based on a photo of a suspicious mole or patch, the occurrence of malignant skin lesions and other types of diseases that would require medical assistance.

The main goal of this method is to classify skin images and diagnose melanoma (skin cancer) with improved accuracy by utilizing deep learning models. This was achieved by proposing a method consisting of four basic stages: segmentation, feature extraction, feature selection, and classification. To achieve the classification, an enhanced CNN was proposed that makes use of two-dimensional convolutional (Conv2D) layers and a new ordering of the CNN layers.

The main contributions of the proposed method are as follows: (1) proposing a novel ordering of the CNN layers and its two-dimensional convolutional (Conv2D) architecture, and using them for the early detection of melanoma (skin cancer) from images; (2) evaluating the proposed method with the well-known cancer detection metrics of specificity, sensitivity, and accuracy; and (3) comparing and analysing the results against the related work.

The rest of this paper is organized as follows. A brief survey of the literature is provided in Section 2. Section 3 provides an overview of the techniques and algorithms used in the proposed method. The proposed method is presented in Section 4, and Section 5 discusses the experimental work and its results. Finally, conclusions are presented in Section 6.

2  Related Work

Melanoma is a common type of cancer that affects a large number of people worldwide. Deep learning methods have been shown to classify images with high accuracy in different fields, and many studies have utilized deep learning to automatically detect melanomas in dermatoscopy images. This section reviews some of these studies.

In [5], the proposed algorithm was divided into two parts: a hair removal process and a deep learning technique for the classification of skin lesions from dermatoscopy images. They utilized morphological operators and in-painting in the hair removal process, and then the deep learning technique was utilized to detect and remove any hairs remaining in the image. This was a pre-processing stage for the classification of melanomas on hair-bearing skin and added extra features to the images. In addition, they showed that the pre-processing stage increased the classification accuracy and assisted in melanoma detection. However, they did not explore the skin colors of people using various lesion images. They evaluated skin cancer detection and classification using the PH2 dermoscopic image dataset.

In [6], they proposed an artificial bee colony algorithm for the detection of melanoma. Although the computation time for melanoma image detection was very fast, the specialist would still have to perform a careful analysis depending on the patient's information. They obtained the best results on the utilized image databases compared with other related works. The accuracy, sensitivity, and specificity of the classification using their methodology nearly reached a 100% success rate, and the proposed technique was more suitable for melanoma detection than other methods. In [7], they proposed a classification method for 12 skin lesions that achieved high results. In addition, they presented studies on the model decision construction process using interpretability methods. However, this method needs to be further tested with different data (ages and ethnicities), which would improve the results. In [8], they focused on the implications of interactive Content-Based Image Retrieval (CBIR) tools and examined their classification accuracies. Based on their results, such systems may help people as educational tools and in image interpretation, allowing them to diagnose similar images. However, to confirm these results, more studies with physicians and experts are needed. In [9], they evaluated the performance of combinations of data balancing methods and machine learning techniques for the classification of skin cancer. Residual Networks (ResNet) combined with random forest techniques were used to extract the features that achieved the best recall value, utilizing a pipeline that added noise and synthetic cleaning before training.

In [10], they presented a CNN to improve patient phenotyping accuracy without requiring any inputs from users. They then considered deep learning interpretability by calculating gradient-based saliency in order to identify the phrases related to various phenotypes. They proposed the utilization of deep learning to assist clinicians during chart review by highlighting phrases regarding patient phenotypes. In addition, this methodology could be used to support the definitions of billing codes from phrases. In [11], they proposed six interpretable and discriminative representations for distinguishing skin lesions by incorporating accepted dermatological standards. In their experiments, these representations outperformed deep features and low-level features. The performance on clinical skin disease images (198 categories) was found to be comparable to that of dermatologists. In [12], they proposed an unsupervised deep learning framework based on a sparse stacked autoencoder for the detection of translucency from clinical basal cell carcinoma (BCC) image patches. This framework had a detection accuracy of 93%, with a sensitivity and specificity of 77% and 97.1%, respectively. The results of this framework could be used for translucency detection in skin patch images, and the framework will be developed to infer translucency in skin lesion images. Furthermore, a CADx system was used for BCC based on the translucency and diagnostic features.

In [13], they presented different non-invasive methods for the classification and detection of skin cancer. The detection of melanoma requires several steps, such as preprocessing, segmentation, feature extraction, and classification. They presented a survey of various algorithms, such as the Support Vector Machine (SVM), the Asymmetry, Border, Color and Diameter (ABCD) rule, the genetic algorithm, and the CNN. Each algorithm had advantages and disadvantages. From their results, the SVM had the fewest disadvantages, but the advantages of back-propagation and K-means clustering neural networks outweighed those of the other algorithms. In [14], they built a CNN model to predict new cases of melanoma. They divided the CNN model into three phases. The first phase was the preparation of the dataset, which included four processes. The first process was segmentation, which was used to detect the Region of Interest (ROI) in digital images. The second process was pre-processing, which used a bilateral filter to maintain sharp edges in the image. The third process was reducing the dimensions and complexity of the images by converting them into grayscale and then utilizing the Canny edge detection algorithm to detect the edges of the objects in the images. The fourth process involved the extraction of the final object using a bitwise algorithm. The second phase was the CNN layers, which were based on convolution layers (applied three times), max-pooling layers (applied three times), and fully connected layers (applied four times). The last phase was testing the CNN model, which obtained results with an accuracy of 0.74.

In [15], they utilized image acquisition, preprocessing, segmentation, noise removal, and feature extraction. They used supervised machine learning with the cubic regression method to train the machine, which automatically detected whether the skin cancer stage was benign or melanoma. In [16], they used deep learning models at the core of their implementation to construct models to assist in predicting skin cancer. The deep learning models were tested on datasets, and an area under the curve of 99.77% was observed. In [17], they proposed a technique that utilized a meta-heuristic algorithm for a CNN to train the biases and weights of the network based on back propagation. The objective of this technique was to minimize the error rate of the learning step for the CNN. The proposed technique was tested on images from the Dermquest and DermIS digital databases and compared with ten other classification techniques.

In [18], they introduced a two-step auto-classification framework for skin melanoma images that utilized transfer learning and adversarial training to detect melanoma. In the first step, they took advantage of inter-category variance to distribute data for a conditional image synthesis task, learning inter-category synthesis and mapping using representative category images from the over-represented samples via non-paired image-to-image translation. In the second step, they trained a CNN to classify melanoma using a training set combined with the synthesized under-represented category images. This classifier was trained by minimizing the focal loss, which helped the model learn from difficult examples while decreasing the weight of easy examples. They demonstrated through many experiments that the proposed MelaNet algorithm improved the sensitivity by a margin of 13.10% and the area under the receiver operating characteristic curve (AUC) by 0.78% on 1627 images. In [19], they proposed the eVida M6 model. The automatic extraction of the ROI within a dermatoscopic image provided a significant improvement in the classification performance by eliminating pixels that did not provide the classifier with lesion information. This model was a reliable predictor, with an excellent balance between the overall accuracy (0.904), sensitivity (0.820), and specificity (0.925).

It is possible to build an intelligent system to detect melanoma skin cancer using the deep learning Conv2D method. The proposed method includes components for system creation, data loading, network building, network training, network testing, and code generation. Different deep learning CNN algorithms can be applied sequentially. The main objective of the work presented here was the early detection of melanoma skin cancer using an enhanced CNN that produces the best accuracy in melanoma detection.

3  Preliminaries

This section presents an overview of the CNN building blocks used in the proposed framework: the depthwise separable 2D convolution, ReLU activation, batch normalization, max pooling 2D, flatten, and softmax layers. The CNN layers are shown in Fig. 1.


Figure 1: The architecture of the convolutional neural network (CNN) layers

3.1 Depthwise Separable Convolution 2D Layer

Separable convolutions consist of first performing a depthwise spatial convolution (which acts on each input channel separately) followed by a pointwise convolution that mixes the resulting output channels. The depth multiplier argument controls the number of output channels generated per input channel in the depthwise step, as shown in Fig. 2 [20].


Figure 2: The architecture of depthwise separable convolution 2D
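As a small illustration, the following sketch (TensorFlow/Keras; the 150 × 150 RGB input size is an assumption for this example) compares a separable convolution with a standard convolution of the same output shape. The `depth_multiplier` argument is left at its default of one depthwise channel per input channel.

```python
# Sketch: depthwise separable vs. standard 2D convolution in Keras.
# Both produce the same output shape, but the separable version
# factors the computation into a per-channel depthwise 3x3 convolution
# followed by a 1x1 pointwise convolution, using far fewer weights.
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 150, 150, 3))             # one RGB image (assumed size)
sep = layers.SeparableConv2D(32, (3, 3), depth_multiplier=1)
std = layers.Conv2D(32, (3, 3))

print(sep(x).shape, std(x).shape)                  # both: (1, 148, 148, 32)
print(sep.count_params(), std.count_params())      # 155 vs. 896 parameters
```

The parameter saving (155 vs. 896 weights here) is the reason separable convolutions reduce model complexity, as discussed above.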

3.2 Rectified Linear Units (ReLU)

A ReLU is an activation function used to improve training in deep CNNs and has a strong mathematical and biological basis. It computes f(y) = max(0, y): the output is 0 when y < 0, and it is a linear function with slope 1 when y ≥ 0, as shown in Fig. 3 [21].


Figure 3: The representation of Rectified Linear Units (ReLU) activation function
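A minimal sketch of the ReLU of Fig. 3, written in plain Python:

```python
# ReLU: zero for negative inputs, the identity (slope 1) otherwise.
def relu(y):
    return max(0.0, y)

print([relu(v) for v in (-2.0, -0.5, 0.0, 1.5)])  # [0.0, 0.0, 0.0, 1.5]
```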

3.3 Batch Normalization Layer

Batch normalization is a technique for training deep neural networks that standardizes the inputs to a layer for each mini-batch. This stabilizes the learning process and dramatically reduces the number of training epochs required to train deep networks [22].
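For reference, a common formulation of this per-mini-batch transformation is sketched below; the notation is the standard one from the literature (an assumption, since the paper does not state the formula):

$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y_i = \gamma \hat{x}_i + \beta$

where $\mu_B$ and $\sigma_B^2$ are the mean and variance of the mini-batch, $\epsilon$ is a small constant for numerical stability, and $\gamma$ and $\beta$ are learned scale and shift parameters.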

3.4 Max Pooling 2D Layer

Pooling layers provide an approach to down-sampling feature maps by summarizing the presence of features in the patches of the feature map. Two common pooling methods are average pooling and max pooling, which summarize the average presence of a feature and the most activated presence of a feature, respectively.
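A tiny worked example (plain Python, with invented values) of 2 × 2 max pooling with stride 2: each output cell keeps only the strongest activation in its patch.

```python
# 2x2 max pooling with stride 2 on a 4x4 feature map.
feature_map = [
    [1, 3, 2, 0],
    [4, 6, 1, 1],
    [0, 2, 8, 5],
    [1, 1, 3, 7],
]
pooled = [
    [max(feature_map[r][c], feature_map[r][c + 1],
         feature_map[r + 1][c], feature_map[r + 1][c + 1])
     for c in range(0, 4, 2)]
    for r in range(0, 4, 2)
]
print(pooled)  # [[6, 2], [2, 8]] -- each value is the max of one 2x2 patch
```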

3.5 Flatten Layer

The flatten layer is placed after the depthwise separable convolution layers and reshapes the multi-dimensional feature maps into a one-dimensional vector of features used for classification and detection. The flatten layer does not affect the batch size, as shown in Fig. 4 [23].


Figure 4: Flatten layer example
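A tiny illustration of flattening (the batch size of 4 and the 7 × 7 × 128 feature-map shape are assumptions for this example): each feature map becomes a single vector, while the batch axis is untouched.

```python
# Flattening a batch of 7x7x128 feature maps into vectors of 6272 values.
import numpy as np

batch = np.zeros((4, 7, 7, 128))           # four final feature maps (assumed)
flat = batch.reshape(batch.shape[0], -1)   # flatten everything but the batch
print(flat.shape)                          # (4, 6272): batch size unchanged
```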

3.6 Softmax Activation Function

The softmax function is a generalization of the sigmoid function and is utilized for multi-class classification. This layer has the same number of neurons as the number of classes. It can be represented as Eq. (1) [24].

$\sigma(u)_i = \frac{e^{u_i}}{\sum_{j=1}^{J} e^{u_j}}$ (1)
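A quick numerical check of Eq. (1): softmax turns raw scores into a probability distribution, e.g., over the two classes (benign vs. melanoma). The example scores are invented.

```python
# Softmax per Eq. (1): exponentiate each score and normalize by the sum.
import math

def softmax(u):
    exps = [math.exp(v) for v in u]
    total = sum(exps)
    return [e / total for e in exps]

print(softmax([2.0, 0.5]))  # ~[0.82, 0.18]; the outputs sum to 1
```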

4  The Proposed Method

This paper proposes an automated classification method to detect the presence of melanoma, which consists of four main stages, as depicted in Tab. 1. The purpose of these stages is to filter areas in the skin images that may contain a skin lesion to detect the presence of melanoma. We trained, tested, and validated this method using the resultant dataset, as shown in Algorithm 1.


The Conv2D algorithm summarizes the model in nine steps. In the first step, we input an image that is either benign or melanoma. In the second step, we apply a depthwise separable 2D convolution layer to extract features efficiently and reduce the computational complexity. In the third step, we apply the ReLU activation function to mitigate the vanishing-gradient problem and allow the model to perform better and learn faster. In the fourth step, we apply a batch normalization layer, which performs centering, scaling, and an affine transformation, to decrease the number of required training epochs. In the fifth step, we apply a max-pooling 2D layer to progressively decrease the total number of computations and parameters in the network. In the sixth step, we repeat the previous layers three times. In the seventh step, we apply a flatten layer to flatten the output of the max-pooling layer into one column, which is then input into an artificial neural network (ANN) for further processing. In the eighth step, we apply the softmax activation function, which is utilized to distinguish between classes in nonlinear problems. In the last step, the model evaluates the final resultant image as either benign or melanoma, as shown in Fig. 5.


Figure 5: The architecture of the Conv2D method
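To make these nine steps concrete, the following is a minimal sketch of the architecture in TensorFlow/Keras, not the authors' exact model. The filter counts (32, 64, 128) and the 150 × 150 input follow Tab. 2 and Section 5; the 3 × 3 kernels, valid padding, and 2 × 2 pooling are assumptions.

```python
# Sketch of the Conv2D pipeline: three blocks of
# SeparableConv2D -> ReLU -> BatchNormalization -> MaxPooling2D,
# then Flatten and a softmax over the two classes (benign vs. melanoma).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(150, 150, 3), num_classes=2):
    model = models.Sequential([layers.Input(shape=input_shape)])
    for filters in (32, 64, 128):                   # step 6: repeated 3 times
        model.add(layers.SeparableConv2D(filters, (3, 3)))   # step 2
        model.add(layers.Activation("relu"))                 # step 3
        model.add(layers.BatchNormalization())               # step 4
        model.add(layers.MaxPooling2D((2, 2)))               # step 5
    model.add(layers.Flatten())                              # step 7
    model.add(layers.Dense(num_classes, activation="softmax"))  # step 8
    return model

model = build_model()
model.summary()  # prints per-layer output shapes, comparable to Tab. 2
```

Under these assumptions, `model.summary()` reports the first three pooling outputs as 74 × 74, 36 × 36, and 17 × 17, matching Tab. 2; the block producing the final 7 × 7 map is not fully specified in the text and is omitted from this sketch.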

Tab. 2 below gives a summary of the model, including the output size and the number of filters for each layer. The first SeparableConv2D layer computes 32 filters of 3 × 3 feature maps over the input image. Similarly, the second SeparableConv2D layer computes 64 filters, and the third SeparableConv2D layer computes 128 filters. The next main layer is MaxPooling2D, which follows each SeparableConv2D layer. The objective of this layer is to down-sample the feature maps and reduce the dimensionality of the images. The output of the first MaxPooling2D layer is 74 × 74; similarly, the second MaxPooling2D layer outputs 36 × 36, the third 17 × 17, and the fourth 7 × 7. In this method, the images are evaluated as "melanoma" or "benign" cases, producing two classifiers whose performances were tested using the accuracy, specificity, precision, sensitivity, and f1-score metrics.


5  Experiments and Analysis

5.1 Dataset

The image dataset utilized in the proposed work is a well-known public dataset of malignant and benign skin cancer images, published in [25], used here to evaluate skin cancer detection. It consists of 10018 images, divided into two sections. The test section consisted of 5003 and 5015 images for benign and melanoma tumors, respectively. The training section consisted of 3502 and 3511 images for benign and melanoma tumors, respectively. Radiologists confirmed all the datasets and their annotations. To evaluate the proposed deep learning Conv2D method, 2637 images were utilized to train the proposed method, and the remaining images were utilized to test and validate it.

All the experiments were implemented on an Apple MacBook Air 13 laptop with an 8-core Graphics Processing Unit (GPU) and a 512 GB Solid State Drive (SSD). The implementation was compiled using Python 3.8. We implemented a model to evaluate the proposed Conv2D method and the parameters giving the best performance. We designed two major scenarios using the public dataset published in [25] to evaluate the proposed method. The results of these scenarios were evaluated using several measures known for skin cancer detection systems: the accuracy, specificity, sensitivity, precision, and f1-score metrics. These experiments were conducted to verify that the statistical analysis results generalize to other datasets. The measures applied in these experiments follow those in [26]. More details about the experiments are given below.

5.2 Evaluation Metrics

The accuracy represented the number of correct classifications over the total number of evaluated elements, as expressed in the following equation. The specificity and sensitivity metrics were used to evaluate performance in the field of medicine [27].

$\text{Accuracy} = \frac{TrP + TrN}{TrP + TrN + FaP + FaN}$ (10)

The specificity represented the number of correctly classified negative elements, as expressed in the following mathematical equation [27].

$\text{Specificity} = \frac{TrN}{TrN + FaP}$ (11)

The sensitivity represented the number of correctly classified positive elements, as expressed in the following mathematical equation [27].

$\text{Sensitivity} = \frac{TrP}{TrP + FaN}$ (12)

The precision represented the number of correctly classified elements out of all the positive elements classified, as expressed in the following mathematical equation [27].

$\text{Precision} = \frac{TrP}{TrP + FaP}$ (13)

The f1-score represented the harmonic mean of the recall and precision [27].

$\text{F1-Score} = \frac{2 \times TrP}{2 \times TrP + FaP + FaN}$ (14)
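Since all five metrics derive from the four confusion-matrix counts, they can be computed in one place. The sketch below mirrors Eqs. (10)–(14); the variable names follow the paper's TrP/TrN/FaP/FaN, and the example counts are invented rather than taken from the paper's results.

```python
# Metrics from Eqs. (10)-(14), computed from confusion-matrix counts.
def classification_metrics(TrP, TrN, FaP, FaN):
    accuracy = (TrP + TrN) / (TrP + TrN + FaP + FaN)     # Eq. (10)
    specificity = TrN / (TrN + FaP)                      # Eq. (11)
    sensitivity = TrP / (TrP + FaN)                      # Eq. (12), i.e. recall
    precision = TrP / (TrP + FaP)                        # Eq. (13)
    f1_score = 2 * TrP / (2 * TrP + FaP + FaN)           # Eq. (14)
    return accuracy, specificity, sensitivity, precision, f1_score

# Example with invented counts (not the paper's results):
print(classification_metrics(TrP=92, TrN=96, FaP=4, FaN=8))
```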

5.3 Experiments and Their Results

Two major scenarios were designed and conducted using a public ISIC dataset [25] to evaluate the proposed method. The description of each scenario, its results and discussion are given below.

5.3.1 Scenario 1: Conv2D Epochs

The goal of this scenario is to investigate the impact of the number of epochs on the performance of the proposed solution. The best results were selected based on the highest classification measures achieved. To run the experiments of this scenario, the following settings were used.

1.    All the images were resized to 150 × 150.

2.    The model divided these images into 70% for training, 10% for validation, and 20% for testing, with a learning rate value of 0.01.

3.    The activation functions used were ReLU and Softmax.

4.    The set of Conv2D layers was fixed while the number of training epochs was varied over 25, 50, 75, 100, 125, and 159 epochs (a minimal training sketch under these settings follows this list).
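The following minimal sketch shows one way to realize these settings in TensorFlow/Keras, building on the hypothetical `build_model` from the Section 4 sketch. The SGD optimizer and the cross-entropy loss are assumptions, as the paper does not name them.

```python
# Training-configuration sketch for Scenario 1 (learning rate 0.01).
import tensorflow as tf

model = build_model()  # hypothetical builder, defined in the Section 4 sketch
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),  # assumed optimizer
    loss="categorical_crossentropy",                        # assumed loss
    metrics=["accuracy"],
)
# train_ds and val_ds stand in for the (assumed) data pipelines of resized
# 150x150 images; the epoch count varies over the values listed above.
# history = model.fit(train_ds, validation_data=val_ds, epochs=100)
```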

5.3.2 The Results of Scenario 1

The results of training and testing for the various numbers of epochs are compared in Tab. 3. From this table, it can be noticed that the best results were obtained when 100 epochs were used on the input images. These results were evaluated using the specificity, sensitivity, precision, accuracy, and f1-score metrics. The testing and validation accuracy vs. the number of epochs is plotted in Fig. 6. It can be noticed that the testing and validation accuracy increases with the number of epochs until it reaches 100 epochs, where the testing accuracy is 94% and the validation accuracy is 93.6%.


Figure 6: Testing and validation accuracy

5.3.3 Scenario 2: Conv2D Learning Rate

The objective of this scenario is to determine the effect of the learning rate on the best Conv2D configuration obtained from Scenario 1. We aim to test which value of the learning rate gives the highest metric results. In this scenario, the following steps were followed.

1.    For the above setup, learning rates η = 0.001, 0.005, 0.01, 0.05, 0.1, and 0.5 were tested, and the obtained results were recorded.

2.    The set of layers was fixed for each experiment, with the same activation functions.

5.3.4 The Results of Scenario 2

Based on the results of Scenario 1, the number of epochs was fixed at 100 while all the measures were tested with different learning rates. All metric results are presented in Tab. 4. The best results across all metrics were obtained utilizing the "ReLU" and "Softmax" activation functions with a learning rate of 0.01. The importance of the proposed method lies in its early detection ability for melanoma skin cancer using the deep learning Conv2D.


5.4 Comparison with Related Work

To further evaluate our obtained results, we compared them with the results of the related work discussed in Section 2. The compared work was selected based on proposals for using a deep learning Conv2D for the early detection of melanoma using a public dataset, in terms of the accuracy, specificity, and sensitivity. A summary of this comparison is provided in Tab. 5. In [5], their results included the classification during validation and testing. Their model divided the PH2 images into 70% for training, 10% for validation, and 20% for testing with a learning rate value of 0.001. They repeated their operations with three classes (common nevus, atypical nevus, and melanoma), and with two classes (benign and melanoma). The best accuracy results of their proposed system were 96%, 86%, and 88% for melanoma, common nevus, and atypical nevi, respectively. In [14], they used the ISIC dataset with their proposed method, with 600 images for testing and 150 images for validation in detecting melanoma using a CNN with 25 epochs. The accuracy of the proposed method was 74%. In [18], the dataset used with their proposed CNN method was randomly divided into 70%, 10%, and 20% for training, validation, and testing sets, respectively, with different learning rates between 0.2 and 0.9. The performance results for the sensitivity, specificity, Positive Predictive Value (PPV), Negative Predictive Value (NPV), and accuracy were 95%, 92%, 84%, 95%, and 91%, respectively. In [17], the dataset used with their proposed approach consisted of 10 melanoma images and 727 benign images. The sensitivity results of their model were 89% and 100% for the benign and melanoma images, respectively, and their F-score results were 21% and 94% for the melanoma and benign images, respectively. In [19], their datasets were divided into 375 melanoma images and 1620 benign images for training, 30 melanoma images and 119 benign images for validation, and 117 melanoma images and 481 benign images for testing, without applying any data reduction or augmentation process. Their specificity, sensitivity, accuracy, and balanced accuracy results were 96%, 82%, 90%, and 87%, respectively.


From this table, it can be observed that the results obtained by the proposed method were the best in terms of accuracy, specificity, and sensitivity. In addition, our results were obtained from the largest dataset of the compared studies, except in [17]. However, their dataset included two skin cancer databases, the DermIS Digital Database and the Dermquest Database, with 3 classifiers. This means that our results are more reliable in terms of scalability.

6  Conclusions

In this study, the proposed Conv2D method based on a deep learning CNN was implemented using 3297 images provided by Kaggle. The proposed framework started with image preprocessing to extract the ROI images, and then augmented some images to produce more data. The resulting data were used to train a CNN with many layers, including a separable Conv2D layer, an activation ("ReLU") layer, a batch normalization layer, a max pooling 2D layer, and a dropout layer, to filter regions within the images that could contain skin lesions and detect the presence of melanoma. Testing the method produced promising results, with an accuracy of 0.94. In addition, our results were obtained from the largest dataset among most of the compared studies. This means that our results are more reliable in terms of scalability. In future work, we plan to investigate whether other deep learning techniques would further improve the accuracy and other metrics.

Acknowledgement: The authors would like to thank the Deanship of Scientific Research and the Research Center for Engineering and Applied Sciences, Majmaah University, Saudi Arabia, for their support and encouragement; the authors would also like to express deep thanks to our College (College of Science at Zulfi City, Majmaah University, Al-Majmaah 11952, Saudi Arabia), Project No. 31-1439.

Funding Statement: The work and the contribution were supported by the Research Center for Engineering and Applied Sciences and the College of Science at Zulfi City, Majmaah University.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

 1.  M. S. Ali, M. S. Miah, J. Haque, M. M. Rahman and M. K. Islam, “An enhanced technique of skin cancer classification using deep convolutional neural network with transfer learning models,” Machine Learning with Applications, vol. 5, no. 1, pp. 100036, 2021. [Google Scholar]

 2.  B. Krohling, P. B. Castro, A. G. Pacheco and R. A. Krohling, “A smartphone based application for skin cancer classification using deep learning with clinical images and lesion information,” ArXiv, vol. 1, no. 2, pp. 14353, 2021. [Google Scholar]

 3.  A. C. F. Gouabou, J.-L. Damoiseaux, J. Monnier, R. Iguernaissi, A. Moudafi et al., “Ensemble method of convolutional neural networks with directed acyclic graph using dermoscopic images: Melanoma detection application,” Sensors, vol. 21, no. 12, pp. 3999, 2021. [Google Scholar]

 4.  M. Dildar, S. Akram, M. Irfan, H. U. Khan, M. Ramzan et al., “Skin cancer detection: A review using deep learning techniques,” Environmental Research and Public Health, vol. 18, no. 10, pp. 5479, 2021. [Google Scholar]

 5.  J. A. A. Salido and C. Ruiz, “Using deep learning to detect melanoma in dermoscopy images,” Mach. Learn. Comput, vol. 8, no. 1, pp. 61–68, 2018. [Google Scholar]

 6.  M. H. A. Aljanabi, “Skin lesions detection using Meta-Heuristic method,” Biomedical Journal of Scientific and Technical Research (BJSTR), vol. 9, no. 2, pp. 1–5, 2018. [Google Scholar]

 7.  D. B. Mendes and N. C. da Silva, “Skin lesions classification using convolutional neural networks in clinical images,” ArXiv, vol. 1, no. 2, pp. 2316, 2018. [Google Scholar]

 8.  M. Sadeghi, P. K. Chilana and M. S. Atkins, “How users perceive content-based image retrieval for identifying skin images,” Understanding and Interpreting Machine Learning in Medical Image Computing Applications (Springer), vol. 11038, no. 1, pp. 141–148, 2018. [Google Scholar]

 9.  L. B. Maia, A. C. Lima, P. T. C. Santos, N. da Silva Lima, J. D. S. de Almeida et al., “Evaluation of melanoma diagnosis using imbalanced learning,” Anais do XVIII Simpósio Brasileiro de Computação Aplicada à Saúde (SBCS), vol. 18, no. 1, pp. 1–11, 2018. [Google Scholar]

10. S. Gehrmann, F. Dernoncourt, Y. Li, E. T. Carlson, J. T. Wu et al., “Comparing deep learning and concept extraction based methods for patient phenotyping from clinical narratives,” PLOS ONE, vol. 13, no. 2, pp. 1–19, 2018. [Google Scholar]

11. J. Yang, X. Sun, J. Liang and P. L. Rosin, “Clinical skin lesion diagnosis using representations inspired by dermatologist criteria,” in Proc. CVPR, New Orleans, Louisiana, pp. 1258–1266, 2018. [Google Scholar]

12. H. Huang, P. Kharazmi, D. I. McLean, H. Lui, Z. J. Wang et al., “Automatic detection of translucency using a deep learning method from patches of clinical basal cell carcinoma images,” in Proc. APSIPA ASC, Chiang Mai, Thailand, pp. 685–688, 2018. [Google Scholar]

13. N. Raut, A. Shah and H. ShailVira, “A study on different techniques for skin cancer detection,” Engineering and Technology (IRJET), vol. 5, no. 9, pp. 614–617, 2018. [Google Scholar]

14. M. A. Ottom, “Convolutional neural network for diagnosing skin cancer,” Adv. Comput. Sci. Appl, vol. 10, no. 7, pp. 333–338, 2019. [Google Scholar]

15. M. Gaana, S. Gupta and N. S. Ramaiah, “Diagnosis of skin cancer melanoma using machine learning,” Social Science Research Network (SSRN), vol. 1, no. 1, pp. 3358134, 2019. [Google Scholar]

16. M. A. Kadampur and S. Al Riyaee, “Skin cancer detection: Applying a deep learning based model driven architecture in the cloud for classifying dermal cell images,” Informatics in Medicine Unlocked, vol. 18, no. 1, pp. 1–6, 2020. [Google Scholar]

17. L. Zhang, H. J. Gao, J. Zhang and B. Badami, “Optimization of the convolutional neural networks for automatic detection of skin cancer,” Open Medicine, vol. 15, no. 1, pp. 27–37, 2019. [Google Scholar]

18. H. Zunair and A. B. Hamza, “Melanoma detection using adversarial training and deep transfer learning,” Physics in Medicine & Biology, vol. 65, no. 13, pp. 1–11, 2020. [Google Scholar]

19. M. F. Jojoa Acosta, L. Y. Caballero Tovar, M. B. Garcia-Zapirain and W. S. Percybrooks, “Melanoma diagnosis using deep learning techniques on dermatoscopic images,” BMC Medical Imaging, vol. 21, no. 1, pp. 1–11, 2021. [Google Scholar]

20. L. Dang, P. Pang and J. Lee, “Depth-wise separable convolution neural network with residual connection for hyperspectral image classification,” Remote Sensing, vol. 12, no. 20, pp. 3408, 2020. [Google Scholar]

21. A. F. Agarap, “Deep learning using rectified linear units (relu),” ArXiv, vol. 2, no. 1, pp. 8375, 2018. [Google Scholar]

22. H. Qi, F. Zejia, D. Qi, S. Lei, C. Ming-Ming et al., “On the connection between local attention and dynamic depth-wise convolution,” in Proc. ICLR, La Jolla, pp. 8669–8679, 2021. [Google Scholar]

23. H. Zhu and H. Wan, “Sound event detection based on convolutional neural networks with overlapping pooling structure,” Physics: Conference Series (IOP), vol. 1924, no. 1, pp. 12008, 2021. [Google Scholar]

24. S. Siddharth, S. Simone and A. Anidhya, “Activations functions in neural networks,” Engineering Applied Sciences and Technology, vol. 4, no. 12, pp. 310–316, 2020. [Google Scholar]

25. Skin cancer: Malignant vs. Benign, processed skin cancer pictures of the ISIC archive. [Online]. Available: https://www.kaggle.com/fanconic/skin-cancer-malignant-vs-benign. [Google Scholar]

26. J. A. Almaraz-Damian, V. Ponomaryov, S. Sadovnychiy and H. Castillejos-Fernandez, “Melanoma and nevus skin lesion classification using handcraft and deep learning feature fusion via mutual information measures,” Entropy, vol. 22, no. 4, pp. 484, 2020. [Google Scholar]

27. H. Dhahri, E. Al Maghayreh, A. Mahmood, W. Elkilani and M. Faisal Nagi, “Automated breast cancer diagnosis based on machine learning algorithms,” Healthcare Engineering, vol. 2, no. 1, pp. 1–11, 2019. [Google Scholar]

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.