IoMT Enabled Melanoma Detection Using Improved Region Growing Lesion Boundary Extraction

Abstract: In recent years, the Internet of Medical Things (IoMT) and cloud-based healthcare applications and services have proven beneficial for better decision-making. Melanoma is a deadly cancer with a higher mortality rate than other skin cancer types such as basal cell, squamous cell, and Merkel cell carcinoma. However, detection and treatment at an early stage result in a higher chance of survival. The classical methods of detection are expensive and labor-intensive; they rely on the practitioner's level of training, and the availability of specialized equipment is essential for the early detection of melanoma. Recent improvements in computer-aided systems are providing very encouraging results in terms of precision and effectiveness. In this article, we propose an improved region growing technique for efficient extraction of the lesion boundary. This analysis and detection of melanoma is helpful for the expert dermatologist. CNN features are extracted using the pre-trained VGG-19 deep learning model, and the selected features are then classified by SVM. The proposed technique is gauged on two openly accessible datasets, ISIC 2017 and PH2. For the evaluation of our proposed framework, qualitative and quantitative experiments are performed. The suggested segmentation method yields encouraging statistical results: a Jaccard index of 0.94 with 95.7% accuracy on ISIC 2017, and a Jaccard index of 0.91 with 93.3% accuracy on the PH2 dataset. These results are notably better than those of prevalent methods on the same datasets. The machine learning SVM classifier performs significantly well on the suggested feature vector, and a comparative analysis with existing methods is carried out in terms of accuracy. The proposed method detects and classifies melanoma far better than other methods, and our framework achieves promising results in both the segmentation and classification phases.


Introduction
The skin is the fastest-growing organ of the human body. It envelops the body and serves as its protective layer against external harm. Among the various kinds of cancer, skin cancer is particularly significant for people with light skin tones, and melanoma is its riskiest form. It usually affects young people between the ages of 15 and 29 [1]. According to the American Cancer Society (ACS) [2], in 2020 an estimated 6,850 of 100,350 melanoma cases were expected to be fatal, of which 4,610 were male and 2,240 female. The critical nature of the disease makes it necessary to detect skin cancers within a suitable time, reducing the need for biopsy and enabling more reliable diagnostic results.
Skin cancer is categorized into two groups: non-melanoma and melanoma. The second category is selected in this research because it is the deadlier of the two and can quickly spread from one body part to another. However, survival rates are higher if it is identified and diagnosed in time. There are two approaches to melanoma detection: the clinical diagnosis-based approach and the computer-aided diagnosis system.
The clinical diagnosis-based approaches comprise two methods: 1) biopsy and 2) dermatologist analysis. The biopsy is an invasive method; the dermatologist's analysis, on the other hand, is non-invasive, but its accuracy is only about 75% [3], which is not promising. In the biopsy method, a skin tissue sample is taken for laboratory testing. This melanoma detection test takes time and is painful. A patient can tolerate this one-time pain, but if melanoma is not detected at an early stage, it becomes dangerous and can even cause the patient's death.
For melanoma detection, dermatologists follow the ABCDE rule, the seven-point checklist, or the three-point checklist. However, when the contrast of the dermoscopic image is low and the lesion cannot be properly differentiated from healthy skin, these methods fail to work accurately. Moreover, delays in obtaining an appointment and the limited availability of expert dermatologists also contribute to late melanoma detection [4]. For these reasons, computer-aided systems are in great demand and are necessary to reduce the death rate of melanoma patients.
Detecting and identifying skin cancer melanoma is a critical task for dermatologists. As a result, many computer-aided systems have been developed for automatic melanoma detection and classification to support dermatologists and patients. Building such a computerized system involves four major steps: 1) pre-processing, 2) lesion segmentation, 3) feature extraction and selection, and 4) classification.
In medical image processing, the segmentation of the skin lesion is necessary for the primary analysis of melanoma from dermoscopic images [5]. Researchers have proposed several computer-aided systems for skin cancer melanoma detection [6]. A variety of approaches such as threshold-based [7,8], region-based, edge-based, saliency map [9,10], and convolutional neural network (CNN)/deep learning methods have already been applied to the task of lesion segmentation. Threshold-based approaches are among the simplest methods [11,12] and work well on high-contrast images.
Region-based methods are also beneficial where the lesion and skin colors are heterogeneous [13]. A seed-based approach is used in region-based techniques, which merges regions according to the image information and the comparative data of adjacent pixels. Region-based approaches include region growing, J-Image Segmentation (JSEG), watershed [14], and Statistical Region Merging (SRM) [15]. Edge-based segmentation approaches mainly utilize edge information from the input image to estimate the lesion boundary, often followed by post-processing techniques [16].
An efficient feature vector helps classifiers categorize objects accurately. Different features such as hand-crafted, pattern, histogram-based, and deep learning features are utilized for skin lesion classification. Hand-crafted features cannot detect melanoma accurately because skin lesion datasets contain a wide variety of images and different artifacts [17]. Deep learning is a new and robust field of machine learning that helps learn higher-level abstractions of the data, and it can be implemented through several different algorithms. Deep learning computes hierarchical, higher-level features that cannot be efficiently extracted by traditional machine learning techniques [18,19].
Distinguishing malignant melanoma from benign lesions is also very difficult for a computer-aided system [20]. Fig. 1 shows two columns of images: one benign (non-cancerous) and the other malignant (cancerous). The images on both sides look similar; the human eye is unable to identify which lesion is melanoma and which is benign [21]. Even dermatologists suggest a biopsy when they are unsure about a lesion, and the biopsy method is itself painful and time-consuming. Moreover, challenging images include gel/bubbles, marker ink, color charts, ruler marks, dark corners, skin hairs, and, most crucially, low contrast [22,23]. Melanoma is a fatal form of skin cancer that must be diagnosed early for effective treatment [24,25]; it affects a patient's life and can even cause death if the diagnosis is not accomplished in time. A rough pigment network and other suspicious signs are not sufficient to diagnose melanoma from dermoscopic images [26,27]. Hence, it is essential to develop an efficient and accurate method for analyzing skin lesions in big datasets, extracting the lesion boundary, and classifying lesions into 'Benign' and 'Melanoma' [28,29]. Both dermatologists and patients will benefit from the proposed method; patients can even avoid the painful biopsy test and save money on other tests.
The main contributions of this research article are: 1) We enhance the quality of low-contrast lesion areas by implementing a novel statistical intensity-histogram method that calculates the positive and negative slopes. 2) We propose an improved region growing approach to detect the boundary of skin lesions in dermoscopic images using a convolution filter and morphological operations.
3) The PH2 dataset has the fewest images among all classes, so data augmentation is applied to balance the classes.
4) The pre-trained deep Visual Geometry Group (VGG-19) model is used to extract features from segmented skin lesion images. Before the extraction of convolutional neural network (CNN) features, transfer learning is performed on the pre-trained VGG-19 model.
The rest of this article is organized as follows. Section 2 defines the proposed methodology in detail. Section 3 evaluates and analyzes the proposed method. Section 4 summarizes our fundamental findings based on the experimental results. Finally, Section 5 concludes our work.

Proposed Methodology
The proposed methodology has four phases, as shown below in Fig. 2.

Pre-Processing
Pre-processing is the mandatory step for every computer-aided system [30]. The pre-processing performs the essential task of improving image quality and eliminating unnecessary objects from an image. Different algorithms like histogram-based methods, morphological-based methods, and softcomputing-based methods are used to enhance the quality of low contrast images [31].
The diverse dermoscopic images acquired from the two datasets are not directly suitable for the proposed segmentation process. Therefore, before passing an image to the segmentation phase, it is mandatory to pre-process it. The pre-processing phase is essential to obtain high accuracy in the subsequent phases, especially segmentation. In addition, the ISIC dataset contains a variety of dermoscopic images, among which low-contrast images are challenging to handle.
Here, the contrast enhancement technique is implemented using an intensity histogram. In this technique, the dominant level (local maximum) is calculated, and then contrast is equally distributed over the dermoscopic image. First, the histogram of every image is created to estimate the local minima of the histogram slope. The positive and negative slopes are graphically described in Fig. 3: a positive slope corresponds to increasing x and y values, and a negative slope to decreasing x and y values. The positive slope (PS) is defined in Eq. (1) and the negative slope (NS) in Eq. (2). Here, delta y (Δy) and delta x (Δx) are the differences between the slope's ending values (y2, x2) and initial values (y1, x1). After evaluating the positive and negative slope values, the local minima are extracted by applying the condition that where x (intensity level) and y (number of pixels at that intensity level) are greater than 0, the values fall under the positive slope; the local minima (LM) values are described in Eq. (3).
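The slope-based search for local minima described above can be sketched as follows. This is a minimal illustration in NumPy, not the paper's code: the bin count, the V-shaped toy histogram, and the function name `histogram_local_minima` are assumptions.

```python
import numpy as np

def histogram_local_minima(gray):
    """Locate local minima of an intensity histogram via slope signs.

    The slope between adjacent bins is Δy/Δx with Δx = 1 per bin, so
    its sign is the sign of Δy; a local minimum is a bin where the
    slope changes from negative back to positive.
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    slope = np.diff(hist.astype(np.int64))  # Δy between adjacent bins
    minima = []
    for x in range(1, len(slope)):
        # negative slope into the bin, positive slope out of it
        if slope[x - 1] < 0 and slope[x] > 0:
            minima.append(x)
    return hist, minima

# toy image whose histogram is V-shaped: one valley at intensity 128
counts = np.abs(np.arange(256) - 128) + 1
img = np.repeat(np.arange(256), counts).astype(np.uint8)
hist, minima = histogram_local_minima(img)  # minima -> [128]
```

The detected minima can then serve as cut points for redistributing contrast around the dominant intensity level.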
Fig. 4 shows the results before contrast enhancement (original image) and after contrast enhancement. The visibility of low-contrast images is markedly improved after applying the contrast enhancement technique.

Skin Lesion Segmentation
In phase 2, the pre-processed image is used to segment the lesion. The improved region-growing technique is applied to extract the lesion boundary. In the suggested method, the preliminary seed mask is created using a region window of size 50 × 50. Two conditions are checked: if an adjacent pixel has a similar color value and is not already part of another cluster or region, the pixel is added to the cluster. This process continues until the pixel values no longer change.
After this step, a convolution filter is applied to extract the lesion boundary. This operation blurs the image so that it becomes smooth and no sharp edges remain. A re-thresholding value is then applied to refine the current boundary edges of the lesion. Finally, the improved segmented binary image is obtained after performing a morphological dilation operation. The steps of the proposed lesion segmentation method are graphically demonstrated in Fig. 5. The traditional region growing method completes the contour only by adding similar neighboring pixels; in our improved region growing method, a convolution filter and a re-threshold value are additionally applied.
The detailed process flow of the algorithm is explained below:
2: Allocate the initial mask of size 50 × 50.
3: Using this seed mask, define the gradient of its adjacent pixels in the image.
4: For each pixel value in the gradient of adjacent pixels:
5: Do
6: If the pixel value is not part of another cluster or region (Pi) and its value (Pj) is similar/equal to or less than that of the mask M,
7: Then add the pixel value to the corresponding cluster; otherwise check the next pixel value.
11: Repeat until all pixel values in the adjacent gradient are checked.
12: Apply the convolution filter (blurring the segmented image).
13: Refine the edges using re-thresholding.
14: Perform the filling operation.
15: Perform the closing operation.
16: Perform a morphological dilation operation using four disk structuring elements.
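The steps above can be sketched roughly as follows. This is a minimal illustration under stated assumptions: a grayscale input, a fixed intensity tolerance `tol` standing in for the similarity test, SciPy for the morphological steps, and iterated dilation with a square element approximating the paper's four disk structuring elements.

```python
import numpy as np
from scipy import ndimage as ndi

def improved_region_growing(gray, tol=20, seed_size=50):
    """Sketch of the improved region-growing pipeline: grow a region
    from a central 50x50 seed window, blur the grown mask with a
    convolution filter, re-threshold, fill, close, and dilate."""
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    seed = np.zeros((h, w), dtype=bool)
    seed[cy - seed_size // 2:cy + seed_size // 2,
         cx - seed_size // 2:cx + seed_size // 2] = True
    mean_val = float(gray[seed].mean())
    grayf = gray.astype(float)

    # steps 3-11: iteratively add 4-connected neighbours whose value
    # is within `tol` of the seed region's mean intensity
    region = seed.copy()
    struct = ndi.generate_binary_structure(2, 1)
    while True:
        frontier = ndi.binary_dilation(region, struct) & ~region
        accept = frontier & (np.abs(grayf - mean_val) <= tol)
        if not accept.any():
            break
        region |= accept

    # step 12: convolution filter (blur the binary mask)
    blurred = ndi.uniform_filter(region.astype(float), size=5)
    # step 13: re-thresholding to refine the boundary
    mask = blurred > 0.5
    # steps 14-16: filling, closing, dilation
    mask = ndi.binary_fill_holes(mask)
    mask = ndi.binary_closing(mask, structure=struct)
    mask = ndi.binary_dilation(mask, structure=struct, iterations=4)
    return mask

# toy image: dark circular "lesion" on brighter "skin"
yy, xx = np.mgrid[:200, :200]
img = np.full((200, 200), 200, dtype=np.uint8)
img[(yy - 100) ** 2 + (xx - 100) ** 2 < 60 ** 2] = 60
mask = improved_region_growing(img)
```

On this toy image the region stops growing at the lesion's edge, since the surrounding skin falls outside the tolerance band around the seed mean.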

Features Extraction and Selection
VGG-Net is one of the CNN architectures that is used extensively and is well documented [32]. This ConvNet has been favored over others due to its excellent performance on the ImageNet dataset. The two publicly available variations have 16 and 19 weight layers, chosen for their superior results over other variations [33].
In this article, the VGG-19 architecture, shown in Fig. 6, is selected because it generalizes well to other datasets. The network's input layer requires a 224 × 224-pixel RGB image. The input image passes through five convolutional blocks, which use small convolutional filters with a 3 × 3 receptive field. Each convolutional block contains a 2D convolution layer operation (the number of filters changes between blocks). Each hidden layer uses ReLU as its activation function (nonlinearity operation), and spatial pooling is performed by feeding a max-pooling layer. The network ends with a classifier block containing three FC layers. A 70:30 ratio is selected between training and testing data. The number of images available in the datasets is insufficient to train the network, so transfer learning is employed on the pre-trained VGG-19 model. In addition, data augmentation is applied to the melanoma class, which has fewer images: the cancer type 'Malignant Melanoma' has the fewest images, so augmentation is applied to balance the classes.
After that, 4096 features are extracted from the 7th fully connected layer of the neural network. Then an entropy-based feature selection method is applied, and 2000 features are selected to classify the skin lesion images.
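The entropy-based selection step can be sketched as below. This is a minimal illustration, not the paper's implementation: scoring each feature column by the Shannon entropy of its value distribution, the bin count, and the random toy activations are all assumptions, since the exact entropy criterion is not spelled out.

```python
import numpy as np

def select_by_entropy(features, k=2000, bins=32):
    """Rank each feature column by the Shannon entropy of its value
    distribution across samples and keep the k most informative ones.

    features : (n_samples, n_features) array of CNN activations
    returns  : (selected feature matrix, indices of kept columns)
    """
    n_samples, n_features = features.shape
    scores = np.empty(n_features)
    for j in range(n_features):
        hist, _ = np.histogram(features[:, j], bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        scores[j] = -(p * np.log2(p)).sum()  # Shannon entropy of column j
    keep = np.argsort(scores)[::-1][:k]      # highest-entropy columns first
    return features[:, keep], keep

# toy example: 100 samples x 4096 FC7 activations, keep the top 2000
rng = np.random.default_rng(0)
acts = rng.random((100, 4096))
selected, idx = select_by_entropy(acts, k=2000)
```

The resulting 2000-column matrix is what would be handed to the classifier in the next phase.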

Lesion Classification
The classification is performed on the 2000 selected deep CNN features. Two publicly available online datasets, ISIC 2017 and PH2, are utilized for the analysis. The machine learning classifier SVM is used to categorize the two classes: melanoma and benign.
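A minimal sketch of the classification stage with scikit-learn, assuming that "Cubic-SVM" corresponds to an SVC with a degree-3 polynomial kernel and using synthetic Gaussian stand-ins for the real 2000-dimensional deep feature vectors:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# synthetic stand-in for the 2000-D deep feature vectors of two classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2000)),   # "benign"
               rng.normal(1.0, 1.0, (200, 2000))])  # "melanoma"
y = np.array([0] * 200 + [1] * 200)

# 70:30 train/test split, as used in the paper
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = SVC(kernel="poly", degree=3)  # "Cubic-SVM": degree-3 polynomial kernel
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

On real data, `X` would be the entropy-selected feature matrix from the previous phase and `y` the dermatologist-confirmed labels.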

Experimental Setup and Datasets
The proposed framework is implemented using MATLAB 2019a on Windows 10 Home edition. The hardware used has the following specifications: a 2.8 GHz Intel(R) Core(TM) i5-8250U CPU with a 64-bit operating system, 12.0 GB RAM, and an NVIDIA GeForce 940MX graphics card. The framework for lesion detection and melanoma recognition is assessed on two freely available online dermoscopic datasets, ISIC 2017 and PH2 [34]. The ISIC 2017 dataset contains 2,600 dermoscopic images with a resolution of 296 × 1456; 2,109 are non-melanoma images and 491 are melanoma images, as shown in Tab. 1. A total of 200 images are available in the PH2 dataset; these 8-bit images have a resolution of 768 × 660. PH2 has two categories: 1) non-melanoma (benign, common nevi) and 2) melanoma. The detailed dataset division is shown in Tab. 2. In addition, images segmented by expert dermatologists, known as ground truth images, are available for validating the segmentation phase.

Skin Lesion Segmentation Results
The segmentation results are analyzed by comparing them against the ground truth images, and statistical performance measures are used to assess the proposed segmentation method.

Qualitative Experiment
For the qualitative experiments, ground truth images are used to measure the similarity between the expert-segmented images and the images segmented by our proposed method. First, some random images are selected from the ISIC 2017 dataset and shown graphically in Fig. 7. Column (a) lists the original ISIC 2017 images. Column (b) shows the pre-processing step, in which the contrast enhancement method is applied. Column (c) presents the binary segmented images, and column (d) shows the segmented images after applying the proposed segmentation method. The last column (e) compares the segmented images with the ground truth: the green boundary marks the ground truth image and the blue boundary marks the segmented image. It can be observed that the segmented images obtained from the proposed method lie very close to the ground truth. Likewise, for the PH2 dataset, random images are chosen and visualized in Fig. 8; the same steps (a) to (e) are performed as for ISIC 2017. Column (e) shows that the proposed segmentation method works similarly well on the PH2 dataset.
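The green/blue boundary overlay used in column (e) can be reproduced in a few lines. This is a sketch only; the toy masks and the function name `overlay_boundaries` are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi

def overlay_boundaries(rgb, seg_mask, gt_mask):
    """Draw the ground-truth boundary in green and the predicted
    segmentation boundary in blue on an RGB image (cf. Figs. 7-8)."""
    out = rgb.copy()
    for mask, color in ((gt_mask, (0, 255, 0)),    # ground truth: green
                        (seg_mask, (0, 0, 255))):  # our result: blue
        # a 1-pixel contour: mask pixels whose erosion is removed
        contour = mask & ~ndi.binary_erosion(mask)
        out[contour] = color
    return out

# toy example: two slightly offset square "lesions"
img = np.zeros((12, 12, 3), dtype=np.uint8)
gt = np.zeros((12, 12), dtype=bool);  gt[2:8, 2:8] = True
seg = np.zeros((12, 12), dtype=bool); seg[3:9, 3:9] = True
vis = overlay_boundaries(img, seg, gt)
```

Where the two contours overlap, the segmentation color is drawn last, so agreement between the method and the expert shows up as a blue boundary hugging the green one.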

Quantitative Experiment
The accuracy (AC), Jaccard index (JI), dice index (DI), precision (Prec), and recall (Rec) performance measures are used to evaluate the segmentation phase, as presented in Eqs. (5) to (9). In these equations, TP is the true positive rate, TN the true negative rate, FP the false positive rate, and FN the false negative rate.
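All five measures follow directly from the pixel-wise confusion counts; a compact sketch (the toy masks and the helper name are illustrative, and the mapping of Eqs. (5)-(9) to the five measures follows the order they are listed above):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Accuracy, Jaccard, Dice, precision, and recall computed from
    the pixel-wise TP/TN/FP/FN counts of two boolean masks."""
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    return {
        "accuracy":  (tp + tn) / (tp + tn + fp + fn),  # Eq. (5)
        "jaccard":   tp / (tp + fp + fn),              # Eq. (6)
        "dice":      2 * tp / (2 * tp + fp + fn),      # Eq. (7)
        "precision": tp / (tp + fp),                   # Eq. (8)
        "recall":    tp / (tp + fn),                   # Eq. (9)
    }

# toy masks: prediction agrees with ground truth on 2 of 3 lesion pixels
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt   = np.array([[1, 1, 0], [0, 0, 1]], dtype=bool)
m = segmentation_metrics(pred, gt)
```

With these toy masks, TP = 2, TN = 2, FP = 1, FN = 1, giving a Jaccard index of 0.5 and a Dice index of 2/3.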
The experiment is performed on the ISIC 2017 dataset, on a total of 2,000 images. Tab. 3 gives the results for the top 10 images; the overall average accuracy achieved is 95.74%. Correspondingly, the detailed segmentation results for the PH2 dataset are specified in Tab. 4. This experiment is performed on all 200 images of PH2, although results are presented for only a few images; the overall average accuracy is 95.41%.

Skin Lesion Classification Results
After analyzing the segmentation results, the classification performance is evaluated using the statistical performance measures precision, sensitivity, specificity, and accuracy. The machine learning classifiers Fine-Tree, Cubic-SVM, and Fine-KNN are selected, as they gave the best accuracies. As shown in Tab. 5, the selected features achieve an accuracy of 95.1% with the Fine-Tree classifier, 95.9% with Fine-KNN, and 96.9% with Cubic-SVM on the ISIC 2017 dataset. The performance of Cubic-SVM is also verified by the confusion matrix presented in Fig. 9. The same classifiers were applied to the PH2 dataset; as shown in Tab. 6, Cubic-SVM gave the best accuracy of 96.4%, and its performance can be validated from the confusion matrix illustrated in Fig. 10. The green diagonal of the confusion matrices shows the true-class percentages: 94% benign and 99% melanoma in Fig. 9, and 96% atypical nevus, 93% common nevus, and 100% melanoma in Fig. 10. The pink cells denote the samples that were not classified accurately.

Discussion
The proposed framework for detecting skin cancer melanoma has four phases, as shown in Fig. 2. In the first phase, contrast enhancement was performed on all dataset images to increase the contrast of the skin lesions. In the second phase, the proposed segmentation algorithm was applied step by step. In the third phase, deep features were extracted to classify each lesion as benign or melanoma, and in the last phase the classification was performed using machine learning classifiers. In Section 3, the segmentation results are presented graphically and in tabular form. Two freely available online datasets are used for lesion segmentation and classification.
The proposed segmentation method is compared with existing techniques on the ISIC 2017 and PH2 datasets, as defined in Tab. 7. Bi et al. [35] utilized hand-crafted features and joint reverse classification for lesion categorization and achieved 92.0% accuracy on the PH2 dataset. Gutiérrez-Arriola et al. [36] presented a method based on pre-processing that gained 91.0% accuracy on ISIC 2017. Navarro et al. [37] implemented a superpixel-based segmentation method that reached 85.4% accuracy on ISIC 2017. The deep learning pre-trained architectures VGG16 and ResNet are used in [38], where the deep features achieve an accuracy of 93.8%. Moreover, the deep learning architecture Inception is used in [39], whose fused features gained 94.7% accuracy. The proposed method attains accuracies of 95.0% on ISIC 2017 and 93.0% on PH2. Lesion segmentation and classification performance is assessed qualitatively and with several quantitative measures. Our proposed framework achieved a specificity of 0.963, a sensitivity of 0.964, and an accuracy (AC) of 0.96. Besides, for segmentation, the average dice index (DI) was verified as 0.98, which signifies efficient segmentation performance. Furthermore, the comparative analysis against state-of-the-art approaches and the experimental outcomes reveal the proposed framework's dominance.
However, in some cases the proposed segmentation method fails to extract the lesion from healthy skin. Some failure cases from ISIC 2017 are shown in Fig. 11; these occur because the color of the lesion is very close to the skin tone. Medical image processing has offered many solutions to support dermatologists in extracting skin lesion boundaries and performing classification. In this article, an improved region growing method is implemented to segment skin lesions from dermoscopic images. The deep learning VGG-19 model is implemented to extract high-level features, and entropy-based selection is performed to select the most distinctive features. The selected features are further used for classification by SVM. The proposed technique is demonstrated on two freely available datasets, ISIC 2017 and PH2. It is determined that the machine learning SVM classifier performs considerably well with the proposed deep features. In the future, skin lesion images may be taken from mobile devices or the internet and utilized for segmentation and classification. Moreover, other classifiers can also be explored for skin lesion categorization.
Acknowledgement: This work is supported by the Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia, and the authors would like to acknowledge the support of Prince Sultan University for paying the Article Processing Charges (APC) of this publication. This work is also supported by the School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor Bahru, Malaysia. Moreover, the first author is grateful for the support of the Department of Computer Science, Lahore College for Women University, Jail Road, Lahore 54000, Pakistan.
Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest:
The authors declare that they have no conflicts of interest to report regarding the present study.