Open Access

ARTICLE

Sailfish Optimization with Deep Learning Based Oral Cancer Classification Model

Mesfer Al Duhayyim1,*, Areej A. Malibari2, Sami Dhahbi3, Mohamed K. Nour4, Isra Al-Turaiki5, Marwa Obayya6, Abdullah Mohamed7

1 Department of Computer Science, College of Sciences and Humanities-Aflaj, Prince Sattam bin Abdulaziz University, Al-Kharj, 16278, Saudi Arabia
2 Department of Industrial and Systems Engineering, College of Engineering, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
3 Department of Computer Science, College of Science & Art at Mahayil, King Khalid University, Abha, 62529, Saudi Arabia
4 Department of Computer Science, College of Computing and Information System, Umm Al-Qura University, Mecca, 24382, Saudi Arabia
5 Department of Information Technology, College of Computer and Information Sciences, King Saud University, Riyadh, 4545, Saudi Arabia
6 Department of Biomedical Engineering, College of Engineering, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
7 Research Centre, Future University in Egypt, New Cairo, 11845, Egypt

* Corresponding Author: Mesfer Al Duhayyim. Email: email

Computer Systems Science and Engineering 2023, 45(1), 753-767. https://doi.org/10.32604/csse.2023.030556

Abstract

Recently, computer aided diagnosis (CAD) models have become effective decision-making tools in the healthcare sector. Advances in computer vision and artificial intelligence (AI) techniques have enabled the design of CAD models that detect the presence of diseases across various imaging modalities. Oral cancer (OC) is among the most common cancers of the head and neck globally. Early identification of OC improves survival rates and reduces mortality. The design of a CAD model for OC detection and classification is therefore essential. This study introduces a novel Computer Aided Diagnosis for OC using Sailfish Optimization with Fusion based Classification (CADOC-SFOFC) model. The proposed CADOC-SFOFC model determines the existence of OC in medical images. To accomplish this, a fusion based feature extraction process is carried out using the VGGNet-16 and Residual Network (ResNet) models. The resulting feature vectors are fused and passed into an extreme learning machine (ELM) model for classification. Moreover, the sailfish optimization (SFO) algorithm is utilized for effective parameter selection of the ELM model, resulting in enhanced performance. The CADOC-SFOFC model was experimentally evaluated on a Kaggle dataset, and the results demonstrated its superiority over compared methods with a maximum accuracy of 98.11%. The CADOC-SFOFC model therefore has strong potential as an inexpensive, non-invasive tool that supports the screening process and enhances detection efficiency.

Keywords


1  Introduction

Oral cancer (OC) is a lethal disease with high mortality and morbidity, and it belongs to the cancers of the head and neck [1]. Numerous image processing systems are widely utilized for the early diagnosis of OC, which increases the cancer survival rate and treatment efficiency. Medical imaging together with computer-aided detection and diagnosis has made a substantial difference in cancer treatment: the disease can now be diagnosed at an early stage by analyzing magnetic resonance imaging (MRI), X-ray, and computed tomography (CT) images. These modalities make it easier to examine the anatomical structure of the oral cavity and to precisely separate healthy regions from tumor areas. Defining the exact class of OC at an early stage remains a considerably difficult task [2]. Thus, a computer aided application would be extremely advantageous, as it helps medical doctors to offer a comprehensive treatment process and supports disease classification in the healthcare diagnosis process.

Conventionally, cancer treatment has depended mainly upon the grading of tumors. However, grading discrepancies have contributed to imprecise prognosis in OC patients [3]. Despite the rising number of predictive markers, overall disease prediction remains unchanged [4], owing to the challenge of incorporating these markers into the present staging scheme [5]. Better diagnostic and prognostic accuracy assists clinicians in making decisions on the proper treatment for survival [6]. Machine learning (ML) techniques (shallow learning) have been reported to provide better prognostication of OC than conventional statistical analysis [7]. ML techniques can exhibit promising outcomes since they can discern the complicated relations among the variables in a dataset [8]. Considering the touted feasibility and advantages of ML approaches in cancer prognostication, their application has gained considerable interest over the last few decades, and they are poised to help clinicians make decisions, thus promoting good management of patient health. Interestingly, advancing technology has driven a shift from shallow ML to deep learning (DL), which has been touted to further improve the management of cancer [9,10].

Song et al. [11] addressed the reliability of OC image classifiers by employing a Bayesian deep network able to evaluate uncertainty. The method was evaluated on a large intraoral cheek mucosa image dataset captured with a customized device in high-risk populations, demonstrating that meaningful uncertainty information is produced. Tanriver et al. [12] explored the potential applications of computer vision and DL approaches in the OC field using photographic images and examined the prospect of an automated model to identify oral potentially malignant disorders with a two-stage pipeline. Camalan et al. [13] established a DL approach for classifying images as "suspicious" or "normal" by executing transfer learning (TL) on Inception-ResNet-V2, and created automated heat maps to highlight the image regions that most probably contributed to the decision making.

Lim et al. [14] established a new DL structure called D'OraCa for classifying oral lesions using photographic images. They were the first to develop a mouth landmark detection method for oral images and to integrate it into the oral lesion classification method as guidance for improving classification accuracy. The efficacy of five distinct deep convolutional neural networks (DCNNs) was measured, and MobileNetV2 was selected as the feature extractor for the presented mouth landmark detection method. Lin et al. [15] proposed an effective smartphone based image analysis approach, powered by a DL technique, to address the challenge of automatic recognition of oral diseases. First, a simple yet effective centered-rule image capture method was presented to collect oral cavity images. Then, based on this approach, a medium-sized oral dataset with five categories of diseases was generated, and a resampling approach was proposed to lessen the effect of image variability from handheld smartphone cameras. Finally, an existing DL network (HRNet) was introduced to evaluate the performance of the approach for OC recognition.

This study introduces a novel Computer Aided Diagnosis for OC using Sailfish Optimization with Fusion based Classification (CADOC-SFOFC) model. The proposed CADOC-SFOFC model performs a fusion based feature extraction process using the VGGNet-16 and Residual Network (ResNet) models. The resulting feature vectors are fused and passed into an extreme learning machine (ELM) model for the classification process. Moreover, the SFO algorithm is utilized for effective parameter selection of the ELM model, consequently resulting in enhanced performance. The experimental analysis of the CADOC-SFOFC model was conducted on a Kaggle dataset, and the results reported the betterment of the CADOC-SFOFC model over the compared methods.

2  Materials and Methods

In this study, a novel CADOC-SFOFC model has been devised to determine the existence of OC in medical images. Initially, the CADOC-SFOFC model carries out the fusion based feature extraction procedure using the VGGNet-16 and ResNet models. In addition, the feature vectors are fused and passed into the ELM model for the classification process. Finally, the SFO algorithm is utilized for effective parameter selection of the ELM model, as illustrated in Fig. 1.


Figure 1: Block diagram of CADOC-SFOFC model

2.1 Dataset Used

The performance validation of the CADOC-SFOFC model on OC classification is performed using a benchmark dataset from Kaggle repository (available at https://www.kaggle.com/shivam17299/oral-cancer-lips-and-tongue-images). The dataset includes lip and tongue images with two class labels. A total of 87 images come under cancer class and 44 images under non-cancer class. Fig. 2 depicts a sample set of tongue images.


Figure 2: Sample images
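As a practical note, a minimal sketch for loading this dataset and forming the train/test splits used in Section 3 is given below; the folder names and root path are assumptions about the Kaggle archive layout, not verified paths.

```python
# Minimal sketch: load image paths/labels and create a stratified split.
# Folder names ("cancer", "non_cancer") and the root path are assumptions.
import os
from sklearn.model_selection import train_test_split

def list_images(root):
    paths, labels = [], []
    for label, cls in enumerate(["non_cancer", "cancer"]):  # assumed folders
        folder = os.path.join(root, cls)
        for fname in sorted(os.listdir(folder)):
            paths.append(os.path.join(folder, fname))
            labels.append(label)
    return paths, labels

paths, labels = list_images("oral_cancer_dataset")          # assumed root
X_tr, X_ts, y_tr, y_ts = train_test_split(
    paths, labels, test_size=0.3,                           # e.g., a 70:30 split
    stratify=labels, random_state=42)
```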

2.2 Image Pre-Processing

A Gabor filter (GF) is initially employed to preprocess the input images. The GF is an oriented complex sinusoidal grating modulated by a 2-D Gaussian envelope. For 2-D coordinates (a, b), the GF comprises real as well as imaginary components, as given in Eq. (1):

G_{\delta,\theta,\psi,\sigma,\gamma}(a,b) = \exp\left(-\frac{a'^2 + \gamma^2 b'^2}{2\sigma^2}\right) \times \exp\left(j\left(\frac{2\pi a'}{\delta} + \psi\right)\right) (1)

where

a' = a\cos\theta + b\sin\theta (2)

b' = -a\sin\theta + b\cos\theta (3)

where \delta indicates the wavelength, \theta implies the orientation angle of the Gabor kernel, \psi denotes the phase offset, \sigma represents the standard deviation of the Gaussian envelope, and \gamma is the spatial aspect ratio.
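As an illustration of this preprocessing step, a minimal sketch using OpenCV's Gabor kernel is given below; the kernel size and parameter values are illustrative assumptions, not the settings used in this work.

```python
# Minimal Gabor-filter preprocessing sketch (Eq. (1)); parameter values
# here are illustrative assumptions, not the paper's settings.
import cv2
import numpy as np

def gabor_preprocess(gray, wavelength=8.0, theta=0.0, psi=0.0,
                     sigma=4.0, gamma=0.5, ksize=31):
    """Apply one real-valued Gabor filter to a grayscale image."""
    kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                wavelength,   # delta in Eq. (1)
                                gamma, psi, ktype=cv2.CV_32F)
    return cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)

# Typical usage: filter a grayscale image at several orientations.
# gray = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)
# responses = [gabor_preprocess(gray, theta=t)
#              for t in np.linspace(0, np.pi, 4, endpoint=False)]
```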

2.3 Feature Extraction

In this study, two feature extractors, namely the Visual Geometry Group (VGG16) and ResNet models, are applied to produce two feature vectors per image [16,17].
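A minimal sketch of this two-branch extraction is given below, assuming Keras' pretrained VGG16 and ResNet50 as stand-ins for the VGGNet-16 and ResNet backbones of [16,17]; the 224 x 224 input size and ImageNet weights are assumptions.

```python
# Two-branch deep feature extraction sketch; VGG16 and ResNet50 with
# ImageNet weights are assumed stand-ins for the paper's backbones.
import numpy as np
from tensorflow.keras.applications import VGG16, ResNet50
from tensorflow.keras.applications.vgg16 import preprocess_input as vgg_pre
from tensorflow.keras.applications.resnet50 import preprocess_input as res_pre

vgg = VGG16(weights="imagenet", include_top=False, pooling="avg")       # 512-d
resnet = ResNet50(weights="imagenet", include_top=False, pooling="avg")  # 2048-d

def extract_features(images):
    """images: float array of shape (N, 224, 224, 3), RGB in [0, 255]."""
    f_vgg = vgg.predict(vgg_pre(images.copy()), verbose=0)
    f_res = resnet.predict(res_pre(images.copy()), verbose=0)
    return f_vgg, f_res   # two feature vectors per image, fused in Section 2.4
```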

2.4 Feature Fusion and Classification

At this stage, the fusion of features into a single matrix is carried out using a partial least square (PLS) based fusion model [18]. Assume \eta_{Sw}^{(1)} and \eta_{Sw}^{(2)} denote a pair of chosen feature vectors of dimensions X_1 \times K and X_2 \times K, and let \eta_{Sw}^{(j)} be the fused vector of dimension X_3 \times K. Besides, the centered matrices U and V have zero mean, where U \in \eta_{Sw}^{(1)} and V \in \eta_{Sw}^{(2)}. The cross-covariance between U and V is \delta_{uv} = \frac{1}{n-1} U^T V, with \delta_{vu} = \delta_{uv}^T. The PLS retains correlated features to fuse them, and the fusion procedure via PLS reduces the number of predictors. The decomposition model between U and V can be represented using Eqs. (4) and (5):

U = \sum_{i=1}^{d} \eta_i \, \eta_{Sw}^{(1i)T} + E (4)

V = \sum_{i=1}^{d} \eta_i \, \eta_{Sw}^{(2i)T} + F (5)

Using PLS, the two direction vectors u_i and v_i are obtained as given below:

\{u_i, v_i\} = \arg\max_{u^T u = v^T v = 1} \mathrm{Cov}(U^T u, V^T v) (6)

\{u_i, v_i\} = \arg\max_{u^T u = v^T v = 1} u^T \delta_{uv} v, \quad \text{for } i = 1, 2, 3, \ldots, d (7)
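As an illustration of Eqs. (4)-(7), a minimal fusion sketch using scikit-learn's PLSCanonical is given below; the number of components d is an assumption.

```python
# PLS-based feature fusion sketch following Eqs. (4)-(7); scikit-learn's
# PLSCanonical stands in for the PLS formulation, and d is an assumption.
import numpy as np
from sklearn.cross_decomposition import PLSCanonical

def pls_fuse(f_vgg, f_res, d=32):
    """Project both mean-centered feature sets onto d correlated PLS
    directions (Eqs. (6)-(7)) and stack the scores into one matrix."""
    pls = PLSCanonical(n_components=d)       # d <= min(n_samples, n_features)
    u_scores, v_scores = pls.fit_transform(f_vgg, f_res)
    return np.hstack([u_scores, v_scores])   # fused matrix, fed to the ELM

# fused = pls_fuse(f_vgg, f_res)             # (N, 512), (N, 2048) -> (N, 2d)
```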

The resulting direction vectors are integrated into a single matrix, yielding the fused vector of dimension X_3 \times K, denoted \eta_{Sw}^{(j)}. The fused vectors are then fed into the ELM model for classification, which can be formulated as follows. The structure of the ELM is shown in Fig. 3. With L hidden layer nodes, the activation function g(x) can be denoted as follows [19]:

\sum_{i=1}^{L} \beta_i g_i(u_j) = \sum_{i=1}^{L} \beta_i g(u_i \cdot u_j + B_i) (8)

H\beta = O (9)

where L signifies the number of hidden layer nodes, \beta_i indicates the output weight vector, u_i represents the input weight vector entering the hidden layer, B_i implies the offset value, H denotes the hidden layer output matrix, u_i \cdot u_j represents the inner product of u_i and u_j, and O implies the predicted outcome. Eq. (9) can be rewritten as follows:

\hat{\beta}_{ELM} = \arg\min_{\beta} \left\| H\beta - O \right\| (10)

To enhance the stability of the ELM model, the minimization function can be provided using Eq. (11):

\min_{\beta} \; \frac{1}{2}\|\beta\|^2 + \frac{1}{2} c \sum_{i=1}^{N} \epsilon_i^2 \quad \text{s.t.} \quad \beta^T h(u_i) = t_i - \epsilon_i (11)

where \epsilon_i signifies the training error, t_i specifies the label equivalent to sample u_i, and c represents the penalty variable.


Figure 3: Structure of ELM model
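For concreteness, a minimal NumPy sketch of the ELM of Eqs. (8)-(10) is given below, with a sigmoid activation and a pseudo-inverse solution for the output weights; the hidden-layer size and seed are illustrative assumptions.

```python
# Minimal ELM sketch: random hidden-layer weights (Eq. (8)) and output
# weights solved by the Moore-Penrose pseudo-inverse (Eq. (10)).
import numpy as np

class ELM:
    def __init__(self, n_hidden=100, seed=0):
        self.L = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Hidden-layer output H = g(X W + B), with sigmoid g as in Eq. (8)
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.B)))

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.L))  # input weights u_i
        self.B = self.rng.standard_normal(self.L)                # offsets B_i
        O = np.eye(int(y.max()) + 1)[y]                          # one-hot targets
        self.beta = np.linalg.pinv(self._hidden(X)) @ O          # Eq. (10)
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)
```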

2.5 Parameter Tuning

At the final stage, the SFO algorithm is utilized for effective parameter selection of the ELM model, consequently resulting in enhanced performance. SFO is a recent nature-inspired metaheuristic modeled on a group of hunting sailfish (SF) [20], and it demonstrates strong efficacy compared with common metaheuristic approaches. In the SFO approach, each SF is a candidate solution, and the position of an SF in the search region represents the parameters of the problem. The position of the ith SF at the kth search iteration is characterized as SF_{i,k}, and the corresponding fitness is estimated by f(SF_{i,k}). The sardines are also crucial contributors to the SFO method: a school of sardines moves through the search region, the location of the ith sardine is denoted S_i, and its fitness is calculated by f(S_i). In the SFO method, the SF possessing the best location is selected as the elite SF, which affects the acceleration and manoeuvrability of the sardines under attack. Moreover, the injured sardine with the best position in each round is selected as the target for cooperative hunting by the SF. The algorithm thereby avoids premature elimination of solutions. Based on the elite SF and the injured sardine, the new SF position Y_{newSF}^{i} is given by the following equation:

Y_{newSF}^{i} = Y_{eliteSF}^{i} - \lambda_i \times \left( rand(0,1) \times \frac{Y_{eliteSF}^{i} + Y_{injuredS}^{i}}{2} - Y_{currentSF}^{i} \right) (12)

where Y_{currentSF}^{i} indicates the existing location of the SF and rand(0,1) denotes a random value within [0, 1].

The parameter \lambda_i denotes the coefficient at the ith iteration, given by:

\lambda_i = 2 \times rand(0,1) \times SD - SD (13)

where SD represents the sardine density, which reflects the quantity of sardines at each iteration. The parameter SD is given by:

SD = 1 - \frac{N_{SF}}{N_{SF} + N_{S}} (14)

Here, N_{SF} and N_{S} signify the numbers of SF and sardines, respectively. At the start of the hunt, the SF are energetic and the sardines are neither injured nor tired, so the sardines escape quickly. With continuous hunting, however, the attack strength of the SF decreases gradually; meanwhile the sardines tire, and their awareness of the SF positions diminishes, so the sardines are eventually hunted. Following the algorithmic process, the new location of a sardine, Y_{newS}^{i}, is denoted by:

Y_{newS}^{i} = rand(0,1) \times \left( Y_{eliteSF}^{i} - Y_{oldS}^{i} + ATP \right) (15)

where Y_{oldS}^{i} denotes the previous location of the sardine and rand(0,1) characterizes a random value within [0, 1]. ATP indicates the SF attack power, evaluated by:

ATP = B \times \left( 1 - (2 \times Itr \times \epsilon) \right) (16)

where B and \epsilon indicate coefficients employed for reducing the attack power over the range [B, 0], and Itr indicates the iteration number. As the attack power of the SF decreases, the hunting time grows, which decreases the convergence rate. When ATP is high, that is, greater than 0.5, the location of every sardine is updated; otherwise, only \alpha sardines update \beta of their variables. The number of sardines that update their locations is described by:

\alpha = N_{S} \times ATP (17)

where N_{S} is the number of sardines at each iteration. The number of parameters of the sardines to be updated is obtained by:

\beta = d_i \times ATP (18)

where d_i characterizes the number of parameters at the ith iteration. When a sardine is hunted, its fitness is better than that of the SF. In that case, the location of the SF, Y_{SF}^{i}, is updated with the latest location of the hunted sardine, Y_{S}^{i}, for hunting new sardines. This can be expressed by:

Y_{SF}^{i} = Y_{S}^{i} \quad \text{if } f(S_i) < f(SF_i) (19)

For adjusting the ELM parameters, the SFO algorithm computes a fitness function to accomplish maximum classification results. The fitness function is derived from the error rate, and the fitness value should be as low as possible. It is defined as follows:

fitness(x_i) = ClassifierErrorRate(x_i) = \frac{\text{number of misclassified samples}}{\text{total number of samples}} \times 100 (20)
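A condensed sketch of Eqs. (12)-(19) minimizing the fitness of Eq. (20) is given below. The population sizes, B, and \epsilon values are assumptions, `evaluate_error` is a hypothetical helper (e.g., training an ELM with the candidate parameter vector and returning its validation error rate), and some bookkeeping of the full algorithm (removing hunted sardines, per-dimension updates via Eq. (18)) is simplified.

```python
# Condensed SFO sketch; `evaluate_error` is an assumed helper returning
# the Eq. (20) error rate for a candidate ELM parameter vector.
import numpy as np

def sfo_minimize(evaluate_error, dim, lo, hi,
                 n_sf=10, n_s=30, iters=50, B=4.0, eps=0.001, seed=0):
    rng = np.random.default_rng(seed)
    sailfish = rng.uniform(lo, hi, (n_sf, dim))
    sardines = rng.uniform(lo, hi, (n_s, dim))
    sf_fit = np.apply_along_axis(evaluate_error, 1, sailfish)
    s_fit = np.apply_along_axis(evaluate_error, 1, sardines)

    for itr in range(1, iters + 1):
        elite = sailfish[sf_fit.argmin()]              # best SF so far
        injured = sardines[s_fit.argmin()]             # best (injured) sardine
        SD = 1.0 - n_sf / (n_sf + n_s)                 # Eq. (14)
        for i in range(n_sf):
            lam = 2.0 * rng.random() * SD - SD         # Eq. (13)
            sailfish[i] = elite - lam * (              # Eq. (12)
                rng.random() * (elite + injured) / 2.0 - sailfish[i])
        ATP = B * (1.0 - 2.0 * itr * eps)              # Eq. (16)
        n_upd = n_s if ATP >= 0.5 else max(1, int(n_s * ATP))  # Eq. (17)
        for i in rng.choice(n_s, size=n_upd, replace=False):
            # Eq. (15); per-dimension selection of Eq. (18) omitted for brevity
            sardines[i] = rng.random() * (elite - sardines[i] + ATP)
        sailfish = np.clip(sailfish, lo, hi)
        sardines = np.clip(sardines, lo, hi)
        sf_fit = np.apply_along_axis(evaluate_error, 1, sailfish)
        s_fit = np.apply_along_axis(evaluate_error, 1, sardines)
        if s_fit.min() < sf_fit.max():                 # Eq. (19), simplified:
            j, k = sf_fit.argmax(), s_fit.argmin()     # worst SF takes the
            sailfish[j], sf_fit[j] = sardines[k], s_fit[k]  # hunted sardine's spot
    return sailfish[sf_fit.argmin()]
```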

3  Results and Discussion

In this section, the experimental validation of the CADOC-SFOFC model is performed using the benchmark dataset from the Kaggle repository. A set of four confusion matrices achieved by the CADOC-SFOFC model on distinct sizes of training/testing (TR/TS) data is illustrated in Fig. 4. With TR/TS data of 90:10, the CADOC-SFOFC model has recognized 4 instances under the cancer class and 9 instances under the non-cancer class.


Figure 4: Confusion matrices of CADOC-SFOFC model

At the same time, with TR/TS data of 80:20, the CADOC-SFOFC model has recognized 18 instances under the cancer class and 8 instances under the non-cancer class. Next, with TR/TS data of 70:30, the CADOC-SFOFC model has recognized 25 instances under the cancer class and 13 instances under the non-cancer class. Lastly, with TR/TS data of 60:40, the CADOC-SFOFC model has recognized 38 instances under the cancer class and 14 instances under the non-cancer class.

Tab. 1 and Fig. 5 exhibit the detailed OC classification outcomes of the CADOC-SFOFC model on distinct sizes of TR/TS data. The experimental outcomes imply that the CADOC-SFOFC model has gained effectual outcomes on the various TR/TS splits.


Figure 5: OC classification of CADOC-SFOFC model under distinct TR/TS data

For instance, with TR/TS of 90:10, the CADOC-SFOFC model has classified cancer images with accuracy, precision, recall, specificity, and F-score of 92.86%, 100%, 80%, 100%, and 88.89% respectively. Likewise, under TR/TS of 80:20, the CADOC-SFOFC model has classified cancer images with accuracy, precision, recall, specificity, and F-score of 96.30%, 94.74%, 100%, 88.89%, and 97.30% respectively. Moreover, with TR/TS of 70:30, the CADOC-SFOFC model has classified cancer images with accuracy, precision, recall, specificity, and F-score of 95%, 100%, 92.59%, 100%, and 96.15% respectively. At last, with TR/TS of 60:40, the CADOC-SFOFC model has classified cancer images with accuracy, precision, recall, specificity, and F-score of 98.11%, 97.44%, 100%, 93.33%, and 98.70% respectively. In addition, with TR/TS of 80:20, the CADOC-SFOFC model has provided average accuracy, precision, recall, specificity, and F-score of 96.30%, 97.37%, 94.44%, 94.44%, and 95.71% respectively. Furthermore, with TR/TS of 70:30, the CADOC-SFOFC model has provided average accuracy, precision, recall, specificity, and F-score of 95%, 93.33%, 96.30%, 96.30%, and 95.71% respectively.
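For reference, each F-score follows from the corresponding precision P and recall R as their harmonic mean; for the cancer class under the 80:20 split this gives:

F\text{-score} = \frac{2 P R}{P + R} = \frac{2 \times 0.9474 \times 1.0000}{0.9474 + 1.0000} \approx 0.9730 = 97.30\%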

Fig. 6 demonstrates the training and validation accuracies of the CADOC-SFOFC model on the test dataset. The training and validation accuracies are measured over varying numbers of epochs. It is exhibited that the CADOC-SFOFC model has gained increased values of training and validation accuracy.


Figure 6: Training and validation accuracies of CADOC-SFOFC model

Fig. 7 validates the training and validation losses of the CADOC-SFOFC model on the test dataset. The training and validation losses are determined over a rising number of epochs. It is displayed that the CADOC-SFOFC model has attained reduced values of training and validation loss.


Figure 7: Training and validation losses of CADOC-SFOFC model

Fig. 8 highlights the ROC curves of the CADOC-SFOFC model obtained under distinct sizes of TR/TS data. The figures report that the CADOC-SFOFC model has accomplished effectual OC classification under the two classes, namely cancer and non-cancer. It is also noticed that the CADOC-SFOFC model has gained maximum ROC values under both classes.


Figure 8: ROC curves of CADOC-SFOFC model under distinct TR/TS data

Fig. 9 demonstrates the precision-recall curves of the CADOC-SFOFC model attained under different sizes of TR/TS data. The results show that the CADOC-SFOFC model has reached maximum OC classification under the two classes, namely cancer and non-cancer. It is observed that the CADOC-SFOFC model has showcased improved precision-recall values under both classes.


Figure 9: Precision-recall curves of CADOC-SFOFC model under distinct TR/TS data

For assessing the enhanced outcomes of the CADOC-SFOFC model, a comparison study with recent models [21–23] is made in Tab. 2 and Fig. 10. The results imply that the ADCO-DL, Inception-v4, and DenseNet models have reached lower OC classification results. Next, the artificial neural network (ANN)-support vector machine (SVM) and C-Net models showcased moderately improved classification outcomes. Though the random forest (RF) model has accomplished reasonable OC classification results with accuracy, precision, recall, and F-score of 97.09%, 92.34%, 93.86%, and 94.09%, the CADOC-SFOFC model has surpassed the existing methodologies with maximum accuracy, precision, recall, and F-score of 98.11%, 98.72%, 96.67%, and 97.63% respectively.


Figure 10: Comparative analysis of CADOC-SFOFC model with recent approaches

The enhanced performance of the CADOC-SFOFC model is due to the inclusion of the SFO based parameter optimization process. From the above results and discussion, it is evident that the CADOC-SFOFC model outperforms the other methods with improved OC classification outcomes.

4  Conclusion

In this study, a novel CADOC-SFOFC model has been devised to determine the existence of OC in medical images. Initially, the CADOC-SFOFC model carries out the fusion based feature extraction procedure using the VGGNet-16 and ResNet models. In addition, the feature vectors are fused and passed into the ELM model for the classification process. Finally, the SFO algorithm is utilized for effective parameter selection of the ELM model, consequently resulting in enhanced performance. The experimental analysis of the CADOC-SFOFC model was conducted on a Kaggle dataset, and the results reported the betterment of the CADOC-SFOFC model over the compared methods. Therefore, the CADOC-SFOFC model has strong potential as an inexpensive and non-invasive tool which supports the screening process and enhances detection efficiency. In future, the detection efficiency can be improved by the design of advanced DL based classifier models.

Funding Statement: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (RGP 2/142/43). Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R151), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: 22UQU4310373DSR13. This research project was supported by a grant from the Research Center of the Female Scientific and Medical Colleges, Deanship of Scientific Research, King Saud University.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

  1. Y. Kim, J. W. Kang, J. Kang, E. J. Kwon, M. Ha et al., “Novel deep learning-based survival prediction for oral cancer by analyzing tumor-infiltrating lymphocyte profiles through CIBERSORT,” OncoImmunology, vol. 10, no. 1, p. 1904573, 2021.
  2. S. B. Khanagar, S. Naik, A. A. A. Kheraif, S. Vishwanathaiah, P. C. Maganur et al., “Application and performance of artificial intelligence technology in oral cancer diagnosis and prediction of prognosis: A systematic review,” Diagnostics, vol. 11, no. 6, p. 1004, 2021.
  3. M. P. Kirubabai and G. Arumugam, “Deep learning classification method to detect and diagnose the cancer regions in oral MRI images,” Medico-Legal Update, vol. 21, no. 1, p. 462, 2021.
  4. R. Dharani and S. Revathy, “DEEPORCD: Detection of oral cancer using deep learning,” in Journal of Physics: Conf. Series, Int. Conf. on Innovative Technology for Sustainable Development 2021 (ICITSD 2021), Chennai, India, vol. 1911, p. 012006, 2021.
  5. X. R. Zhang, X. Sun, W. Sun, T. Xu and P. P. Wang, “Deformation expression of soft tissue based on BP neural network,” Intelligent Automation & Soft Computing, vol. 32, no. 2, pp. 1041–1053, 2022.
  6. X. R. Zhang, J. Zhou, W. Sun and S. K. Jha, “A lightweight CNN based on transfer learning for COVID-19 diagnosis,” Computers Materials & Continua, vol. 72, no. 1, pp. 1123–1137, 2022.
  7. R. K. Gupta and J. Manhas, “Improved classification of cancerous histopathology images using color channel separation and deep learning,” Journal of Multimedia Information System, vol. 8, no. 3, pp. 175–182, 2021.
  8. G. E. V. Landivar, J. A. B. Caballero, D. H. P. Moran, M. A. Q. Martinez and M. Y. L. Vazquez, “An analysis of deep learning architectures for cancer diagnosis,” in CIT 2020: Artificial Intelligence, Computer and Software Engineering Advances, Advances in Intelligent Systems and Computing Book Series. Vol. 1326. Cham: Springer, pp. 19–33, 2021.
  9. J. H. Yoo, H. G. Yeom, W. Shin, J. P. Yun, J. H. Lee et al., “Deep learning based prediction of extraction difficulty for mandibular third molars,” Scientific Reports, vol. 11, no. 1, p. 1954, 2021.
  10. K. B. Bernander, J. Lindblad, R. Strand and I. Nyström, “Replacing data augmentation with rotation-equivariant CNNs in image-based classification of oral cancer,” in Iberoamerican Congress on Pattern Recognition, CIARP 2021: Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, Lecture Notes in Computer Science Book Series. Vol. 12702. Cham: Springer, pp. 24–33, 2021.
  11. B. Song, S. Sunny, S. Li, K. Gurushanth, P. Mendonca et al., “Bayesian deep learning for reliable oral cancer image classification,” Biomedical Optics Express, vol. 12, no. 10, p. 6422, 2021.
  12. G. Tanriver, M. S. Tekkesin and O. Ergen, “Automated detection and classification of oral lesions using deep learning to detect oral potentially malignant disorders,” Cancers, vol. 13, no. 11, p. 2766, 2021.
  13. S. Camalan, H. Mahmood, H. Binol, A. L. D. Araújo, A. R. S. Silva et al., “Convolutional neural network-based clinical predictors of oral dysplasia: Class activation map analysis of deep learning results,” Cancers, vol. 13, no. 6, p. 1291, 2021.
  14. J. H. Lim, C. S. Tan, C. S. Chan, R. A. Welikala, P. Remagnino et al., “D’OraCa: deep learning-based classification of oral lesions with mouth landmark guidance for early detection of oral cancer,” in Annual Conf. on Medical Image Understanding and Analysis, MIUA 2021: Medical Image Understanding and Analysis, Lecture Notes in Computer Science Book Series, Cham, Springer, vol. 12722, pp. 408–422, 2021.
  15. H. Lin, H. Chen, L. Weng, J. Shao and J. Lin, “Automatic detection of oral cancer in smartphone-based images using deep learning for early diagnosis,” Journal of Biomedical Optics, vol. 26, no. 08, pp. 086007–086017, 2021.
  16. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556, 2014.
  17. K. He, X. Zhang, S. Ren and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 770–778, 2016.
  18. M. A. Khan, I. Ashraf, M. Alhaisoni, R. Damaševičius, R. Scherer et al., “Multimodal brain tumor classification using deep learning and robust feature selection: A machine learning application for radiologists,” Diagnostics, vol. 10, no. 8, p. 565, 2020.
  19. S. Aziz, E. A. Mohamed and F. Youssef, “Traffic sign recognition based on multi-feature fusion and ELM classifier,” Procedia Computer Science, vol. 127, no. 13, pp. 146–153, 2018.
  20. S. Shadravan, H. R. Naji and V. K. Bardsiri, “The sailfish optimizer: A novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems,” Engineering Applications of Artificial Intelligence, vol. 80, no. 14, pp. 20–34, 2019.
  21. G. Tanriver, M. S. Tekkesin and O. Ergen, “Automated detection and classification of oral lesions using deep learning to detect oral potentially malignant disorders,” Cancers, vol. 13, no. 11, p. 2766, 2021.
  22. R. A. Welikala, P. Remagnino, J. H. Lim, C. S. Chan, S. Rajendran et al., “Automated detection and classification of oral lesions using deep learning for early detection of oral cancer,” IEEE Access, vol. 8, pp. 132677–132693, 2020.
  23. S. Panigrahi, J. Das and T. Swarnkar, “Capsule network based analysis of histopathological images of oral squamous cell carcinoma,” Journal of King Saud University - Computer and Information Sciences, vol. 25, no. 6, p. S1319157820305280, 2020.

Cite This Article

M. A. Duhayyim, A. A. Malibari, S. Dhahbi, M. K. Nour, I. Al-Turaiki et al., "Sailfish optimization with deep learning based oral cancer classification model," Computer Systems Science and Engineering, vol. 45, no.1, pp. 753–767, 2023.


This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.