Open Access
ARTICLE
Hybrid MNLTP Texture Descriptor and PDCNN-Based OCT Image Classification for Retinal Disease Detection
1 Department of E&TC, Symbiosis Institute of Technology, Pune Campus, Symbiosis International (Deemed University), Pune, 412115, India
2 Symbiosis Skills and Professional University, School of Mechatronics, Pune, 412101, India
3 Department of Electrical Engineering, College of Engineering, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
4 Department of Electrical Engineering, College of Engineering, King Khalid University, Abha, 61421, Saudi Arabia
5 Center for Engineering and Technology Innovations, King Khalid University, Abha, 61421, Saudi Arabia
* Corresponding Author: Anurag Mahajan. Email:
Computers, Materials & Continua 2025, 82(2), 2831-2847. https://doi.org/10.32604/cmc.2025.059350
Received 04 October 2024; Accepted 02 January 2025; Issue published 17 February 2025
Abstract
Retinal Optical Coherence Tomography (OCT), a non-invasive imaging technique, has become a standard tool for retinal disease detection. Disease causes morphological and textural changes in the retinal layers. Classifying OCT images is challenging, as the morphological manifestations of different diseases may be similar. OCT images capture the reflectivity characteristics of retinal tissue; because retinal diseases alter this reflectivity, they produce texture variations in the images. We propose a hybrid approach to OCT image classification in which a Convolutional Neural Network (CNN) is trained on Multiple Neighborhood Local Ternary Pattern (MNLTP) texture descriptors of the OCT images to build a robust disease prediction system. A Parallel Deep CNN (PDCNN) architecture is proposed to improve feature representation and generalizability. The MNLTP-PDCNN model is tested on two publicly available datasets, and performance is evaluated using Accuracy, Precision, Recall, and F1-Score. The best overall accuracy is 93.98% on the NEH dataset and 99% on the OCT2017 dataset. The proposed architecture achieves comparable performance using only a subset of the original OCT2017 dataset and far fewer trainable parameters than off-the-shelf CNN models (1.6 million, 1.8 million, and 2.3 million for one, two, and three parallel CNN branches, respectively). Hence, the proposed approach is suitable for real-time OCT image classification systems, offering fast CNN training and a reduced memory footprint.
Keywords
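To illustrate the texture-descriptor stage, the sketch below computes a basic Local Ternary Pattern (LTP) over a grayscale image: each neighbor is encoded as +1, 0, or −1 relative to the center pixel within a tolerance `t`, and the ternary code is split into "upper" and "lower" binary patterns. This is a minimal single-neighborhood sketch only; the paper's MNLTP combines patterns from multiple neighborhood sizes, and the function and parameter names here (`ltp_codes`, `t`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ltp_codes(img, t=5):
    """Upper/lower Local Ternary Pattern codes for the interior pixels
    of a grayscale image (simplified single-neighborhood sketch)."""
    img = img.astype(np.int32)          # avoid uint8 overflow in comparisons
    h, w = img.shape
    upper = np.zeros((h - 2, w - 2), dtype=np.uint8)
    lower = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # 8 neighbors of each interior pixel, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        upper |= (nb >= center + t).astype(np.uint8) << bit  # ternary +1
        lower |= (nb <= center - t).astype(np.uint8) << bit  # ternary -1
    return upper, lower
```

Histograms of such upper/lower codes (over several neighborhood sizes, per MNLTP) would then form the texture representation fed to the parallel CNN branches.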
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.