Enhanced Detection of Glaucoma on Ensemble Convolutional Neural Network for Clinical Informatics

Irretrievable loss of vision is the predominant outcome of glaucoma in the retina. Recently, multiple approaches have addressed the automatic detection of glaucoma in fundus images. Owing to the interlacing of blood vessels and the difficulty of the task, exactly locating the affected site of the optic disc, whether the cup is small or large, remains challenging. A Spatially Based Ellipse Fitting Curve Model (SBEFCM) is proposed, combined with Ensemble-based classification, for reliable diagnosis of glaucoma from the Optic Cup (OC) and Optic Disc (OD) boundaries. This research deploys Ensemble Convolutional Neural Network (CNN) classification to distinguish glaucoma from Diabetic Retinopathy (DR). The boundary between the OC and the OD is detected by the SBEFCM, a new spatially weighted ellipse fitting model that enhances and extends the multi-ellipse fitting technique. The input fundus image is preprocessed, and the blood vessels are segmented to avoid interlacing with the surrounding tissues. The OC and OD boundaries thus obtained characterize the output factors for glaucoma detection; the Ensemble CNN classification then yields sensitivity, specificity, precision, and Area Under the receiver operating characteristic Curve (AUC) values accurately through the innovative SBEFCM. In comparative experiments, the proposed Ensemble CNN significantly outperformed the existing methods.


Introduction
Computer-aided diagnostics optimizes diagnostic speed and assists in locating a specific affected area. Prolonged diabetes mellitus damages blood vessels, causing Diabetic Retinopathy (DR), which affects the rear side of the eye (the retina). The retina then produces new, abnormal blood vessels. If blood vessels grow on the iris, fluid flow is blocked and pressure rises inside the eye, indicating neovascular glaucoma. Glaucoma Detection (GD) is very challenging because there is no pain or symptom at the initial stage, and vision remains normal. Only in the advanced stage do patients lose as much as 70.19% of their vision. Periodical eye screening to detect glaucoma early is therefore indispensable. Ophthalmologists extensively use fundus photography for DR detection [1].
Medical diagnosis commonly follows three components: preprocessing, feature selection, and disease classification. Exactly localizing the damaged OD, with either a too-large or too-small cup, is the challenging aspect of GD. As shown in Fig. 1, various clinical features such as Micro Aneurysms (MA), hard and soft exudates, and haemorrhages are found in DR. Feature extraction is therefore believed to be a significant part of GD [2]. Based on ischemic change and vessel degeneration, DR is classified into Proliferative Diabetic Retinopathy (PDR) and Non-Proliferative Diabetic Retinopathy (NPDR). Mild (the existence of MAs), moderate, and acute stages are the sub-categories of NPDR, while PDR is the advanced stage of DR. Since the blood vessels are interconnected, optic cup boundary extraction is considered a complex task [3,4].
Previous research applied ground-breaking techniques such as Particle Swarm Optimization and an enhanced ensemble deep CNN to segment the OD in retinal images. The ensemble segmentation technique overcomes directly connected bias [5]. Spatial-aware joint image segmentation solves the dual issues of the optic nerve head: the variant spatial layout of vessels and the small, spatially sparse OC boundaries [6]. The multiple indices of the optic nerve head are measured by the Multitasking Collaborative Learning Network (MCL-Net) because of the extremely poor contrast among optic head regions and their substantial overlap; however, identifying the OC/OD boundary in indistinct glaucoma fundus images remains difficult [7]. A one-stage multi-label model resolves OC and OD segmentation problems using a multi-label Deep Learning (DL) framework, M-Net. The vertical OD and OC ratios achieved greater accuracy than the horizontal ones [8]. However, because the available datasets are small, DL techniques are usually not used for glaucoma assessment in medical image analysis. The Retinal Fundus Glaucoma (REFUGE) challenge, which contains 1200 fundus images, resolved this problem with a vast dataset.
Two initial tasks were performed: OD/OC segmentation and glaucoma classification [9]. To incorporate the optic disc boundary and integrate the global fundus with its profound hierarchical context, a DL model based on the Disc-aware Ensemble Network (DENet) was further developed. Even in the absence of segmentation, DENet enables accurate GD [10]. Since the OD and OC need not be segmented directly, they are detected and evaluated as minimum bounding boxes, similar to ellipses inscribed within the boxes. The OD and OC bounding-box detection process uses a joint Region-Based Convolutional Neural Network (RCNN), namely Faster RCNN [11]. An innovative CNN-based Weakly Supervised Multitask Learning (WSMTL) approach implemented a multi-scale fundus image representation. Classification is performed on binary diagnosis core features such as a pixel-level evidence map, diagnosis prediction, segmentation mask, and the normal/glaucoma label [12]. Glaucoma is the second foremost cause of blindness; a user-friendly app called Yanbao was developed for premium-quality glaucoma screening [13][14][15][16]. The critical problems inside retinal imagery are the complicated blood vessel structure and acutely intensified similarity. The Locally Statistical Active Contour Model (LSACM) overcomes them by integrating probability information based on multi-dimensional features. Retinal diseases such as glaucoma, macular degeneration, and DR are detected early in fundus images using the crucial steps of optic disc segmentation and localization; a new CNN is used for OD segmentation [17]. Two classical Mask Region-Based Convolutional Neural Networks (Mask R-CNN) were generated to enhance optic nerve head and OD/OC segmentation in retinal fundus images.
This study introduced an advanced technique that crops the output in association with the actual training images at different scales; the vertical OC-to-OD ratio is computed for glaucoma diagnosis to enhance detection [18][19][20].
OD segmentation in retinal fundus imagery is framed as a regional classification problem with the aid of a systematic classification pipeline. The area surrounding the image is characterized in a classification framework using textural and statistical properties; contrary to OD localization with its multi-layered challenges, this technique is promising [21]. Semantic segmentation is done using the Densely Connected Convolutional Network (Dense-Net) incorporated into a fully convolutional network, the Fully Convolutional Dense Network (FC-Dense-Net), designed for this purpose. Hence, automatic OD and OC segmentation was performed effectively in this study. The automated classification results are enhanced by the vertical OD/OC cup-to-disc ratio and the horizontal OC-to-OD ratio.
The complete profile of the OD is, however, used to enhance the diagnostic results [22]. OD segmentation challenges occur because of the presence of vessels and can be overcome using pre-processing techniques such as morphological closing and opening or histogram equalization. Nevertheless, OC segmentation is comparatively more complicated than OD segmentation because the optic cup interlaces with neighboring tissues and blood vessels; for an exact diagnosis of glaucoma, improved segmentation techniques are therefore indispensable [23]. The precision of glaucoma diagnosis is enhanced using various images. However, only some filters have been trained, so the full potential is not achieved, leaving no assurance that every variable of the retinal imagery is captured from the given images [24]. An intensity- and texture-based feature extraction method mines specific features, which are then classified using Magnetic Resonance Imaging (MRI) and Support-Vector Machine (SVM) performance [25]. To obtain an outcome that ascertains the association among the attributes, an appropriate classifier algorithm is required [26][27][28][29][30]. An SVM based on the Radial Basis Function (RBF) kernel is used to develop another multiclass lesion classification system, and the lesions are classified into normal and abnormal classes using a hybrid color image structure descriptor on retinal images [31][32][33][34][35].
The complexity of OC boundary extraction caused by the interweaving of blood vessels is the key challenge found in the existing studies. Usually, pre-processing, feature selection, and disease classification are adopted in medical diagnosis. In this research work, the exact localization of the damaged OD, depending on the too-small or too-large cup, is the greatest challenge in detecting glaucoma. The proposed Ensemble classification includes an SBEFCM that overcomes these challenges. The key contributions of the research are:
• To segment the OD and OC accurately using the SBEFCM algorithm.
• The SBEFCM algorithm is used to detect DR or Glaucoma in retinal images.
• To integrate the current methodological approach with new methods and tools.
The recommended Ensemble CNN classification with SBEFCM is described in Section 2, followed by its assessment and comparison with other techniques in Section 3. Lastly, Section 4 concludes the paper.

Proposed Methodology
The glaucoma detection method is implemented by Ensemble CNN classification in this research work. In addition, a new spatially weighted ellipse fitting model detects the OD and OC boundaries. First, pre-processing is performed on the input fundus image, and interweaving with neighbouring tissues and blood vessels is prevented by performing blood vessel segmentation. Fig. 2 shows the overall flow of the proposed system:

Fundus Image Training
The actual input fundus image contains low-intensity false regions. The denoising technique transforms the image into a domain where thresholding makes noise easy to recognize; the inverse transform is then applied to rebuild the denoised image. Denoising the image while preserving the vessel edges is therefore indispensable for eliminating the false regions [36][37][38][39].

Segmentation of Vessel
The fixed threshold scheme is used to extract the vascular structure. In this scheme, the decision variable is a specific constant used for segmentation. If I(x,y) is a grayscale image, the binary image bw(x,y) is obtained by the fixed threshold T as in Eq. (1): bw(x,y) = 1 if I(x,y) ≥ T, and bw(x,y) = 0 otherwise.
Each dataset has its own fixed threshold parameter value, which varies from dataset to dataset. Several threshold values are tried on each individual dataset and evaluated for accuracy; the threshold producing the greatest accuracy on the given dataset is finally selected.
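As a minimal illustration of the fixed-threshold rule of Eq. (1), the following Python sketch binarizes a toy grayscale grid. The function name and the list-of-lists image format are illustrative assumptions, not from the paper:

```python
def fixed_threshold(image, t):
    """Binarize a grayscale image: pixel -> 1 if intensity >= t, else 0 (Eq. 1)."""
    return [[1 if px >= t else 0 for px in row] for row in image]

# A 2x3 toy grayscale image with intensities in [0, 255]
gray = [
    [10, 200, 30],
    [180, 90, 220],
]
bw = fixed_threshold(gray, 100)
# bw == [[0, 1, 0], [1, 0, 1]]
```

In practice the constant t would be the dataset-specific threshold selected by the accuracy sweep described above.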

Post-Processing
Removing spur pixels, gap filling, and area filtering are the three post-processing operations applied to enhance system performance. The area filtering operation removes minor, isolated areas that do not belong to the vessel structure. First, pixels are labeled into components using Connected Component Analysis based on 8-way pixel connectivity. Then, each labeled component is classified as vessel/non-vessel by measuring its area. Next, spur pixels are removed at the edges of the vessel structure. Finally, a morphological closing operation with a disk-shaped structuring element is applied to fill small gaps.
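The area-filtering step above can be sketched in pure Python. The 8-way Connected Component Analysis is implemented here with a breadth-first search; the function name and the minimum-area parameter are illustrative assumptions:

```python
from collections import deque

def area_filter(bw, min_area):
    """Remove 8-connected components smaller than min_area pixels."""
    h, w = len(bw), len(bw[0])
    out = [[0] * w for _ in range(h)]
    seen = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if bw[i][j] and not seen[i][j]:
                # BFS over the 8-neighbourhood to collect one component
                comp, q = [], deque([(i, j)])
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and bw[ny][nx] and not seen[ny][nx]):
                                seen[ny][nx] = True
                                q.append((ny, nx))
                if len(comp) >= min_area:  # keep only vessel-sized components
                    for y, x in comp:
                        out[y][x] = 1
    return out
```

A production pipeline would typically use a library routine (e.g. connected-component labeling in scikit-image or OpenCV) rather than this hand-rolled BFS.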

Spatially Based Ellipse Fitting Curve Model
The multi-ellipse fitting technique is optimized and extended by the proposed SBEFCM. The binary image I characterizes a Two-Dimensional (2D) shape. Each pixel p of I belongs either to the background B_g (I_p = 0) or to the foreground F_g (I_p = 1). Eq. (2) represents the 2D shape area.
Moreover, consider a set of s ellipses E_n, each with area |E_n|. The ellipses are represented by a binary image bw_im: bw_im^p = 1 if pixel p belongs to any of the ellipses E_n, and bw_im^p = 0 otherwise. For the given set of ellipses E, the coverage α_E of the 2D shape is defined as in Eq. (3).
Thus α_E is the percentage of the 2D shape points covered by the ellipses E. Let |E| be the total area of all ellipses, |E| = Σ_{n=1}^{s} |E_n|, and let C_area denote the area covered by the union of all ellipses, expressed in Eq. (4). In general C_area ≤ |E|, with equality holding when the ellipses are pairwise disjoint: |E| counts an intersection area twice, while C_area counts it once. The multi-ellipse fitting method evaluates the set of parameters E* of s ellipses E*_n that maximizes the coverage α_{E*} of Eq. (3) under an equal-area constraint, so that |E*| = Area_2D. In contrast, SBEFCM maximizes α_{E*} for a set of ellipses E* whose covered area C*_area is as close as possible to Area_2D, as expressed in Eq. (5).
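A minimal sketch of the coverage α_E of Eq. (3), assuming axis-aligned ellipses parameterized as (center, semi-axes) for simplicity; the model itself fits general (rotated) ellipses:

```python
def inside_ellipse(p, e):
    """True if point p = (x, y) lies inside axis-aligned ellipse e = (cx, cy, a, b)."""
    cx, cy, a, b = e
    x, y = p
    return ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 <= 1.0

def coverage(foreground, ellipses):
    """Shape coverage alpha_E (Eq. 3): fraction of foreground points
    covered by at least one ellipse in the set."""
    covered = sum(1 for p in foreground if any(inside_ellipse(p, e) for e in ellipses))
    return covered / len(foreground)

# Three foreground pixels, one ellipse covering two of them -> alpha_E = 2/3
alpha = coverage([(0, 0), (1, 0), (5, 5)], [(0, 0, 2.0, 2.0)])
```

Note that points inside more than one ellipse are counted once, which is exactly the distinction between C_area and |E| made above.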
The Akaike Information Criterion (AIC) trades off the shape coverage α_E against the complexity of the model to assess the optimal number of ellipses s. Circles centered on the 2D shape skeleton are used to initialize the candidates. Using the AIC-based model selection criterion, the quantity in Eq. (6) is minimized over all possible numbers of ellipses s.
Intuitively, a balance is obtained between shape coverage and model complexity: many ellipses are justified only when they clearly improve the shape approximation. The model selection process and the shape complexity measure are invariant to translation, only mildly affected by rotation, and affected by changes in scale solely through resolution/quantization problems.
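The AIC trade-off can be illustrated with a hedged sketch. The exact error term of Eq. (6) is not reproduced here; instead a simple log-likelihood proxy on the uncovered fraction is used, with the usual 5 free parameters per ellipse, so the numbers are purely illustrative:

```python
import math

def aic_score(n_points, coverage, n_ellipses, params_per_ellipse=5):
    """AIC-style score: 2*k model-complexity penalty plus an
    approximation-error term derived from the uncovered fraction.
    This error proxy is an assumption, not the paper's exact Eq. (6)."""
    k = params_per_ellipse * n_ellipses
    err = max(1.0 - coverage, 1e-12)  # uncovered fraction as error proxy
    return 2 * k + 2 * n_points * math.log1p(err)

def best_model(n_points, candidates):
    """candidates: list of (n_ellipses, coverage); return the ellipse
    count whose model attains the minimal AIC score."""
    return min(candidates, key=lambda c: aic_score(n_points, c[1], c[0]))[0]

# One ellipse covers 70%, two cover 95%, six cover 96%:
# the small coverage gain of six ellipses does not pay for the complexity.
choice = best_model(1000, [(1, 0.70), (2, 0.95), (6, 0.96)])
# choice == 2
```

This reproduces the behavior described above: extra ellipses are selected only when they buy a substantial coverage improvement.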

Spatially Based Ellipse Fitting Curve Model Algorithm
The SBEFCM functions very similarly to the multi-ellipse fitting method and is summarized in the following: (a) Skeleton Extraction: First, the 2D medial axis/skeleton S_k of the shape is computed, providing essential information about the ellipse parameters that approximate the actual shape. Medial-axis skeletonization is sensitive to minor boundary perturbations: a small perturbation of the shape boundary can cause a critical skeleton branch. Shape thinning with a closing morphological filter, instead of medial-axis skeletonization, mitigates this problem. (b) Normalization of Ellipse Hypotheses: SBEFCM defines a set of circles, the Cluster Circles (CC), used as ellipse hypotheses. The CC centers are placed on S_k, and the CC radii are given by the minimum distance of the centers from the shape contour. The circles are considered for inclusion in CC in descending order of radius. Initially, CC = ∅; a circle is added only when its overlap with the already selected circles is under a certain threshold. To reduce the complexity and cardinality of SBEFCM, circles with a radius less than 3% of the maximum are disregarded. This generates a large number of initial circle hypotheses as an upper bound for the ellipse fitting. (c) Ellipse Hypotheses Evolution: To estimate the parameters of the ellipses in E covering the 2D shape with coverage α_E, a Gaussian Mixture Model Expectation-Maximization technique is deployed. This is achieved in two steps: (1) allocating shape points to ellipses and (2) estimating the ellipse parameters.
• Allocating Shape Points to Ellipses: A point t is allocated to an ellipse E_n when t lies within that ellipse, as given by Eq. (7), where, from Eq. (8), k_n is the center of the ellipse E_n and t' is the intersection with the boundary of E_n of the line connecting k_n and t. The term ||·|| denotes 2D vector length. Therefore, for points t placed on the boundary of E_n, F(t, E_n) = 1.0. The ellipses may overlap, so a shape point t may be associated with more than one ellipse.
• Estimating the Ellipse Parameters: Given the assignment step, the parameters of each ellipse E_n are updated directly from the second-order moments of the points allocated to that ellipse.
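The assignment step can be sketched as follows, again assuming axis-aligned ellipses for brevity (the paper's F(t, E_n) of Eqs. (7)-(8) handles general ellipses via the center-to-boundary distance ratio). Function and parameter names are illustrative:

```python
def assign_points(points, ellipses):
    """E-step sketch: map each shape point to the indices of every ellipse
    that contains it. Points in overlap regions belong to several ellipses,
    matching the overlapping-assignment behavior described in the text."""
    def inside(p, e):  # axis-aligned ellipse (cx, cy, a, b)
        cx, cy, a, b = e
        return ((p[0] - cx) / a) ** 2 + ((p[1] - cy) / b) ** 2 <= 1.0
    return {p: [i for i, e in enumerate(ellipses) if inside(p, e)] for p in points}

# Two disjoint ellipses: each point is claimed by exactly one of them
mapping = assign_points([(0, 0), (3, 0)], [(0, 0, 2.0, 1.0), (3, 0, 1.0, 1.0)])
# mapping == {(0, 0): [0], (3, 0): [1]}
```

In the M-step, each ellipse's center and axes would then be re-estimated from the second-order moments of its assigned points.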
(d) Solving for the Optimal Number of Ellipses: Numerous models are assessed based on the AIC criterion explained above, which balances the trade-off between approximation error and model complexity. SBEFCM reduces the AIC criterion by minimizing the number of ellipses, starting from an over-complete automatic initialization. Because there is no lower bound on the AIC, the number of ellipses is reduced iteratively until a single ellipse remains. In each iteration, a pair of ellipses is chosen and merged. The multi-ellipse fitting method considers only parallel ellipses for merging; SBEFCM, in contrast, considers any pair of ellipses and merges the pair that leads to the lowest AIC. From all the examined models, SBEFCM reports as the final solution the one with the lowest AIC. (e) Forbidding Spurious Solutions: The overlap ratio O_r(E_n) of an ellipse E_n is defined as the percentage of its points shared with the remaining, overlapping ellipses, expressed in Eq. (9).
An ellipse E_n whose overlap O_r(E_n) exceeds a predefined threshold T_pre = 95% is forbidden from being part of the model. An illustration is given in Fig. 2: the ellipse E_2 displays a high O_r(E_2) value and contributes little additional coverage of the image foreground compared with the result including only E_1 and E_3, so it is omitted from the result. Fig. 3 shows how the value of T_pre manipulates the segmentation results: the application of SBEFCM with T_pre = 40% and with T_pre = 95% is displayed as the left and right segmentation, respectively. The computational complexity of SBEFCM is analogous to that of the multi-ellipse fitting method, O(r²f), where f is the number of foreground pixels and r is the number of circles generating the initial ellipse hypotheses that describe the 2D shape. The proposed approach first applies the Bradley segmentation technique and hole filling. Better results are achieved when an image smoothing technique, for example a Gaussian filter with σ_G = 2, is applied beforehand. The Bradley technique computes a locally adaptive image threshold using the first-order local statistics of the image surrounding every pixel; it is robust to illumination changes and thereby supersedes Otsu's technique, which is sensitive to variations in illumination. The limitation of the Bradley technique is that background regions with locally strong contrast are misinterpreted as cells. This false-positive limitation is reduced by introducing two shape-based and one appearance-based constraint.
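The spurious-solution check of Eq. (9) can be sketched by representing each ellipse as the set of shape points assigned to it; the set-based representation and function name are illustrative assumptions:

```python
def prune_redundant(ellipse_points, t_pre=0.95):
    """Drop any ellipse whose points are almost entirely (>= t_pre) shared
    with the union of the remaining ellipses, mirroring the O_r(E_n) check."""
    kept = []
    for i, pts in enumerate(ellipse_points):
        others = set()
        for j, other in enumerate(ellipse_points):
            if j != i:
                others |= other
        overlap = len(pts & others) / len(pts)  # overlap ratio O_r(E_i)
        if overlap < t_pre:
            kept.append(i)
    return kept

# E_2 (index 1) lies entirely inside E_1, so it is pruned; E_1 and E_3 survive,
# analogous to the E_1/E_2/E_3 example in Fig. 2.
kept = prune_redundant([{1, 2, 3, 4}, {3, 4}, {5, 6}])
# kept == [0, 2]
```

Lowering t_pre (e.g. to 40%) prunes more aggressively, which is exactly the sensitivity illustrated in Fig. 3.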
• (Shape) Area constraint: The anticipated area of each disc must be above a minimum threshold T_δ, eliminating small segments. To avoid rejecting partially visible discs, T_δ is imposed on the expected area estimate, computed as the area of the best-fitting circle, rather than on the measured object area. • (Shape) Roundness constraint: Only circular or elliptically disc-shaped objects are kept; complex-shaped objects that deviate substantially from roundness are eliminated. The roundness measure R quantifies the shape resemblance, and Eq. (10) expresses this.
where δ and ϑ denote the area and perimeter of an object. The roundness R takes its maximum value for a perfect circle. For the proposed SBEFCM method, a region is signified as a disc only if R > 0.2.
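A hedged sketch of the roundness constraint: the common isoperimetric form R = 4πδ/ϑ² is used here, which attains 1 for a perfect circle; this is an assumption standing in for the exact Eq. (10):

```python
import math

def roundness(area, perimeter):
    """Roundness R = 4*pi*area / perimeter**2 (common form, assumed to
    correspond to Eq. 10 with area delta and perimeter theta).
    R == 1 for a perfect circle, smaller for elongated/irregular shapes."""
    return 4 * math.pi * area / perimeter ** 2

# A unit circle: area pi, perimeter 2*pi -> R = 1 exactly
r_circle = roundness(math.pi, 2 * math.pi)

# A thin, elongated region fails the paper's R > 0.2 disc criterion
is_disc = roundness(10.0, 40.0) > 0.2  # False for this shape
```

The R > 0.2 cut is deliberately loose, so moderately elliptical discs still pass while ragged vessel fragments are rejected.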
(f) Intensity Constraint: The shape constraints mentioned above are adequate for prohibiting many false positives, but some false positives do not conform to shape-based restrictions. The intuition behind the intensity constraint is that the intensity distribution inside a true cell is similar to the intensity distribution inside the remaining cells. Under a normal distribution assumption, the distance between two intensity distributions is measured using the Bhattacharyya distance. Let d_1 and d_2 be two distributions with means μ_n and variances σ_n², where n ∈ {1, 2}. Eq. (11) expresses the Bhattacharyya distance B_D(d_1, d_2) as below.
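For two univariate normal distributions, the Bhattacharyya distance of Eq. (11) takes a closed form, sketched below; the function name is illustrative:

```python
import math

def bhattacharyya_gauss(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two 1-D normal distributions
    N(mu1, var1) and N(mu2, var2) (closed form of Eq. 11)."""
    term_mean = 0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
    term_var = 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2)))
    return term_mean + term_var

# Identical distributions -> distance 0; separated means -> positive distance
d_same = bhattacharyya_gauss(0.0, 1.0, 0.0, 1.0)   # 0.0
d_far = bhattacharyya_gauss(0.0, 1.0, 3.0, 1.0)    # > 0
```

A candidate region whose intensity distribution is far (large B_D) from that of confirmed cells would be rejected as a false positive.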
The proposed constraints prohibit the four false positives.

Classification

The Framework of the SBEFCM Ensemble
The proposed framework is an Ensemble of CNNs, referred to as pipelines, instantiated multiple times. Each pipeline is trained on a subset of class labels, with its own evaluation metrics tracked. A subset of the dataset corresponds to a subset of classes/labels, and these training subsets are mutually exclusive. The main benefit for CNN training on huge datasets is multifold: less time is taken for training on a subset of classes, and no high-end Graphics Processing Unit (GPU) is required. Initially, as shown in Fig. 4, the transfer learning process passes on all the learned properties: the new model receives its initial weights from the pre-trained Deep Convolutional Neural Network (AlexNet). Reference images are selected from all classes, and hierarchical agglomerative clustering groups identical images. Based on this grouping, the mutually exclusive subsets offered to the network in the post-training stage are produced.
(a) Transfer Learning: The AlexNet model, pre-trained on ImageNet, has effectively learned mid-level feature representations that are transferred to the new network training. The first 7 layers, from the primary convolutional layer to the following fully connected layer, capture these mid-level attributes. The learned weights of these layers are used in our model and kept constant (not updated) during training. The final fully connected layer FC8, the classifier of the source task, is specific to ImageNet and is therefore discarded; a new Fully-Connected 8 layer followed by a Softmax classifier is included and retrained.
The primary convolution operation is given in Eq. (13), where the input image is I(x,y) and the filter is w_f[p,q]. For the comparative study, the same number of pipelines structured in the ensemble is trained on the entire dataset with variations in parameters. The Top-1 and Top-5 errors are measured, and both metrics are reduced during the training phase. Given a test image, the pipelines that have learned the characteristics accurately track it, lowering the likelihood of routing it through inappropriate classification channels.
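The ensemble routing described above can be caricatured with stand-in pipelines. In the real system each pipeline is a subset-trained CNN; here they are simple callables returning a (label, confidence) pair, and all names are illustrative:

```python
def ensemble_predict(pipelines, x):
    """Each pipeline is trained on a mutually exclusive subset of class
    labels and reports (label, confidence) for its own subset; the ensemble
    keeps the most confident pipeline's answer."""
    candidates = [p(x) for p in pipelines]
    return max(candidates, key=lambda lc: lc[1])[0]

# Hypothetical pipelines scoring a 2-feature input x
p_glaucoma = lambda x: ("glaucoma", x[0])  # confidence from first feature
p_dr = lambda x: ("DR", x[1])              # confidence from second feature

label = ensemble_predict([p_glaucoma, p_dr], (0.9, 0.3))
# label == "glaucoma"
```

With mutually exclusive label subsets, this max-confidence rule is one simple way to combine the pipelines; the paper's Top-1/Top-5 tracking refines which pipeline's channel is trusted.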

Dataset
This proposed study used the Large-scale Attention-based Glaucoma (LAG) database and the Retinal IMage database for Optic Nerve Evaluation (RIM-ONE) to detect glaucoma. As shown in Tab. 1, the LAG database contains 11,760 fundus images: 4,878 positive and 6,882 negative glaucoma images, collected from 10,861 subjects. From 10,147 individuals, only one fundus image was collected (one image per eye and subject), and the remaining individuals contribute multiple images per subject. The fundus images come from Beijing Tongren Hospital and the Chinese Glaucoma Study Alliance (CGSA). Utilizing the proposed techniques enhances the capacity for finding and treating sight-threatening eye diseases such as glaucoma.
The analysis of the proposed method on another open database, RIM-ONE, focuses on the Optic Nerve Head (ONH) and provides 169 high-resolution images. The relevant abnormalities were detected with the help of highly accurate segmentation algorithms. Finally, several public and private sources across hospitals supplied the images of the local glaucoma dataset. A total of 1,338 retinal images constitute this dataset, each assigned to one of four classes: Normal, Early, Moderate, and Advanced Glaucoma. About 79% of the images belong to the Normal class and 21% to the remaining three classes: 79.08% are normal (no glaucoma) images, 8.86% early glaucoma, 5.98% moderate glaucoma, and 5.98% advanced glaucoma.

Performance Metrics
Several performance measures, such as Accuracy, Sensitivity, Specificity, and Area Under the Curve (AUC), are calculated in the performance analysis.
(a) Accuracy: Accuracy is the proportion of correct predictions among all predictions and can be viewed as a weighted arithmetic mean of sensitivity and specificity. The accuracy is calculated using the formula given in Eq. (14).
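The metrics can be computed from confusion-matrix counts as below. The counts in the example are illustrative only, chosen so that the ratios match the order of the sensitivity (94.44%) and specificity (91.30%) values reported later; the paper does not give the actual confusion counts:

```python
def metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), and specificity from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # Eq. (14)
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    return accuracy, sensitivity, specificity

# Illustrative counts: 85/90 sensitivity ~ 94.44%, 84/92 specificity ~ 91.30%
acc, sens, spec = metrics(tp=85, fp=8, tn=84, fn=5)
```

The AUC, in contrast, is computed from the full receiver operating characteristic curve rather than a single confusion matrix.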

Results of Glaucoma Detection Methods
Innumerable methods are used to evaluate glaucoma detection comparatively. The comparison was carried out using the LAG and RIM-ONE databases. Tab. 2 lists the Sensitivity, Specificity, Accuracy, and AUC of the proposed glaucoma detection against the existing methods. On the LAG dataset, the proposed SBEFCM achieved an accuracy of 97%, a sensitivity of 94.44%, a specificity of 91.30%, and an AUC of 0.941 on glaucoma-detection retinal images. On RIM-ONE, in contrast, it shows an accuracy of 93%, a sensitivity of 94.44%, a specificity of 91.30%, and an AUC of 0.941. As the table shows, the proposed Ensemble classification with SBEFCM performed better than deep CNN and the other prevailing methods. The sensitivity ratio is greater than the specificity ratio, and the AUC is larger than that of the existing methods. In all metrics, the LAG database displays greater values than the RIM-ONE dataset. Since other indicators such as the field of vision and intraocular pressure corroborate the glaucoma diagnosis, the sensitivity metric is crucial; the results demonstrate high generalization ability, whereas the performance of the existing method degrades strongly because of overfitting [26]. Tab. 3 shows the classification accuracy of the proposed method on the local retinal glaucoma dataset compared with other prevailing methods. The proposed Ensemble-based classification with the SBEFCM algorithm shows better statistical values than the existing methodologies for GD; it produced an accuracy of 99.57% in the detection of glaucoma. Tab. 4 shows the precise detection of the OC and OD boundaries on the input sample images. The proposed SBEFCM accurately predicts the OC and OD boundary prediction ratios.

Conclusions
An innovative segmentation method, the Spatially Based Ellipse Fitting Curve Model, is used in this study to detect the OC and OD boundaries more precisely. Moreover, the Ensemble-based CNN classification detects the intensity of glaucoma for apt prediction. As in the multi-ellipse fitting method, which SBEFCM closely follows, spurious solutions are forbidden. For the comparative study, the ensemble framework with the same number of pipelines is trained on the complete dataset with parametric variations. Better statistical values of Accuracy, Specificity, Sensitivity, and AUC are achieved by the proposed method. The Large-scale Attention-based Glaucoma dataset exhibits greater values than the Retinal IMage database for Optic Nerve Evaluation dataset. Based on this algorithm, future development will classify DR into its mild, moderate, and severe NPDR stages and PDR.
Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest:
The authors declare that they have no conflicts of interest to report regarding the present study.