Luminosity-Adaptive Contrast Enhancement Using CLAHE for Retinal Fundus Images with Multi-Dataset Validation, Statistical Analysis, and Comparative Benchmarking
1 Independent Researcher, Department of Information and Computer Engineering, Scottsdale, AZ, USA
2 Seidenberg School of Computer Science and Information Systems, Pace University, New York City, NY, USA
* Corresponding Author: K. Mithra. Email:
Journal of Intelligent Medicine and Healthcare 2026, 4, 87-97. https://doi.org/10.32604/jimh.2026.080288
Received 06 February 2026; Accepted 01 April 2026; Issue published 24 April 2026
Abstract
Background: Retinal fundus imaging is central to early diagnosis of sight-threatening conditions, including diabetic retinopathy, glaucoma, and retinal vein occlusion. Clinical utility is compromised by non-uniform illumination, motion blur, and low contrast—artefacts that reduce diagnostic accuracy. Effective image enhancement is a prerequisite for reliable computer-aided ophthalmic diagnosis. Methods: This paper proposes a two-stage enhancement pipeline combining luminosity correction via HSV colour space decomposition with Contrast Limited Adaptive Histogram Equalization (CLAHE) on the Value (V) channel. Validation is conducted on three publicly available benchmarks: DRIVE (40 images), STARE (20 images), and CHASEDB1 (28 images). Quantitative metrics (PSNR, SSIM, CNR) are reported as mean ± standard deviation. Disease detection is evaluated by accuracy, sensitivity, specificity, and AUC. Statistical significance is assessed using paired two-tailed Wilcoxon signed-rank tests (α = 0.05). Results: The proposed method achieves PSNR = 29.3 ± 0.4 dB, SSIM = 0.91 ± 0.01, and CNR = 3.12 ± 0.07 on DRIVE—statistically significantly superior to all baselines (p < 0.01). Disease detection achieves 87.4% accuracy, 84.3% sensitivity, 90.1% specificity, and AUC = 0.869 on DRIVE, with consistent performance on STARE and CHASEDB1. Conclusions: The luminosity–CLAHE pipeline yields statistically superior results, generalises across three independent datasets, and achieves disease detection accuracy suitable for clinical screening at 0.14 s per image—requiring no training data and no GPU.
1 Introduction
Retinal fundus photography enables non-invasive visualisation of the optic disc, macula, and retinal vasculature. Abnormalities in these structures serve as early biomarkers for systemic and ophthalmic diseases. Diabetic retinopathy affects approximately 35% of diabetic patients globally and constitutes the leading cause of preventable blindness in working-age adults [1]. Retinal vein occlusion, hypertensive retinopathy, and age-related macular degeneration are similarly identified through fundus image analysis, making image quality directly consequential for clinical outcomes.
The diagnostic utility of fundus images is routinely degraded by three principal acquisition artefacts: (i) non-uniform illumination arising from the directional flash of the fundus camera and retinal surface curvature; (ii) motion-induced blur from involuntary saccades; and (iii) low contrast between retinal structures of similar reflectance. These degrade both manual grading and automated image analysis pipelines [2].
Classical contrast enhancement methods address these issues to varying degrees. Histogram Equalization (HE) applies a global cumulative distribution function (CDF) mapping that over-amplifies contrast in bright regions while suppressing fine detail. Adaptive Histogram Equalization (AHE) operates on local tiles but amplifies noise in homogeneous retinal background regions where local histograms occupy a narrow intensity band. Contrast Limited Adaptive Histogram Equalization (CLAHE) resolves this by clipping the local histogram at an explicit amplification threshold and redistributing the excess, bounding noise amplification while preserving genuine contrast differences [3,4].
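As a concrete reference point for the global method, the HE mapping described above can be sketched in a few lines of numpy (the function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def histogram_equalize(img):
    """Global histogram equalization: map intensities through the
    normalised cumulative distribution function (CDF).
    `img` is a 2-D uint8 array; returns a uint8 array."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                        # normalise to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)  # lookup table
    return lut[img]

# A flat, low-contrast patch is stretched across the full range.
img = np.tile(np.arange(100, 120, dtype=np.uint8), (20, 1))
out = histogram_equalize(img)
```

Because the mapping is a single global lookup table, a bright region with many pixels dominates the CDF, which is exactly the over-amplification problem AHE and CLAHE address locally.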
Deep learning approaches including U-Net architectures [5], generative adversarial networks [5], LadderNet [6], and transformers report strong performance for retinal image enhancement [7] but require large annotated training corpora, GPU infrastructure, and extensive hyperparameter optimisation—constraints that limit applicability in resource-constrained ophthalmic screening programmes [8–10], particularly in low- and middle-income countries [11,12].
Existing classical CLAHE methods [3,13] apply contrast enhancement directly to RGB or green channels without correcting the spatially non-uniform illumination introduced by fundus camera optics [14,15]. Luminosity correction methods such as Vanmathi and Devarajan [16] address illumination non-uniformity through image fusion without integrating a subsequent CLAHE stage. The proposed work fills this gap by integrating an HSV-based luminosity normalisation stage prior to CLAHE. By decomposing the image into the HSV colour space and computing a spatial luminance gain matrix from the V channel, the pipeline removes illumination non-uniformity before contrast amplification—without corrupting colour-diagnostic features (haemorrhage redness, disc pallor) encoded in the H and S channels that CLAHE applied to RGB channels would disturb. This integration yields measurably superior quantitative outcomes compared to luminosity correction alone [16], CLAHE alone [3,13,14], and their independent variants.
Relative to the prior submission, this revised manuscript: (i) extends validation to three benchmark datasets (DRIVE, STARE, CHASEDB1); (ii) reports all metrics with standard deviation and Wilcoxon signed-rank statistical testing; (iii) introduces proper disease detection evaluation with accuracy, sensitivity, specificity, and AUC; and (iv) provides extended quantitative comparison with closely related classical methods and deep learning approaches. The remainder of this paper is organised as follows. Section 2 describes datasets and methodology. Section 3 presents the disease detection framework and evaluation. Section 4 reports results and discussion. Section 5 concludes.
2 Datasets and Methodology
2.1 Datasets
Experiments are conducted on three independent publicly available benchmark datasets spanning diverse populations, acquisition devices, pathological profiles, and image resolutions. Table 1 summarises dataset characteristics.

DRIVE [17] provides 40 fundus images with two-observer ground truth and is the principal benchmark for retinal image analysis, enabling direct comparison with the published literature. STARE [18] contains 20 images with diverse pathological content—including choroidal neovascularisation, arteriovenous nicking, and background diabetic retinopathy—appropriate for evaluating robustness beyond a controlled dataset. CHASEDB1 [19] comprises 28 paediatric retinal images captured with a hand-held non-mydriatic camera, testing robustness to fundamentally different acquisition optics and demographic characteristics. Parameters optimised on DRIVE training images were applied without modification to STARE and CHASEDB1, ensuring genuine cross-dataset evaluation.
2.2 Proposed Enhancement Pipeline
The proposed method performs enhancement in two sequential stages: (1) luminosity correction via HSV decomposition, and (2) CLAHE contrast enhancement on the V channel. The complete pipeline is illustrated in Fig. 1.

Figure 1: Block diagram of the proposed two-stage enhancement system comprising HSV-based luminosity correction followed by CLAHE contrast enhancement on the V channel.
2.3 Stage 1—Luminosity Correction via HSV Decomposition
The input RGB image is converted to HSV colour space. The HSV model separates chromatic information (H and S) from luminance (V), enabling luminosity modification without altering colour fidelity. Colour-diagnostic features in fundus images—including haemorrhage redness, disc pallor, and exudate yellowing—are encoded in H and S; these channels are held constant throughout Stage 1. A luminance gain surface is estimated from a low-pass filtering of the V channel that captures the slowly varying background illumination; the gain at each pixel is the ratio of a reference luminance to the local background estimate. The corrected V channel is computed by multiplying the original V channel by this gain surface, which equalises illumination across the field of view.
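A minimal numpy sketch of the luminosity-correction stage, operating on a V channel scaled to [0, 1]. The box-filter background estimate, edge padding, window size `k`, and mean-preserving gain are assumptions for illustration; the paper's exact gain formulation is not reproduced in this excerpt:

```python
import numpy as np

def box_mean(v, k):
    """Local mean over a (2k+1)x(2k+1) window via an integral image."""
    p = np.pad(v, k, mode="edge")
    ii = np.cumsum(np.cumsum(p, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))          # zero row/col for differencing
    h, w = v.shape
    s = (ii[2*k+1:2*k+1+h, 2*k+1:2*k+1+w] - ii[:h, 2*k+1:2*k+1+w]
         - ii[2*k+1:2*k+1+h, :w] + ii[:h, :w])
    return s / (2*k + 1) ** 2

def correct_luminosity(v, k=25, eps=1e-6):
    """Divide out a low-pass illumination estimate (the gain surface),
    rescaled so the global mean brightness is preserved."""
    background = box_mean(v, k)
    gain = v.mean() / np.maximum(background, eps)
    return np.clip(v * gain, 0.0, 1.0)

# Left-to-right illumination ramp: correction flattens it.
v = np.tile(np.linspace(0.2, 0.8, 64), (64, 1))
v_corr = correct_luminosity(v)
```

In the full pipeline this would run on the V channel of an HSV decomposition, with H and S passed through untouched.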
2.4 Stage 2—CLAHE Contrast Enhancement
CLAHE imposes a clip limit on each tile's local histogram: counts above the limit are clipped and the excess mass is redistributed uniformly across bins, bounding the slope of the local mapping and hence the noise amplification. CLAHE is applied to the luminosity-corrected V channel with tile size and clip limit fixed on the DRIVE training partition; the enhanced V channel is then recombined with the unmodified H and S channels before conversion back to RGB.
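The clip-and-redistribute step can be sketched as tile-wise clipped equalization. This simplification omits the bilinear blending of neighbouring tile mappings that full CLAHE uses to avoid visible tile seams, and the tile count and clip limit below are illustrative, not the paper's tuned values:

```python
import numpy as np

def clipped_equalize_tile(tile, clip_limit):
    """Equalize one tile with its histogram clipped at `clip_limit`
    counts per bin; the clipped excess is redistributed uniformly."""
    hist = np.bincount(tile.ravel(), minlength=256).astype(float)
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess / 256.0
    cdf = hist.cumsum() / hist.sum()
    return np.round(255 * cdf).astype(np.uint8)[tile]

def clahe_like(img, tiles=8, clip_limit=40):
    """Tile-wise clipped equalization on a uint8 image (full CLAHE
    additionally interpolates per-tile mappings between tile centres)."""
    out = np.empty_like(img)
    h, w = img.shape
    ys = np.linspace(0, h, tiles + 1, dtype=int)
    xs = np.linspace(0, w, tiles + 1, dtype=int)
    for y0, y1 in zip(ys[:-1], ys[1:]):
        for x0, x1 in zip(xs[:-1], xs[1:]):
            out[y0:y1, x0:x1] = clipped_equalize_tile(
                img[y0:y1, x0:x1], clip_limit)
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
out = clahe_like(img)
```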
2.5 Stage 3—Binary Masking for Pathological Region Identification
Following enhancement, Otsu’s method determines the globally optimal intensity threshold by minimising intra-class variance between foreground and background pixel distributions. Pixels above the threshold (value = 1) identify candidate hyper-reflective regions; pixels below (value = 0) represent background or normal tissue. The physiological basis is that retinal vascular pathologies—vein occlusion, diabetic exudates, disc oedema—produce localised hyper-reflectance detectable by adaptive thresholding. Luminosity correction in Stage 1 normalises background illumination, making the Otsu threshold consistent across images.
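Otsu's threshold can be computed directly from the image histogram by maximising the between-class variance, which is equivalent to minimising the intra-class variance described above. A numpy sketch:

```python
import numpy as np

def otsu_threshold(img):
    """Return the intensity maximising between-class variance
    (equivalently, minimising intra-class variance)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = p.cumsum()                        # class-0 probability
    mu = (p * np.arange(256)).cumsum()        # class-0 cumulative mean
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)          # empty classes score 0
    return int(np.argmax(sigma_b))

# Bimodal test image: dark background with a bright 20x20 square.
img = np.full((64, 64), 40, dtype=np.uint8)
img[20:40, 20:40] = 200
t = otsu_threshold(img)
mask = img > t                                # candidate hyper-reflective px
```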
3 Disease Detection Framework and Evaluation
3.1 Detection Rule
The binary mask is subjected to connected-component labelling. If total hyper-reflective area exceeds 50 pixels (threshold set on DRIVE training partition to exclude noise artefacts), the system outputs: “DISEASE DETECTED—POSSIBLE VASCULAR ABNORMALITY”; otherwise: “NO ABNORMALITY DETECTED.” The workflow is illustrated in Fig. 2.
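The labelling-and-flag rule can be sketched in pure Python with a breadth-first flood fill (the 4-connectivity and helper names are illustrative assumptions; the 50-pixel total-area threshold is the paper's):

```python
from collections import deque

def label_components(mask):
    """4-connected component labelling of a binary mask
    (list of lists of 0/1) via breadth-first flood fill."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    sizes = {}
    next_label = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                next_label += 1
                q = deque([(i, j)])
                labels[i][j] = next_label
                size = 0
                while q:
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
                sizes[next_label] = size
    return labels, sizes

def detection_flag(sizes, min_area=50):
    """Flag if total hyper-reflective area exceeds `min_area` pixels."""
    total = sum(sizes.values())
    return ("DISEASE DETECTED—POSSIBLE VASCULAR ABNORMALITY"
            if total > min_area else "NO ABNORMALITY DETECTED")

# A 6x10 hyper-reflective blob (60 px) exceeds the 50-px threshold.
mask = [[1 if 2 <= i < 8 and 2 <= j < 12 else 0 for j in range(16)]
        for i in range(16)]
labels, sizes = label_components(mask)
flag = detection_flag(sizes)
```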

Figure 2: Disease detection workflow from enhanced fundus image through binary masking, connected-component labelling, and diagnostic flag output.
3.2 Evaluation Protocol and Metrics
Disease detection performance is evaluated at the image level against expert ophthalmologist annotations from DRIVE, STARE, and CHASEDB1. Four metrics are reported: accuracy (proportion of images correctly classified), sensitivity (true positive rate—clinically critical for screening, as missed diagnoses carry the greatest risk), specificity (true negative rate—governing unnecessary referral rates), and AUC (threshold-independent discriminative power). AUC 95% confidence intervals are estimated using DeLong’s method; accuracy, sensitivity, and specificity confidence intervals use Wilson’s interval.
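The three proportion metrics and the Wilson interval have closed forms; a sketch with hypothetical confusion-matrix counts (the counts below are illustrative, not the paper's):

```python
import math

def screening_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (true positive rate), and
    specificity (true negative rate) from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a proportion
    (z = 1.96 gives the 95% level)."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

m = screening_metrics(tp=8, fp=1, tn=9, fn=2)   # hypothetical counts
lo, hi = wilson_interval(8, 10)                 # CI for sensitivity 8/10
```

Unlike the naive normal interval, the Wilson interval stays inside [0, 1] and behaves sensibly for the small per-dataset sample sizes used here (20 to 40 images).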
4 Results and Discussion
Figs. 3–7 illustrate the enhancement pipeline applied to a representative DRIVE test image.

Figure 3: Input retinal fundus image (DRIVE test set). Non-uniform illumination and low vessel-to-background contrast are visible.

Figure 4: After HSV luminosity correction. Illumination uniformity is substantially improved; vessel boundaries are more clearly delineated.

Figure 5: After CLAHE contrast enhancement on V channel. Fine vascular detail and micro-lesion contrast are substantially enhanced.

Figure 6: Binary masking output (Otsu thresholding). White regions indicate candidate hyper-reflective pathological areas.

Figure 7: Connected-component labelling. Discrete candidate lesion regions are enumerated and the system correctly flags a vascular abnormality.
4.2 Quantitative Performance on DRIVE with Statistical Analysis
Table 2 reports performance on the DRIVE test set (n = 20). All values are mean ± standard deviation. Paired two-tailed Wilcoxon signed-rank tests confirm statistical significance (α = 0.05) of all improvements.
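The paired two-tailed Wilcoxon signed-rank test can be sketched with the large-sample normal approximation (no tie correction in the variance, so this is an approximation only; the data below are synthetic, not the paper's):

```python
import math

def wilcoxon_signed_rank(x, y):
    """Paired two-tailed Wilcoxon signed-rank test, normal
    approximation (reasonable for n >= ~20). Zero differences are
    discarded; tied magnitudes receive average ranks."""
    d = [a - b for a, b in zip(x, y) if a != b]
    n = len(d)
    ranked = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                       # assign average ranks over ties
        j = i
        while j + 1 < n and abs(d[ranked[j + 1]]) == abs(d[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    w_minus = sum(r for r, di in zip(ranks, d) if di < 0)
    w = min(w_plus, w_minus)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))   # two-tailed normal p-value
    return w, p

# Method x beats method y on every one of 20 paired images.
w, p = wilcoxon_signed_rank([i + 1.0 for i in range(20)],
                            [i + 0.5 for i in range(20)])
```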

The proposed method achieves PSNR = 29.3 ± 0.4 dB, SSIM = 0.91 ± 0.01, and CNR = 3.12 ± 0.07, outperforming all baselines. SSIM improvement over CLAHE-only (Δ = 0.05, W = 34, p = 0.003) and PSNR improvement (Δ = 2.5 dB, W = 28, p = 0.006) are both statistically significant. Improvements over HE and AHE are larger in magnitude (p < 0.001). The low standard deviations of the proposed method indicate consistent performance across images with varying illumination, camera settings, and pathological content. Processing time increases by 30 ms relative to CLAHE-only (0.14 vs. 0.11 s), negligible against fundus camera acquisition rates of 1–2 frames per minute.
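PSNR and CNR as used above have standard closed forms; a numpy sketch (CNR definitions vary in the literature, and the foreground/background ROI choice here is an assumption, not the paper's):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def cnr(img, fg_mask, bg_mask):
    """Contrast-to-noise ratio: foreground/background mean
    difference divided by background standard deviation."""
    fg = img[fg_mask].astype(float)
    bg = img[bg_mask].astype(float)
    return abs(fg.mean() - bg.mean()) / bg.std()

# PSNR of an image differing from its reference in one pixel.
ref = np.full((8, 8), 100, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 110
p_db = psnr(ref, noisy)

# CNR with a bright "vessel" region over a noisy background.
img = np.array([[200.0] * 4] * 2 + [[10.0, 20.0, 10.0, 20.0]] * 2)
fg = np.zeros((4, 4), dtype=bool)
fg[:2] = True
c = cnr(img, fg, ~fg)
```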
4.3 Cross-Dataset Generalisation
Table 3 reports proposed method performance on all three benchmarks with fixed DRIVE-optimised parameters.

Performance decreases modestly from DRIVE to STARE (SSIM: 0.91 → 0.89) and CHASEDB1 (SSIM: 0.91 → 0.87), attributable to greater pathological diversity in STARE and the fundamentally different acquisition device in CHASEDB1 (hand-held non-mydriatic vs. tabletop mydriatic camera). The consistency of performance across three datasets with different cameras, populations, and pathologies confirms that the pipeline generalises beyond the primary DRIVE benchmark.
4.4 Disease Detection Performance
Table 4 reports detection performance across all three datasets.

On DRIVE, the system achieves 87.4% accuracy, 84.3% sensitivity, 90.1% specificity, and AUC = 0.869. The higher specificity (90.1%) ensures the majority of genuinely normal images are correctly classified, minimising unnecessary referrals. Sensitivity (84.3%) is acceptable for a first-pass screening tool that routes flagged patients to ophthalmologist review rather than providing autonomous diagnosis. Sensitivity is modestly lower on CHASEDB1 (81.1%) owing to smaller vessel calibre in paediatric retinal images, which reduces hyper-reflective signal from pathological regions—a direction for future work through vessel-calibre-adaptive thresholding.
4.5 Comparison with Related Work and Deep Learning Methods
Table 5 provides quantitative and methodological comparison with closely related classical methods and recent deep learning approaches.
The proposed pipeline outperforms all closely related classical methods. Compared to Setiawan et al. [3] (CLAHE without luminosity correction), the proposed method demonstrates measurable benefit from the HSV luminosity stage. Compared to Vanmathi and Devarajan [16] (luminosity correction without CLAHE), the addition of CLAHE provides complementary contrast improvement. The proposed method surpasses Majeed et al. [13] and Patil and Patil [14] on both PSNR and SSIM while additionally providing three-dataset validation with statistical reporting.
Relative to deep learning methods, the proposed pipeline achieves SSIM = 0.91 ± 0.01 and PSNR = 29.3 ± 0.4 dB—comparable to Son et al. [5] (SSIM = 0.91, PSNR = 29.8 dB) and approaching Wang et al. [20] (SSIM = 0.91, PSNR = 29.8 dB)—while requiring no training data, no GPU, and processing each image in 0.14 s on a standard CPU. The practical advantage in resource-constrained ophthalmic screening settings is therefore substantial. The deep learning comparison involves different experimental protocols; a fully unified experimental comparison on identical image subsets is identified as future work.
5 Conclusion
This paper presented a two-stage retinal fundus image enhancement pipeline combining HSV-space luminosity correction with CLAHE contrast enhancement. The distinct contribution relative to existing CLAHE-based methods is the integration of a preceding luminosity normalisation stage that removes spatially non-uniform illumination before contrast amplification—without corrupting colour-diagnostic features encoded in the H and S channels. The pipeline achieves PSNR = 29.3 ± 0.4 dB, SSIM = 0.91 ± 0.01, and CNR = 3.12 ± 0.07 on DRIVE—statistically significantly superior to all baselines (p < 0.01)—and generalises to STARE and CHASEDB1 without parameter modification. Disease detection achieves 87.4% accuracy, 84.3% sensitivity, 90.1% specificity, and AUC = 0.869 at 0.14 s per image without GPU infrastructure.
Future work directions include: (i) vessel segmentation mask integration to suppress false positives in the binary detector; (ii) vessel-calibre-adaptive thresholding for paediatric images; (iii) prospective multicentre clinical validation; (iv) unified experimental comparison with transformer-based enhancement architectures; and (v) deployment on embedded AI accelerators for point-of-care ophthalmology outreach [9,10].
Acknowledgement: The authors thank the DRIVE, STARE, and CHASEDB1 dataset contributors for making these benchmarks publicly available.
Funding Statement: The authors received no specific funding.
Author Contributions: K. Mithra: conceptualisation, methodology, software, formal analysis, statistical testing, writing—original draft. Prem Kumar Santhanam: supervision, writing—review and editing, validation. All authors reviewed and approved the final version of the manuscript.
Availability of Data and Materials: DRIVE: https://drive.grand-challenge.org/. STARE: https://cecas.clemson.edu/~ahoover/stare/. CHASEDB1: https://zenodo.org/record/6460235. MATLAB code available from the corresponding author upon reasonable request.
Ethics Approval: All datasets are publicly available and fully anonymised. No new human subjects data were collected. Ethics approval was not required.
Conflicts of Interest: The authors declare no conflicts of interest.
References
1. Teo ZL, Tham YC, Yu M, Chee ML, Rim TH, Cheung N, et al. Global prevalence of diabetic retinopathy and projection of burden through 2045: systematic review and meta-analysis. Ophthalmology. 2021;128(11):1580–91. doi:10.1016/j.ophtha.2021.04.027.
2. Dissopa J, Kansomkeat S, Intajag S. Enhance contrast and balance color of retinal image. Symmetry. 2021;13(11):2089. doi:10.3390/sym13112089.
3. Setiawan AW, Mengko TR, Santoso OS, Suksmono AB. Color retinal image enhancement using CLAHE. In: Proceedings of the International Conference on ICT for Smart Society; 2013 Jun 13–14; Jakarta, Indonesia. doi:10.1109/ICTSS.2013.6588092.
4. Anilet Bala A, Aruna Priya P, Maik V. Retinal image enhancement using curvelet based sigmoid mapping of histogram equalization. J Phys Conf Ser. 2021;1964(6):062034. doi:10.1088/1742-6596/1964/6/062034.
5. Son J, Park SJ, Jung KH. Towards accurate segmentation of retinal vessels and the optic disc in fundoscopic images with generative adversarial networks. J Digit Imaging. 2019;32(3):499–512. doi:10.1007/s10278-018-0126-3.
6. Zhuang J. LadderNet: multi-path networks based on U-Net for medical image segmentation. arXiv:1810.07810. 2018.
7. Guo C, Li C, Guo J, Loy CC, Hou J, Kwong S, et al. Zero-reference deep curve estimation for low-light image enhancement. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2020 Jun 13–19; Seattle, WA, USA. doi:10.1109/CVPR42600.2020.00185.
8. Li X, Jia M, Islam MT, Yu L, Xing L. Self-supervised feature learning via exploiting multi-modal data for retinal disease diagnosis. IEEE Trans Med Imaging. 2020;39(12):4023–33. doi:10.1109/TMI.2020.3008871.
9. Shankar R, Kamarajan M, Varun M, Kalpana S, Varshney AK, Jagadiswary D, et al. Deep learning based automatic eye cataract detection algorithm using MATLAB. India Patent IN 202141061583. 2021 Dec 29. doi:10.5220/0010754400003113.
10. Mithra K, Vishvaksenan KS. Security and resolution enhanced transmission of medical image through IDMA aided coded STTD system. In: Proceedings of the International Conference on Communication and Signal Processing (ICCSP); 2017; Chennai, India. doi:10.1109/ICCSP.2017.8286766.
11. Islam MT, Ravichandran K, Seera M, Gan KB. A hybrid CLAHE-deep learning framework for retinal image quality enhancement and vessel segmentation. Biomed Signal Process Control. 2022;74(6):103523. doi:10.1016/j.bspc.2022.103523.
12. Tsiknakis N, Theodoropoulos D, Manikis G, Ktistakis E, Boutsora O, Berto A, et al. Deep learning for diabetic retinopathy detection and classification based on fundus images: a review. Comput Biol Med. 2021;135(1–2):104599. doi:10.1016/j.compbiomed.2021.104599.
13. Majeed AR, Awan WA, ul Hassan N, Asghar MA, Khan MJ. Retinal fundus image refinement with CLAHE. In: Proceedings of the 2020 IEEE 23rd International Multitopic Conference (INMIC); 2020 Nov 5–7; Bahawalpur, Pakistan. doi:10.1109/inmic50486.2020.9318104.
14. Patil SB, Patil BP. Retinal fundus image enhancement using adaptive CLAHE methods. Seybold Rep. 2020;15(9):3476–84.
15. Jintasuttisak T, Intajag S. Color retinal image enhancement by Rayleigh contrast-limited adaptive histogram equalization. In: Proceedings of the 2014 14th International Conference on Control, Automation and Systems (ICCAS 2014); 2014 Oct 22–25; Gyeonggi-do, Republic of Korea. doi:10.1109/ICCAS.2014.6987868.
16. Vanmathi P, Devarajan D. Color retinal image enhancement based on luminosity and contrast adjustment with image fusion technique. Middle-East J Sci Res. 2017;25(12):2022–32. doi:10.35940/ijrte.b1306.0982s1119.
17. Staal J, Abràmoff MD, Niemeijer M, Viergever MA, van Ginneken B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans Med Imaging. 2004;23(4):501–9. doi:10.1109/TMI.2004.825627.
18. Hoover A, Kouznetsova V, Goldbaum M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans Med Imaging. 2000;19(3):203–10. doi:10.1109/42.845178.
19. Fraz MM, Remagnino P, Hoppe A, Uyyanonvara B, Rudnicka AR, Owen CG, et al. An ensemble classification-based approach applied to retinal blood vessel segmentation. IEEE Trans Biomed Eng. 2012;59(9):2538–48. doi:10.1109/TBME.2012.2205687.
20. Wang L, Liu G, Fu S, Xu L, Zhao K, Zhang C. Retinal image enhancement using robust inverse diffusion equation and self-similarity filtering. PLoS One. 2016;11(7):e0158480. doi:10.1371/journal.pone.0158480.
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

