Open Access
ARTICLE
Luminosity-Adaptive Contrast Enhancement Using CLAHE for Retinal Fundus Images with Multi-Dataset Validation, Statistical Analysis, and Comparative Benchmarking
1 Independent Researcher, Department of Information and Computer Engineering, Scottsdale, AZ, USA
2 Seidenberg School of Computer Science and Information Systems, Pace University, New York City, NY, USA
* Corresponding Author: K. Mithra. Email:
Journal of Intelligent Medicine and Healthcare 2026, 4, 87-97. https://doi.org/10.32604/jimh.2026.080288
Received 06 February 2026; Accepted 01 April 2026; Issue published 24 April 2026
Abstract
Background: Retinal fundus imaging is central to early diagnosis of sight-threatening conditions, including diabetic retinopathy, glaucoma, and retinal vein occlusion. Clinical utility is compromised by non-uniform illumination, motion blur, and low contrast—artefacts that reduce diagnostic accuracy. Effective image enhancement is a prerequisite for reliable computer-aided ophthalmic diagnosis.

Methods: This paper proposes a two-stage enhancement pipeline combining luminosity correction via HSV colour space decomposition with Contrast Limited Adaptive Histogram Equalization (CLAHE) on the Value (V) channel. Validation is conducted on three publicly available benchmarks: DRIVE (40 images), STARE (20 images), and CHASEDB1 (28 images). Quantitative metrics (PSNR, SSIM, CNR) are reported as mean ± standard deviation. Disease detection is evaluated by accuracy, sensitivity, specificity, and AUC. Statistical significance is assessed using paired two-tailed Wilcoxon signed-rank tests (α = 0.05).

Results: The proposed method achieves PSNR = 29.3 ± 0.4 dB, SSIM = 0.91 ± 0.01, and CNR = 3.12 ± 0.07 on DRIVE—statistically significantly superior to all baselines (p < 0.01). Disease detection achieves 87.4% accuracy, 84.3% sensitivity, 90.1% specificity, and AUC = 0.869 on DRIVE, with consistent performance on STARE and CHASEDB1.

Conclusions: The luminosity–CLAHE pipeline yields statistically superior results, generalises across three independent datasets, and achieves disease detection accuracy suitable for clinical screening at 0.14 s per image—requiring no training data and no GPU.
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

