Open Access
ARTICLE
RetinexWT: Retinex-Based Low-Light Enhancement Method Combining Wavelet Transform
College of Computer Science and Engineering, Chongqing University of Technology, Chongqing, 400054, China
* Corresponding Author: Jianxun Zhang. Email:
(This article belongs to the Special Issue: Computer Vision and Image Processing: Feature Selection, Image Enhancement and Recognition)
Computers, Materials & Continua 2026, 86(2), 1-20. https://doi.org/10.32604/cmc.2025.067041
Received 23 April 2025; Accepted 15 October 2025; Issue published 09 December 2025
Abstract
Low-light image enhancement aims to improve the visibility of severely degraded images captured under insufficient illumination, alleviating the adverse effects of illumination degradation on image quality. Traditional Retinex-based approaches, inspired by human visual perception of brightness and color, decompose an image into illumination and reflectance components to restore fine details. However, their limited capacity for handling noise and complex lighting conditions often leads to distortions and artifacts in the enhanced results, particularly under extreme low-light scenarios. Although deep learning methods built upon Retinex theory have recently advanced the field, most still suffer from limited interpretability and sub-optimal enhancement performance. This paper presents RetinexWT, a novel framework that tightly integrates classical Retinex theory with modern deep learning. Following Retinex principles, RetinexWT employs wavelet transforms to estimate illumination maps for brightness adjustment. A detail-recovery module that synergistically combines Vision Transformer (ViT) and wavelet transforms is then introduced to guide the restoration of lost details, thereby improving overall image quality. Within the framework, wavelet decomposition splits input features into high-frequency and low-frequency components, enabling scale-specific processing of global illumination/color cues and fine textures. Furthermore, a gating mechanism selectively fuses down-sampled and up-sampled features, while an attention-based fusion strategy enhances model interpretability. Extensive experiments on the LOL dataset demonstrate that RetinexWT surpasses existing Retinex-oriented deep-learning methods, achieving an average Peak Signal-to-Noise Ratio (PSNR) improvement of 0.22 dB over the current state of the art (SOTA), thereby confirming its superiority in low-light image enhancement. Code is available at https://github.com/CHEN-hJ516/RetinexWT (accessed on 14 October 2025).
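To make the frequency split and gated fusion described above concrete, the following is a minimal, hypothetical sketch (not the authors' released code; the names haar_dwt and GatedFusion are invented for illustration). It shows a single-level 2D Haar decomposition of a feature map into one low-frequency band (global illumination/color cues) and three high-frequency bands (fine textures), plus a simple sigmoid gate that blends two feature streams of equal shape:

```python
import torch
import torch.nn as nn

def haar_dwt(x: torch.Tensor):
    """Single-level 2D Haar transform on a (B, C, H, W) feature map.

    Returns the low-frequency band and the three high-frequency bands,
    each at half the spatial resolution of the input.
    """
    a = x[..., 0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[..., 0::2, 1::2]  # top-right
    c = x[..., 1::2, 0::2]  # bottom-left
    d = x[..., 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2  # low-low: smooth illumination component
    lh = (a + b - c - d) / 2  # horizontal-edge detail
    hl = (a - b + c - d) / 2  # vertical-edge detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, (lh, hl, hh)

class GatedFusion(nn.Module):
    """Hypothetical gate that adaptively blends two feature maps of equal
    shape, in the spirit of the gating mechanism described in the abstract."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, down_feat: torch.Tensor, up_feat: torch.Tensor):
        # Per-pixel, per-channel mixing weight in [0, 1].
        g = self.gate(torch.cat([down_feat, up_feat], dim=1))
        return g * down_feat + (1.0 - g) * up_feat

if __name__ == "__main__":
    feats = torch.randn(1, 32, 64, 64)
    ll, highs = haar_dwt(feats)                  # each band: (1, 32, 32, 32)
    fused = GatedFusion(32)(ll, torch.randn_like(ll))
    print(ll.shape, fused.shape)
```

The Haar transform is invertible, so any enhancement applied per band can be mapped back to the full-resolution feature map without information loss; how RetinexWT processes each band is detailed in the full text.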
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.