Search Results (9)
  • Open Access

    ARTICLE

    RetinexWT: Retinex-Based Low-Light Enhancement Method Combining Wavelet Transform

    Hongji Chen, Jianxun Zhang*, Tianze Yu, Yingzhu Zeng, Huan Zeng

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-20, 2026, DOI:10.32604/cmc.2025.067041 - 09 December 2025

    Abstract Low-light image enhancement aims to improve the visibility of severely degraded images captured under insufficient illumination, alleviating the adverse effects of illumination degradation on image quality. Traditional Retinex-based approaches, inspired by human visual perception of brightness and color, decompose an image into illumination and reflectance components to restore fine details. However, their limited capacity for handling noise and complex lighting conditions often leads to distortions and artifacts in the enhanced results, particularly under extreme low-light scenarios. Although deep learning methods built upon Retinex theory have recently advanced the field, most still suffer from insufficient interpretability…
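The Retinex decomposition this abstract builds on can be illustrated with a minimal single-scale sketch. This is a generic textbook version, not the paper's RetinexWT method (which adds wavelet-domain processing): illumination is estimated with a Gaussian blur and reflectance is recovered as the log-domain difference.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def single_scale_retinex(img, sigma=15.0):
    """Classic single-scale Retinex: estimate illumination with a Gaussian
    blur and take the log-domain difference as the reflectance."""
    img = img.astype(np.float64) + 1.0  # avoid log(0)
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    # Separable blur: convolve rows, then columns.
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return np.log(img) - np.log(blurred + 1.0)
```

Deep Retinex methods like the one described replace the fixed Gaussian with learned decomposition networks, but the illumination/reflectance split is the same.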

  • Open Access

    ARTICLE

    M2ATNet: Multi-Scale Multi-Attention Denoising and Feature Fusion Transformer for Low-Light Image Enhancement

    Zhongliang Wei*, Jianlong An, Chang Su

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-20, 2026, DOI:10.32604/cmc.2025.069335 - 10 November 2025

    Abstract Images taken in dim environments frequently exhibit issues like insufficient brightness, noise, color shifts, and loss of detail. These problems pose significant challenges to dark image enhancement tasks. Current approaches, while effective in global illumination modeling, often struggle to simultaneously suppress noise and preserve structural details, especially under heterogeneous lighting. Furthermore, misalignment between luminance and color channels introduces additional challenges to accurate enhancement. In response to the aforementioned difficulties, we introduce a single-stage framework, M2ATNet, using the multi-scale multi-attention and Transformer architecture. First, to address the problems of texture blurring and residual noise, we design…
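As a generic illustration of the multi-scale idea, not M2ATNet's actual architecture, a feature pyramid that halves resolution per level can be sketched as:

```python
import numpy as np

def downsample(img):
    """2x downsample by 2x2 block averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def pyramid(img, levels=3):
    """Multi-scale pyramid: each level halves spatial resolution, letting a
    network attend to structure at several scales at once."""
    out = [img]
    for _ in range(levels - 1):
        out.append(downsample(out[-1]))
    return out
```

Multi-scale networks process such levels in parallel and fuse the results, which is what lets coarse illumination and fine texture be handled together.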

  • Open Access

    ARTICLE

    Unsupervised Satellite Low-Light Image Enhancement Based on the Improved Generative Adversarial Network

    Ming Chen*, Yanfei Niu, Ping Qi, Fucheng Wang

    CMC-Computers, Materials & Continua, Vol.85, No.3, pp. 5015-5035, 2025, DOI:10.32604/cmc.2025.067951 - 23 October 2025

    Abstract This research addresses the critical challenge of enhancing satellite images captured under low-light conditions, which suffer from severely degraded quality, including a lack of detail, poor contrast, and low usability. Overcoming this limitation is essential for maximizing the value of satellite imagery in downstream computer vision tasks (e.g., spacecraft on-orbit connection, spacecraft surface repair, space debris capture) that rely on clear visual information. Our key novelty lies in an unsupervised generative adversarial network featuring two main contributions: (1) an improved U-Net (IU-Net) generator with multi-scale feature fusion in the contracting path for richer semantic feature…
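The skip-connection fusion a U-Net-style generator relies on can be sketched generically; the shapes and nearest-neighbor upsampling below are illustrative, not the IU-Net design:

```python
import numpy as np

def upsample2x(feat):
    """Nearest-neighbor 2x upsampling of a (C, H, W) feature map."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def skip_fuse(enc, dec):
    """U-Net skip: upsample the coarse decoder features and concatenate them
    with same-resolution encoder features along the channel axis."""
    return np.concatenate([enc, upsample2x(dec)], axis=0)
```

In a full generator a convolution follows the concatenation; the skip itself is what carries fine spatial detail from the contracting path into the reconstruction.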

  • Open Access

    ARTICLE

    You KAN See through the Sand in the Dark: Uncertainty-Aware Meets KAN in Joint Low-Light Image Enhancement and Sand-Dust Removal

    Bingcai Wei, Hui Liu*, Chuang Qian, Haoliang Shen, Yibiao Chen, Yixin Wang

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 5095-5109, 2025, DOI:10.32604/cmc.2025.065812 - 30 July 2025

    Abstract Within the domain of low-level vision, enhancing low-light images and removing sand-dust from single images are both critical tasks. These challenges are particularly pronounced in real-world applications such as autonomous driving, surveillance systems, and remote sensing, where adverse lighting and environmental conditions often degrade image quality. Various neural network models, including MLPs, CNNs, GANs, and Transformers, have been proposed to tackle these challenges, with the Vision KAN models showing particular promise. However, existing models, including the Vision KAN models, use deterministic neural networks that do not address the uncertainties inherent in these processes. To overcome…
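The move from deterministic outputs to uncertainty-aware ones can be illustrated in the simplest possible way, a plain ensemble-variance sketch, which is not the paper's KAN-based formulation:

```python
import numpy as np

def predict_with_uncertainty(x, models):
    """Run several predictors on the same input and return the mean output
    together with a per-pixel variance map as an uncertainty estimate."""
    preds = np.stack([m(x) for m in models])
    return preds.mean(axis=0), preds.var(axis=0)
```

Where the predictors disagree, the variance map is large, and a downstream loss or fusion rule can down-weight those pixels.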

  • Open Access

    ARTICLE

    LLE-Fuse: Lightweight Infrared and Visible Light Image Fusion Based on Low-Light Image Enhancement

    Song Qian, Guzailinuer Yiming, Ping Li, Junfei Yang, Yan Xue, Shuping Zhang*

    CMC-Computers, Materials & Continua, Vol.82, No.3, pp. 4069-4091, 2025, DOI:10.32604/cmc.2025.059931 - 06 March 2025

    Abstract Infrared and visible light image fusion technology integrates feature information from two different modalities into a fused image to obtain more comprehensive information. However, in low-light scenarios, the illumination degradation of visible light images makes it difficult for existing fusion methods to extract texture detail information from the scene. At this time, relying solely on the target saliency information provided by infrared images is far from sufficient. To address this challenge, this paper proposes a lightweight infrared and visible light image fusion method based on low-light enhancement, named LLE-Fuse. The method is based on the…
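A pixel-wise weighted blend conveys the basic fusion idea. This is a toy stand-in: LLE-Fuse learns its fusion and first enhances the low-light visible input, neither of which the sketch attempts.

```python
import numpy as np

def fuse_ir_visible(ir, vis, eps=1e-6):
    """Intensity-weighted fusion: at each pixel, the brighter (more salient)
    source contributes more to the fused result."""
    ir = ir.astype(np.float64)
    vis = vis.astype(np.float64)
    w_ir = ir / (ir + vis + eps)
    return w_ir * ir + (1.0 - w_ir) * vis
```

The fused value always lies between the two inputs, which is why such blends preserve infrared saliency while keeping visible-light texture where it exists.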

  • Open Access

    ARTICLE

    A Transformer Network Combing CBAM for Low-Light Image Enhancement

    Zhefeng Sun*, Chen Wang

    CMC-Computers, Materials & Continua, Vol.82, No.3, pp. 5205-5220, 2025, DOI:10.32604/cmc.2025.059669 - 06 March 2025

    Abstract Recently, a multitude of techniques that fuse deep learning with Retinex theory have been utilized in the field of low-light image enhancement, yielding remarkable outcomes. Due to the intricate nature of imaging scenarios, including fluctuating noise levels and unpredictable environmental elements, these techniques do not fully resolve these challenges. We introduce an innovative strategy that builds upon Retinex theory and integrates a novel deep network architecture, merging the Convolutional Block Attention Module (CBAM) with the Transformer. Our model is capable of detecting more prominent features across both channel and spatial domains. We have conducted extensive…
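CBAM's two stages, channel attention then spatial attention, can be sketched in NumPy. The structure follows the published CBAM design, but the sigmoid-of-sum spatial gate below simplifies the 7x7 convolution CBAM actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """CBAM channel attention: average- and max-pooled descriptors pass
    through a shared two-layer MLP, are summed, and gate the channels."""
    avg = feat.mean(axis=(1, 2))                    # (C,)
    mx = feat.max(axis=(1, 2))                      # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)    # shared MLP with ReLU
    gate = sigmoid(mlp(avg) + mlp(mx))              # (C,) in (0, 1)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    """CBAM-style spatial attention: channel-wise mean and max maps gate
    each spatial location (simplified; CBAM convolves them with a 7x7)."""
    gate = sigmoid(feat.mean(axis=0) + feat.max(axis=0))
    return feat * gate[None, :, :]
```

Because both gates lie in (0, 1), attention can only re-weight features, never amplify them, which is what makes the module cheap to bolt onto an existing backbone.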

  • Open Access

    ARTICLE

    Unsupervised Low-Light Image Enhancement Based on Explicit Denoising and Knowledge Distillation

    Wenkai Zhang, Hao Zhang, Xianming Liu, Xiaoyu Guo, Xinzhe Wang, Shuiwang Li*

    CMC-Computers, Materials & Continua, Vol.82, No.2, pp. 2537-2554, 2025, DOI:10.32604/cmc.2024.059000 - 17 February 2025

    Abstract Under low-illumination conditions, the quality of image signals deteriorates significantly, typically characterized by a peak signal-to-noise ratio (PSNR) below 10 dB, which severely limits the usability of the images. Supervised methods, which utilize paired high-low light images as training sets, can enhance the PSNR to around 20 dB, significantly improving image quality. However, such data is challenging to obtain. In recent years, unsupervised low-light image enhancement (LIE) methods based on the Retinex framework have been proposed, but they generally lag behind supervised methods by 5–10 dB in performance. In this paper, we introduce the Denoising-Distilled…
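The PSNR figures the abstract cites (below 10 dB raw, around 20 dB after supervised enhancement) follow the standard definition, 10·log10(peak²/MSE):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Each 10 dB corresponds to a 10x reduction in mean squared error, so the 5–10 dB gap the abstract mentions is a 3x–10x difference in MSE.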

  • Open Access

    ARTICLE

    Retinexformer+: Retinex-Based Dual-Channel Transformer for Low-Light Image Enhancement

    Song Liu, Hongying Zhang*, Xue Li, Xi Yang

    CMC-Computers, Materials & Continua, Vol.82, No.2, pp. 1969-1984, 2025, DOI:10.32604/cmc.2024.057662 - 17 February 2025

    Abstract Enhancing low-light images with color distortion and uneven multi-light source distribution presents challenges. Most advanced methods for low-light image enhancement are based on the Retinex model using deep learning. Retinexformer introduces channel self-attention mechanisms in the IG-MSA. However, it fails to effectively capture long-range spatial dependencies, leaving room for improvement. Based on the Retinexformer deep learning framework, we designed the Retinexformer+ network. The “+” signifies our advancements in extracting long-range spatial dependencies. We introduced multi-scale dilated convolutions in illumination estimation to expand the receptive field. These convolutions effectively capture the weakening semantic dependency between pixels…
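Why dilated convolutions expand the receptive field follows from the standard accounting below; the layer configurations in the test are illustrative, not Retinexformer+'s:

```python
def receptive_field(layers):
    """Receptive field of stacked convolutions.
    layers: list of (kernel_size, dilation, stride) tuples.
    Each layer adds (kernel_size - 1) * dilation pixels, scaled by the
    cumulative stride of the layers before it."""
    rf, jump = 1, 1
    for k, d, s in layers:
        rf += (k - 1) * d * jump
        jump *= s
    return rf
```

Three 3x3 layers with dilations 1, 2, 4 reach a 15-pixel receptive field at the cost of three plain 3x3 layers, which only reach 7, which is the usual argument for dilation in illumination estimation.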

  • Open Access

    ARTICLE

    RF-Net: Unsupervised Low-Light Image Enhancement Based on Retinex and Exposure Fusion

    Tian Ma, Chenhui Fu*, Jiayi Yang, Jiehui Zhang, Chuyang Shang

    CMC-Computers, Materials & Continua, Vol.77, No.1, pp. 1103-1122, 2023, DOI:10.32604/cmc.2023.042416 - 31 October 2023

    Abstract Low-light image enhancement methods have limitations in addressing issues such as color distortion, lack of vibrancy, and uneven light distribution and often require paired training data. To address these issues, we propose a two-stage unsupervised low-light image enhancement algorithm called Retinex and Exposure Fusion Network (RF-Net), which can overcome the problems of over-enhancement of the high dynamic range and under-enhancement of the low dynamic range in existing enhancement algorithms. This algorithm can better manage the challenges brought about by complex environments in real-world scenarios by training with unpaired low-light images and regular-light images. In the…
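The exposure-fusion half of such a pipeline can be sketched in a Mertens-style way. The gamma values and Gaussian well-exposedness weight are illustrative defaults; RF-Net's actual fusion is learned:

```python
import numpy as np

def exposure_fuse(img, gammas=(0.5, 1.0, 2.0), sigma=0.2):
    """Exposure-fusion sketch: synthesize several exposures of one image in
    [0, 1] via gamma curves, weight each pixel by how close it is to mid-gray
    (well-exposedness), and blend with normalized weights."""
    stack = [np.clip(img, 0.0, 1.0) ** g for g in gammas]
    weights = [np.exp(-((e - 0.5) ** 2) / (2 * sigma ** 2)) for e in stack]
    wsum = np.sum(weights, axis=0) + 1e-12
    return sum(w * e for w, e in zip(weights, stack)) / wsum
```

Because the result is a per-pixel convex combination of the exposures, bright regions borrow from the darker rendering and dark regions from the brighter one, which is exactly the over-/under-enhancement trade-off the abstract describes.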
