
Search Results (35)
  • Open Access

    ARTICLE

    M2ATNet: Multi-Scale Multi-Attention Denoising and Feature Fusion Transformer for Low-Light Image Enhancement

    Zhongliang Wei*, Jianlong An, Chang Su

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-20, 2026, DOI:10.32604/cmc.2025.069335 - 10 November 2025

    Abstract Images taken in dim environments frequently exhibit issues like insufficient brightness, noise, color shifts, and loss of detail. These problems pose significant challenges to dark image enhancement tasks. Current approaches, while effective in global illumination modeling, often struggle to simultaneously suppress noise and preserve structural details, especially under heterogeneous lighting. Furthermore, misalignment between luminance and color channels introduces additional challenges to accurate enhancement. In response to the aforementioned difficulties, we introduce a single-stage framework, M2ATNet, using the multi-scale multi-attention and Transformer architecture. First, to address the problems of texture blurring and residual noise, we design…
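The truncated abstract names a Transformer-based architecture without implementation details. For orientation only, the scaled dot-product attention at the core of any Transformer (not the paper's specific multi-scale multi-attention design) can be sketched in a few lines:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)             # rows sum to 1
    return w @ V
```

With sharply matching queries and keys the weights become near one-hot, so the output approximately selects the corresponding value rows.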

  • Open Access

    ARTICLE

    The Research on Low-Light Autonomous Driving Object Detection Method

    Jianhua Yang*, Zhiwei Lv, Changling Huo

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-18, 2026, DOI:10.32604/cmc.2025.068442 - 10 November 2025

    Abstract To address the poor scale adaptation of autonomous driving object detection algorithms in low-illumination environments and their shortcomings in handling occluded targets, this paper proposes a YOLO-LKSDS autonomous driving detection model. Firstly, the Contrast-Limited Adaptive Histogram Equalisation (CLAHE) image enhancement algorithm is improved to increase image contrast and enhance the detailed features of the target; then, on the basis of the YOLOv5 model, the Kmeans++ clustering algorithm is introduced to obtain suitable anchor boxes, and SPPELAN spatial pyramid pooling is improved to enhance the accuracy and robustness of the model for multi-scale target…
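The snippet names Kmeans++ anchor clustering but gives no procedure. A minimal sketch of the common YOLO-style recipe, clustering box widths/heights with 1 − IoU as the distance (an assumption on our part; the paper's exact variant is not shown), might look like:

```python
import numpy as np

def iou_wh(boxes, anchors):
    # IoU between boxes and anchors compared by width/height only
    # boxes: (N, 2), anchors: (K, 2) -> (N, K)
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeanspp_anchors(boxes, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    # Kmeans++ seeding: first anchor uniform, later ones weighted by 1 - IoU
    anchors = [boxes[rng.integers(len(boxes))]]
    for _ in range(k - 1):
        d = 1.0 - iou_wh(boxes, np.array(anchors)).max(axis=1)
        anchors.append(boxes[rng.choice(len(boxes), p=d / d.sum())])
    anchors = np.array(anchors, dtype=float)
    # Lloyd iterations: assign each box to its highest-IoU anchor, re-average
    for _ in range(iters):
        assign = iou_wh(boxes, anchors).argmax(axis=1)
        for j in range(k):
            if (assign == j).any():
                anchors[j] = boxes[assign == j].mean(axis=0)
    return anchors[np.argsort(anchors.prod(axis=1))]   # sort by area
```

Using 1 − IoU rather than Euclidean distance keeps large and small boxes from being clustered purely by absolute size differences.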

  • Open Access

    ARTICLE

    Unsupervised Satellite Low-Light Image Enhancement Based on the Improved Generative Adversarial Network

    Ming Chen*, Yanfei Niu, Ping Qi, Fucheng Wang

    CMC-Computers, Materials & Continua, Vol.85, No.3, pp. 5015-5035, 2025, DOI:10.32604/cmc.2025.067951 - 23 October 2025

    Abstract This research addresses the critical challenge of enhancing satellite images captured under low-light conditions, which suffer from severely degraded quality, including a lack of detail, poor contrast, and low usability. Overcoming this limitation is essential for maximizing the value of satellite imagery in downstream computer vision tasks (e.g., spacecraft on-orbit connection, spacecraft surface repair, space debris capture) that rely on clear visual information. Our key novelty lies in an unsupervised generative adversarial network featuring two main contributions: (1) an improved U-Net (IU-Net) generator with multi-scale feature fusion in the contracting path for richer semantic feature…

  • Open Access

    ARTICLE

    Image Enhancement Combined with LLM Collaboration for Low-Contrast Image Character Recognition

    Qin Qin, Xuan Jiang*, Jinhua Jiang, Dongfang Zhao, Zimei Tu, Zhiwei Shen

    CMC-Computers, Materials & Continua, Vol.85, No.3, pp. 4849-4867, 2025, DOI:10.32604/cmc.2025.067919 - 23 October 2025

    Abstract The effectiveness of industrial character recognition on cast steel is often compromised by factors such as corrosion, surface defects, and low contrast, which hinder the extraction of reliable visual information. The problem is further compounded by the scarcity of large-scale annotated datasets and complex noise patterns in real-world factory environments. This makes conventional OCR techniques and standard deep learning models unreliable. To address these limitations, this study proposes a unified framework that integrates adaptive image preprocessing with collaborative reasoning among LLMs. A Biorthogonal 4.4 (bior4.4) wavelet transform is adaptively tuned using DE to enhance character…
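The abstract's DE-based wavelet tuning is not spelled out. Assuming DE denotes differential evolution, a minimal DE/rand/1/bin optimizer (shown minimizing a stand-in sphere objective, not the paper's character-contrast criterion) could be sketched as:

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=100, F=0.8, CR=0.9, seed=0):
    # DE/rand/1/bin: mutate with a scaled difference of two random members,
    # then binomial crossover against the current individual.
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            mutant = np.clip(X[a] + F * (X[b] - X[c]), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True    # guarantee at least one gene crosses
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft < fit[i]:                    # greedy selection
                X[i], fit[i] = trial, ft
    return X[fit.argmin()], fit.min()

# Example: minimize the sphere function over [-5, 5]^2
best_x, best_f = differential_evolution(lambda x: float(np.sum(x**2)),
                                        [(-5, 5), (-5, 5)])
```

In the paper's setting, `f` would presumably score OCR-relevant contrast of the wavelet-enhanced image, with the search vector holding the tunable wavelet parameters.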

  • Open Access

    ARTICLE

    You KAN See through the Sand in the Dark: Uncertainty-Aware Meets KAN in Joint Low-Light Image Enhancement and Sand-Dust Removal

    Bingcai Wei, Hui Liu*, Chuang Qian, Haoliang Shen, Yibiao Chen, Yixin Wang

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 5095-5109, 2025, DOI:10.32604/cmc.2025.065812 - 30 July 2025

    Abstract Within the domain of low-level vision, enhancing low-light images and removing sand-dust from single images are both critical tasks. These challenges are particularly pronounced in real-world applications such as autonomous driving, surveillance systems, and remote sensing, where adverse lighting and environmental conditions often degrade image quality. Various neural network models, including MLPs, CNNs, GANs, and Transformers, have been proposed to tackle these challenges, with the Vision KAN models showing particular promise. However, existing models, including the Vision KAN models, use deterministic neural networks that do not address the uncertainties inherent in these processes. To overcome…

  • Open Access

    ARTICLE

    Enhancing Military Visual Communication in Harsh Environments Using Computer Vision Techniques

    Shitharth Selvarajan*, Hariprasath Manoharan, Taher Al-Shehari, Nasser A Alsadhan, Subhav Singh

    CMC-Computers, Materials & Continua, Vol.84, No.2, pp. 3541-3557, 2025, DOI:10.32604/cmc.2025.064394 - 03 July 2025

    Abstract This research investigates the application of digital images in military contexts by utilizing analytical equations to augment human visual capabilities. A comparable filter is used to improve the visual quality of the photographs by reducing truncations in the existing images. Furthermore, the collected images undergo processing using histogram gradients and a flexible threshold value that may be adjusted in specific situations. Thus, it is possible to reduce the occurrence of overlapping circumstances in collective picture characteristics by substituting grey-scale photos with colorized factors. The proposed method offers additional robust feature representations by imposing a limiting…

  • Open Access

    ARTICLE

    A Low Light Image Enhancement Method Based on Dehazing Physical Model

    Wencheng Wang*, Baoxin Yin, Lei Li*, Lun Li, Hongtao Liu

    CMES-Computer Modeling in Engineering & Sciences, Vol.143, No.2, pp. 1595-1616, 2025, DOI:10.32604/cmes.2025.063595 - 30 May 2025

    Abstract In low-light environments, captured images often exhibit issues such as insufficient clarity and detail loss, which significantly degrade the accuracy of subsequent target recognition tasks. To tackle these challenges, this study presents a novel low-light image enhancement algorithm that leverages virtual hazy image generation through dehazing models based on statistical analysis. The proposed algorithm initiates the enhancement process by transforming the low-light image into a virtual hazy image, followed by image segmentation using a quadtree method. To improve the accuracy and robustness of atmospheric light estimation, the algorithm incorporates a genetic algorithm to optimize the…
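The truncated abstract describes converting a low-light image into a virtual hazy image and dehazing it. A simplified sketch of that inversion-plus-dark-channel idea (using a plain brightest-pixel atmospheric-light estimate in place of the paper's quadtree/genetic-algorithm refinement) is:

```python
import numpy as np

def enhance_low_light(img, omega=0.8, t_min=0.1, patch=7):
    # Invert the low-light image: the inverse resembles a hazy image,
    # which a dark-channel dehazing step can clean up before inverting back.
    hazy = 1.0 - img                                # img in [0, 1], shape (H, W, 3)
    # Dark channel: per-pixel channel minimum, then a local minimum filter
    dark = hazy.min(axis=2)
    H, W = dark.shape
    pad = patch // 2
    padded = np.pad(dark, pad, mode='edge')
    dark_f = np.stack([padded[i:i + H, j:j + W]
                       for i in range(patch) for j in range(patch)]).min(axis=0)
    # Atmospheric light: mean color of the brightest ~0.1% dark-channel pixels
    idx = np.argsort(dark_f.ravel())[-max(1, H * W // 1000):]
    A = hazy.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimate (simplified: dark channel scaled by max(A))
    t = np.clip(1.0 - omega * dark_f / A.max(), t_min, 1.0)
    dehazed = (hazy - A) / t[..., None] + A         # scene-radiance recovery
    return np.clip(1.0 - dehazed, 0.0, 1.0)        # invert back
```

The effect is strongest on non-uniform dark regions; uniform areas, where the inverse image has no "haze" gradient to remove, pass through largely unchanged.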

  • Open Access

    ARTICLE

    LLE-Fuse: Lightweight Infrared and Visible Light Image Fusion Based on Low-Light Image Enhancement

    Song Qian, Guzailinuer Yiming, Ping Li, Junfei Yang, Yan Xue, Shuping Zhang*

    CMC-Computers, Materials & Continua, Vol.82, No.3, pp. 4069-4091, 2025, DOI:10.32604/cmc.2025.059931 - 06 March 2025

    Abstract Infrared and visible light image fusion technology integrates feature information from two different modalities into a fused image to obtain more comprehensive information. However, in low-light scenarios, the illumination degradation of visible light images makes it difficult for existing fusion methods to extract texture detail information from the scene. At this time, relying solely on the target saliency information provided by infrared images is far from sufficient. To address this challenge, this paper proposes a lightweight infrared and visible light image fusion method based on low-light enhancement, named LLE-Fuse. The method is based on the…
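The fusion step itself is not detailed in the snippet. A generic two-scale base/detail fusion of registered grayscale visible and infrared images (an illustration of the task, not LLE-Fuse's actual network) can be sketched as:

```python
import numpy as np

def fuse_images(vis, ir, k=9):
    # Two-scale fusion sketch: average the low-frequency base layers,
    # keep the stronger of the two detail layers at each pixel.
    kern = np.ones(k) / k
    def blur(x):  # separable mean filter
        x = np.apply_along_axis(lambda r: np.convolve(r, kern, mode='same'), 0, x)
        return np.apply_along_axis(lambda r: np.convolve(r, kern, mode='same'), 1, x)
    base_v, base_i = blur(vis), blur(ir)
    det_v, det_i = vis - base_v, ir - base_i
    detail = np.where(np.abs(det_v) >= np.abs(det_i), det_v, det_i)
    return np.clip(0.5 * (base_v + base_i) + detail, 0.0, 1.0)
```

Picking the larger-magnitude detail coefficient is what lets, e.g., an infrared hot target survive into the fused result even when the visible channel is flat there.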

  • Open Access

    ARTICLE

    An Advanced Bald Eagle Search Algorithm for Image Enhancement

    Pei Hu, Yibo Han, Jeng-Shyang Pan*

    CMC-Computers, Materials & Continua, Vol.82, No.3, pp. 4485-4501, 2025, DOI:10.32604/cmc.2024.059773 - 06 March 2025

    Abstract Image enhancement utilizes intensity transformation functions to maximize the information content of enhanced images. This paper approaches the topic as an optimization problem and uses the bald eagle search (BES) algorithm to achieve optimal results. In our proposed model, gamma correction and Retinex address color cast issues and enhance image edges and details. The final enhanced image is obtained through color balancing. The BES algorithm seeks the optimal solution through the selection, search, and swooping stages. However, it is prone to getting stuck in local optima and converges slowly. To overcome these limitations, we propose…
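The abstract names gamma correction and Retinex as the enhancement operators that BES tunes. A minimal sketch of those two operators alone (the BES optimization loop is omitted, and the parameter values here are arbitrary placeholders) might be:

```python
import numpy as np

def gamma_correct(img, gamma=0.6):
    # Power-law transform; gamma < 1 brightens dark regions (img in [0, 1])
    return np.clip(img, 0.0, 1.0) ** gamma

def single_scale_retinex(img, sigma=3.0, eps=1e-6):
    # Retinex: log(image) minus log(Gaussian-smoothed illumination estimate)
    k = int(6 * sigma) | 1                       # odd kernel size ~ 6 sigma
    ax = np.arange(k) - k // 2
    g1 = np.exp(-ax**2 / (2 * sigma**2))
    g1 /= g1.sum()
    # separable Gaussian blur, applied per channel
    blur = np.apply_along_axis(lambda r: np.convolve(r, g1, mode='same'), 0, img)
    blur = np.apply_along_axis(lambda r: np.convolve(r, g1, mode='same'), 1, blur)
    return np.log(img + eps) - np.log(blur + eps)
```

In an optimization framing like the paper's, the search algorithm would pick `gamma`, `sigma`, and blending weights to maximize an information measure of the result.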

  • Open Access

    ARTICLE

    A Transformer Network Combing CBAM for Low-Light Image Enhancement

    Zhefeng Sun*, Chen Wang

    CMC-Computers, Materials & Continua, Vol.82, No.3, pp. 5205-5220, 2025, DOI:10.32604/cmc.2025.059669 - 06 March 2025

    Abstract Recently, a multitude of techniques that fuse deep learning with Retinex theory have been utilized in the field of low-light image enhancement, yielding remarkable outcomes. Due to the intricate nature of imaging scenarios, including fluctuating noise levels and unpredictable environmental elements, these techniques do not fully resolve these challenges. We introduce an innovative strategy that builds upon Retinex theory and integrates a novel deep network architecture, merging the Convolutional Block Attention Module (CBAM) with the Transformer. Our model is capable of detecting more prominent features across both channel and spatial domains. We have conducted extensive…
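CBAM's channel-then-spatial attention can be sketched compactly. This toy version uses random, untrained weights and, for brevity, replaces CBAM's 7×7 spatial convolution with a simple 1×1 mixing of the pooled maps:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam(x, W1, W2, w_spatial):
    # x: feature map (C, H, W); W1/W2 form the shared channel MLP,
    # w_spatial mixes the two pooled spatial maps (weights assumed trained elsewhere).
    C, H, W = x.shape
    # --- Channel attention: shared MLP over avg- and max-pooled descriptors
    avg = x.mean(axis=(1, 2))                              # (C,)
    mx = x.max(axis=(1, 2))                                # (C,)
    att_c = sigmoid(W2 @ np.maximum(0, W1 @ avg) + W2 @ np.maximum(0, W1 @ mx))
    x = x * att_c[:, None, None]
    # --- Spatial attention: mix stacked channel-avg and channel-max maps
    maps = np.stack([x.mean(axis=0), x.max(axis=0)])       # (2, H, W)
    att_s = sigmoid(np.tensordot(w_spatial, maps, axes=1))  # (H, W)
    return x * att_s[None, :, :]
```

Both attention maps lie in (0, 1), so the module only rescales features: it can suppress but never amplify a response's magnitude.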

Displaying results 1-10 of 35 (page 1).