Open Access

ARTICLE

LLE-Fuse: Lightweight Infrared and Visible Light Image Fusion Based on Low-Light Image Enhancement

Song Qian, Guzailinuer Yiming, Ping Li, Junfei Yang, Yan Xue, Shuping Zhang*

Faculty of Information Engineering, Xinjiang Institute of Technology, Aksu, 843100, China

* Corresponding Author: Shuping Zhang.

Computers, Materials & Continua 2025, 82(3), 4069-4091. https://doi.org/10.32604/cmc.2025.059931

Abstract

Infrared and visible light image fusion integrates feature information from two different modalities into a single fused image to obtain more comprehensive scene information. In low-light scenarios, however, the illumination degradation of visible light images makes it difficult for existing fusion methods to extract texture detail from the scene, and relying solely on the target saliency information provided by infrared images is far from sufficient. To address this challenge, this paper proposes a lightweight infrared and visible light image fusion method based on low-light enhancement, named LLE-Fuse. The method builds on an improved MobileOne Block: an Edge-MobileOne Block with an embedded Sobel operator performs feature extraction and downsampling on the source images, and the resulting multi-scale intermediate features are fused by a cross-modal attention fusion module. In addition, the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm is applied to both the infrared and visible light images, guiding the network to learn low-light enhancement capabilities through an enhancement loss. After training, the Edge-MobileOne Block is converted by structural reparameterization into a direct-connection structure similar to MobileNetV1, effectively reducing computational resource consumption. In extensive experimental comparisons, our method achieved improvements of 4.6%, 40.5%, 156.9%, 9.2%, and 98.6% in the evaluation metrics Standard Deviation (SD), Visual Information Fidelity (VIF), Entropy (EN), and Spatial Frequency (SF), respectively, over the best results of the compared algorithms, while running only 1.5 ms/it slower than the fastest method.
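As a point of reference for the CLAHE step mentioned above, the following is a minimal sketch (not the authors' code) of applying Contrast Limited Adaptive Histogram Equalization to both modalities with OpenCV's `createCLAHE`; the clip limit and tile grid size are illustrative assumptions, and the file names are placeholders.

```python
# Minimal CLAHE sketch using OpenCV; parameter values are assumptions, not the
# settings used in the paper.
import cv2

def clahe_enhance(gray_image, clip_limit=2.0, tile_grid=(8, 8)):
    """Apply CLAHE to a single-channel 8-bit image (infrared or visible luminance)."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(gray_image)

# Enhance both source modalities, e.g., to serve as targets for an enhancement loss.
ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)    # infrared image (placeholder path)
vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE)  # visible-light image (placeholder path)
ir_enh, vis_enh = clahe_enhance(ir), clahe_enhance(vis)
```

How the enhanced images enter the enhancement loss, and the exact design of the Edge-MobileOne Block and cross-modal attention fusion module, follow the paper itself rather than this sketch.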

Keywords

Infrared images; image fusion; low-light enhancement; feature extraction; computational resource optimization

Cite This Article

APA Style
Qian, S., Yiming, G., Li, P., Yang, J., Xue, Y., & Zhang, S. (2025). LLE-Fuse: lightweight infrared and visible light image fusion based on low-light image enhancement. Computers, Materials & Continua, 82(3), 4069–4091. https://doi.org/10.32604/cmc.2025.059931
Vancouver Style
Qian S, Yiming G, Li P, Yang J, Xue Y, Zhang S. LLE-Fuse: lightweight infrared and visible light image fusion based on low-light image enhancement. Comput Mater Contin. 2025;82(3):4069–4091. https://doi.org/10.32604/cmc.2025.059931
IEEE Style
S. Qian, G. Yiming, P. Li, J. Yang, Y. Xue, and S. Zhang, “LLE-Fuse: Lightweight Infrared and Visible Light Image Fusion Based on Low-Light Image Enhancement,” Comput. Mater. Contin., vol. 82, no. 3, pp. 4069–4091, 2025. https://doi.org/10.32604/cmc.2025.059931



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.