Search Results (12)
  • Open Access

    ARTICLE

    FDEFusion: End-to-End Infrared and Visible Image Fusion Method Based on Frequency Decomposition and Enhancement

    Ming Chen1,*, Guoqiang Ma2, Ping Qi1, Fucheng Wang1, Lin Shen3, Xiaoya Pi1

    CMC-Computers, Materials & Continua, Vol.87, No.1, 2026, DOI:10.32604/cmc.2025.072623 - 10 February 2026

    Abstract In the field of image fusion, effectively fusing infrared images (IRIs) and visible images (VIs) is a key problem. The differences between IRIs and VIs make it challenging to fuse both types into a single high-quality image; accordingly, efficiently combining the advantages of both images while overcoming their shortcomings is necessary. To handle this challenge, we developed an end-to-end IRI and VI fusion method based on frequency decomposition and enhancement. Applying concepts from frequency-domain analysis, we used a layering mechanism to better capture the salient thermal targets in the IRIs and the rich textural information…
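    The abstract is truncated and gives no implementation details, but the base/detail layering idea behind frequency-decomposition fusion can be illustrated generically. The sketch below is not the authors' FDEFusion network; every function name and the choice of a box filter and max-absolute detail rule are assumptions made for illustration only:

```python
import numpy as np

def box_blur(img, r=4):
    """Mean filter of radius r via an integral image (edge-padded)."""
    k = 2 * r + 1
    p = np.pad(img, r, mode="edge")
    # Integral image with a leading zero row/column for clean window sums.
    ii = np.pad(np.cumsum(np.cumsum(p, axis=0), axis=1), ((1, 0), (1, 0)))
    h, w = img.shape
    return (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
            - ii[k:k + h, :w] + ii[:h, :w]) / (k * k)

def decompose(img, r=4):
    """Split an image into a low-frequency base and a high-frequency detail layer."""
    base = box_blur(img, r)
    return base, img - base

def fuse(ir, vis, r=4):
    """Average the base layers; keep the stronger detail at each pixel."""
    b_ir, d_ir = decompose(ir, r)
    b_vis, d_vis = decompose(vis, r)
    detail = np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)
    return 0.5 * (b_ir + b_vis) + detail
```

    A quick sanity check for any decomposition-based rule of this shape: fusing an image with itself returns the image unchanged, since base + detail reconstructs the input exactly.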

  • Open Access

    ARTICLE

    PMCFusion: A Parallel Multi-Dimensional Complementary Network for Infrared and Visible Image Fusion

    Xu Tao1, Qiang Xiao2, Zhaoqi Jin2, Hao Li1,*

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-18, 2026, DOI:10.32604/cmc.2025.070790 - 09 December 2025

    Abstract Image fusion technology aims to generate a more informative single image by integrating complementary information from multi-modal images. Despite the significant progress of deep learning-based fusion methods, existing algorithms are often limited to single or dual-dimensional feature interactions, thus struggling to fully exploit the profound complementarity between multi-modal images. To address this, this paper proposes a parallel multi-dimensional complementary fusion network, termed PMCFusion, for the task of infrared and visible image fusion. The core of this method is its unique parallel three-branch fusion module, PTFM, which pioneers the parallel synergistic perception and efficient integration of…

  • Open Access

    ARTICLE

    Transformer-Based Fusion of Infrared and Visible Imagery for Smoke Recognition in Commercial Areas

    Chongyang Wang1, Qiongyan Li1, Shu Liu2, Pengle Cheng1,*, Ying Huang3

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 5157-5176, 2025, DOI:10.32604/cmc.2025.067367 - 30 July 2025

    Abstract With rapid urbanization, fires pose significant challenges in urban governance. Traditional fire detection methods often struggle to detect smoke in complex urban scenes due to environmental interference and variations in viewing angle. This study proposes a novel multimodal smoke detection method that fuses infrared and visible imagery using a transformer-based deep learning model. By capturing both thermal and visual cues, our approach significantly enhances the accuracy and robustness of smoke detection in business-park scenes. We first established a dual-view dataset comprising infrared and visible-light videos, implemented an innovative image feature fusion strategy, and…

  • Open Access

    ARTICLE

    A Mask-Guided Latent Low-Rank Representation Method for Infrared and Visible Image Fusion

    Kezhen Xie1,2, Syed Mohd Zahid Syed Zainal Ariffin1,*, Muhammad Izzad Ramli1

    CMC-Computers, Materials & Continua, Vol.84, No.1, pp. 997-1011, 2025, DOI:10.32604/cmc.2025.063469 - 09 June 2025

    Abstract Infrared and visible image fusion technology integrates the thermal radiation information of infrared images with the texture details of visible images to generate more informative fused images. However, existing methods often fail to distinguish salient objects from background regions, leading to detail suppression in salient regions due to global fusion strategies. This study presents a mask-guided latent low-rank representation fusion method to address this issue. First, the GrabCut algorithm is employed to extract a saliency mask, distinguishing salient regions from background regions. Then, latent low-rank representation (LatLRR) is applied to extract deep image features, enhancing…
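    The region-weighting principle behind mask-guided fusion can be shown independently of the paper's GrabCut and LatLRR stages (both omitted here, and the mask assumed precomputed). The function and parameter names below are invented for illustration and are not the paper's API:

```python
import numpy as np

def mask_guided_fuse(ir, vis, mask, w_sal=0.8):
    """Blend two aligned, same-shape grayscale images with a saliency mask.

    mask holds per-pixel weights in [0, 1]; 1 marks salient (e.g. thermal
    target) regions. Salient pixels lean toward the infrared image while
    background pixels keep the visible image's texture; w_sal controls how
    strongly infrared dominates inside the mask.
    """
    w = np.clip(mask, 0.0, 1.0) * w_sal   # per-pixel infrared weight
    return w * ir + (1.0 - w) * vis
```

    With w_sal=1.0 and a binary mask this reduces to a hard cut-and-paste of infrared targets onto the visible background; lowering w_sal, or blurring the mask beforehand, softens the seam between regions.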

  • Open Access

    ARTICLE

    LLE-Fuse: Lightweight Infrared and Visible Light Image Fusion Based on Low-Light Image Enhancement

    Song Qian, Guzailinuer Yiming, Ping Li, Junfei Yang, Yan Xue, Shuping Zhang*

    CMC-Computers, Materials & Continua, Vol.82, No.3, pp. 4069-4091, 2025, DOI:10.32604/cmc.2025.059931 - 06 March 2025

    Abstract Infrared and visible light image fusion technology integrates feature information from two different modalities into a fused image to obtain more comprehensive information. However, in low-light scenarios, the illumination degradation of visible light images makes it difficult for existing fusion methods to extract texture detail information from the scene. At this time, relying solely on the target saliency information provided by infrared images is far from sufficient. To address this challenge, this paper proposes a lightweight infrared and visible light image fusion method based on low-light enhancement, named LLE-Fuse. The method is based on the…

  • Open Access

    ARTICLE

    CAEFusion: A New Convolutional Autoencoder-Based Infrared and Visible Light Image Fusion Algorithm

    Chun-Ming Wu1, Mei-Ling Ren2,*, Jin Lei2, Zi-Mu Jiang3

    CMC-Computers, Materials & Continua, Vol.80, No.2, pp. 2857-2872, 2024, DOI:10.32604/cmc.2024.053708 - 15 August 2024

    Abstract To address the issues of incomplete information, blurred details, loss of detail, and insufficient contrast in infrared and visible image fusion, an image fusion algorithm based on a convolutional autoencoder is proposed. A region attention module extracts the background feature map by exploiting the distinct properties of the background and detail feature maps. A multi-scale convolution attention module is proposed to enhance the communication of feature information. At the same time, a feature transformation module is introduced to learn more robust feature representations, aiming to preserve the integrity of…

  • Open Access

    ARTICLE

    BDPartNet: Feature Decoupling and Reconstruction Fusion Network for Infrared and Visible Image

    Xuejie Wang1, Jianxun Zhang1,*, Ye Tao2, Xiaoli Yuan1, Yifan Guo1

    CMC-Computers, Materials & Continua, Vol.79, No.3, pp. 4621-4639, 2024, DOI:10.32604/cmc.2024.051556 - 20 June 2024

    Abstract While single-modal visible light images or infrared images provide limited information, infrared light captures significant thermal radiation data, whereas visible light excels in presenting detailed texture information. Combining images obtained from both modalities allows for leveraging their respective strengths and mitigating individual limitations, resulting in high-quality images with enhanced contrast and rich texture details. Such capabilities hold promising applications in advanced visual tasks including target detection, instance segmentation, military surveillance, pedestrian detection, among others. This paper introduces a novel approach, a dual-branch decomposition fusion network based on AutoEncoder (AE), which decomposes multi-modal features into intensity…

  • Open Access

    ARTICLE

    Infrared and Visible Image Fusion Based on Res2Net-Transformer Automatic Encoding and Decoding

    Chunming Wu1, Wukai Liu2,*, Xin Ma3

    CMC-Computers, Materials & Continua, Vol.79, No.1, pp. 1441-1461, 2024, DOI:10.32604/cmc.2024.048136 - 25 April 2024

    Abstract A novel image fusion network framework with an autonomous encoder and decoder is proposed to improve the visual quality of fused infrared and visible light images. The network comprises an encoder module, a fusion layer, a decoder module, and an edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and a Transformer to achieve deep-level co-extraction of local and global features from the original image. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum-difference fusion strategy…

  • Open Access

    ARTICLE

    Fusion of Infrared and Visible Images Using Fuzzy Based Siamese Convolutional Network

    Kanika Bhalla1, Deepika Koundal2,*, Surbhi Bhatia3, Mohammad Khalid Imam Rahmani4, Muhammad Tahir4

    CMC-Computers, Materials & Continua, Vol.70, No.3, pp. 5503-5518, 2022, DOI:10.32604/cmc.2022.021125 - 11 October 2021

    Abstract Traditional image fusion techniques struggle to integrate complementary or heterogeneous infrared (IR)/visible (VS) images. Dissimilarities in various kinds of features in these images are vital to preserve in the single fused image; hence, preserving both aspects simultaneously is a challenging task. Moreover, most existing methods rely on manual feature extraction and manually designed, complicated fusion rules, resulting in blurry artifacts in the fused image. Therefore, this study proposes a hybrid algorithm for the integration of multiple features from two heterogeneous images.…

  • Open Access

    ARTICLE

    Facial Expression Recognition Based on the Fusion of Infrared and Visible Image

    Jiancheng Zou1, Jiaxin Li1,*, Juncun Wei1, Zhengzheng Li1, Xin Yang2

    Journal on Artificial Intelligence, Vol.3, No.3, pp. 123-134, 2021, DOI:10.32604/jai.2021.027069 - 25 January 2022

    Abstract Facial expression recognition is a research hotspot in the fields of computer vision and pattern recognition. However, existing facial expression recognition models are mainly developed for visible light environments; they have insufficient generalization ability and low recognition accuracy, and are vulnerable to environmental changes such as illumination and distance. To solve these problems, we combine the advantages of infrared and visible images captured simultaneously by array equipment we developed, with two infrared and two visible lenses, so that the fused image not only has the texture information of visible…

Displaying results 1-10 of 12 (page 1).