Search Results (10)
  • Open Access

    ARTICLE

    PMCFusion: A Parallel Multi-Dimensional Complementary Network for Infrared and Visible Image Fusion

    Xu Tao1, Qiang Xiao2, Zhaoqi Jin2, Hao Li1,*

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-18, 2026, DOI:10.32604/cmc.2025.070790 - 09 December 2025

    Abstract Image fusion technology aims to generate a more informative single image by integrating complementary information from multi-modal images. Despite the significant progress of deep learning-based fusion methods, existing algorithms are often limited to single- or dual-dimensional feature interactions and thus struggle to fully exploit the profound complementarity between multi-modal images. To address this, the paper proposes a parallel multi-dimensional complementary fusion network, termed PMCFusion, for the task of infrared and visible image fusion. The core of this method is its unique parallel three-branch fusion module, PTFM, which pioneers the parallel synergistic perception and efficient integration of…

  • Open Access

    ARTICLE

    An Infrared-Visible Image Fusion Network with Channel-Switching for Low-Light Object Detection

    Tianzhe Jiao, Yuming Chen, Xiaoyue Feng, Chaopeng Guo, Jie Song*

    CMC-Computers, Materials & Continua, Vol.85, No.2, pp. 2681-2700, 2025, DOI:10.32604/cmc.2025.069235 - 23 September 2025

    Abstract Visible-infrared object detection leverages the day-night stable object perception capability of infrared images to enhance detection robustness in low-light environments by fusing the complementary information of visible and infrared images. However, the inherent differences in the imaging mechanisms of visible and infrared modalities make effective cross-modal fusion challenging. Furthermore, constrained by the physical characteristics of sensors and thermal diffusion effects, infrared images generally suffer from blurred object contours and missing details, making it difficult to extract object features effectively. To address these issues, we propose an infrared-visible image fusion network that realizes multimodal information fusion…
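
The "channel-switching" fusion named in the title can be illustrated with a minimal sketch: one channel of the visible RGB image is replaced by the infrared intensity map, so a standard three-channel detector receives thermal cues directly. Which channel is swapped, and the function name, are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def channel_switch(vis_rgb, ir, channel=0):
    """Replace one channel of a visible RGB image with the IR intensity map.
    The choice of channel index 0 is an assumption for illustration only."""
    out = vis_rgb.astype(float).copy()
    out[..., channel] = ir  # thermal map now occupies one detector input channel
    return out

# Toy usage: 2x2 visible image (all zeros) and a uniform IR map.
vis = np.zeros((2, 2, 3))
ir = np.full((2, 2), 7.0)
fused = channel_switch(vis, ir)
```

After the swap, channel 0 carries the IR values while the remaining visible channels are untouched, which keeps the input shape compatible with an unmodified RGB detector.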

  • Open Access

    ARTICLE

    Transformer-Based Fusion of Infrared and Visible Imagery for Smoke Recognition in Commercial Areas

    Chongyang Wang1, Qiongyan Li1, Shu Liu2, Pengle Cheng1,*, Ying Huang3

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 5157-5176, 2025, DOI:10.32604/cmc.2025.067367 - 30 July 2025

    Abstract With rapid urbanization, fires pose significant challenges in urban governance. Traditional fire detection methods often struggle to detect smoke in complex urban scenes due to environmental interferences and variations in viewing angles. This study proposes a novel multimodal smoke detection method that fuses infrared and visible imagery using a transformer-based deep learning model. By capturing both thermal and visual cues, our approach significantly enhances the accuracy and robustness of smoke detection in business-park scenes. We first established a dual-view dataset comprising infrared and visible light videos, implemented an innovative image feature fusion strategy, and …

  • Open Access

    ARTICLE

    A Mask-Guided Latent Low-Rank Representation Method for Infrared and Visible Image Fusion

    Kezhen Xie1,2, Syed Mohd Zahid Syed Zainal Ariffin1,*, Muhammad Izzad Ramli1

    CMC-Computers, Materials & Continua, Vol.84, No.1, pp. 997-1011, 2025, DOI:10.32604/cmc.2025.063469 - 09 June 2025

    Abstract Infrared and visible image fusion technology integrates the thermal radiation information of infrared images with the texture details of visible images to generate more informative fused images. However, existing methods often fail to distinguish salient objects from background regions, leading to detail suppression in salient regions due to global fusion strategies. This study presents a mask-guided latent low-rank representation fusion method to address this issue. First, the GrabCut algorithm is employed to extract a saliency mask, distinguishing salient regions from background regions. Then, latent low-rank representation (LatLRR) is applied to extract deep image features, enhancing …
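
The mask-guided idea in this abstract can be sketched in a few lines of NumPy: a binary saliency mask (in the paper, produced by GrabCut) selects an IR-dominant blend in salient regions and a visible-dominant blend in the background. The simple weighted blend below stands in for the paper's LatLRR features, and the weight `alpha` is an illustrative assumption.

```python
import numpy as np

def mask_guided_fuse(ir, vis, mask, alpha=0.6):
    """Per-pixel blend: salient pixels favor IR thermal intensity, background
    pixels favor visible texture. alpha=0.6 is an illustrative weight, and the
    weighted average is a stand-in for the paper's LatLRR-based fusion."""
    mask = mask.astype(float)
    salient = alpha * ir + (1 - alpha) * vis      # IR-dominant mixture
    background = alpha * vis + (1 - alpha) * ir   # visible-dominant mixture
    return mask * salient + (1 - mask) * background

# Toy 2x2 example: left column is "salient" (mask=1), right is background.
ir = np.array([[200.0, 50.0], [200.0, 50.0]])
vis = np.array([[80.0, 120.0], [80.0, 120.0]])
mask = np.array([[1, 0], [1, 0]])  # e.g., a GrabCut saliency mask
fused = mask_guided_fuse(ir, vis, mask)
```

Salient pixels come out IR-weighted (0.6·200 + 0.4·80 = 152) and background pixels visible-weighted (0.6·120 + 0.4·50 = 92), which is the detail-preservation behavior the abstract motivates.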

  • Open Access

    ARTICLE

    BDPartNet: Feature Decoupling and Reconstruction Fusion Network for Infrared and Visible Image

    Xuejie Wang1, Jianxun Zhang1,*, Ye Tao2, Xiaoli Yuan1, Yifan Guo1

    CMC-Computers, Materials & Continua, Vol.79, No.3, pp. 4621-4639, 2024, DOI:10.32604/cmc.2024.051556 - 20 June 2024

    Abstract While single-modal visible light images or infrared images provide limited information, infrared light captures significant thermal radiation data, whereas visible light excels in presenting detailed texture information. Combining images obtained from both modalities allows for leveraging their respective strengths and mitigating individual limitations, resulting in high-quality images with enhanced contrast and rich texture details. Such capabilities hold promising applications in advanced visual tasks including target detection, instance segmentation, military surveillance, and pedestrian detection. This paper introduces a novel approach, a dual-branch decomposition fusion network based on AutoEncoder (AE), which decomposes multi-modal features into intensity…

  • Open Access

    ARTICLE

    Infrared and Visible Image Fusion Based on Res2Net-Transformer Automatic Encoding and Decoding

    Chunming Wu1, Wukai Liu2,*, Xin Ma3

    CMC-Computers, Materials & Continua, Vol.79, No.1, pp. 1441-1461, 2024, DOI:10.32604/cmc.2024.048136 - 25 April 2024

    Abstract A novel image fusion network framework with an autonomous encoder and decoder is proposed to improve the quality, and thus the visual impression, of fused infrared and visible images. The network comprises an encoder module, fusion layer, decoder module, and edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the source images. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy …

  • Open Access

    ARTICLE

    Fusion of Infrared and Visible Images Using Fuzzy Based Siamese Convolutional Network

    Kanika Bhalla1, Deepika Koundal2,*, Surbhi Bhatia3, Mohammad Khalid Imam Rahmani4, Muhammad Tahir4

    CMC-Computers, Materials & Continua, Vol.70, No.3, pp. 5503-5518, 2022, DOI:10.32604/cmc.2022.021125 - 11 October 2021

    Abstract Traditional techniques based on image fusion are arduous in integrating complementary or heterogeneous infrared (IR)/visible (VS) images. Dissimilarities across various kinds of features in these images must be preserved in the single fused image, and preserving both aspects simultaneously is a challenging task. However, most existing methods rely on manual feature extraction, and complicated hand-designed fusion rules result in blurry artifacts in the fused image. Therefore, this study has proposed a hybrid algorithm for the integration of multi-features among two heterogeneous images.…

  • Open Access

    ARTICLE

    Facial Expression Recognition Based on the Fusion of Infrared and Visible Image

    Jiancheng Zou1, Jiaxin Li1,*, Juncun Wei1, Zhengzheng Li1, Xin Yang2

    Journal on Artificial Intelligence, Vol.3, No.3, pp. 123-134, 2021, DOI:10.32604/jai.2021.027069 - 25 January 2022

    Abstract Facial expression recognition is a research hot spot in the fields of computer vision and pattern recognition. However, the existing facial expression recognition models are mainly concentrated in the visible light environment. They have insufficient generalization ability and low recognition accuracy, and are vulnerable to environmental changes such as illumination and distance. In order to solve these problems, we combine the advantages of the infrared and visible images captured simultaneously by array equipment we developed with two infrared and two visible lenses, so that the fused image not only has the texture information of visible…

  • Open Access

    ARTICLE

    Infrared and Visible Image Fusion Based on NSST and RDN

    Peizhou Yan1, Jiancheng Zou2,*, Zhengzheng Li1, Xin Yang3

    Intelligent Automation & Soft Computing, Vol.28, No.1, pp. 213-225, 2021, DOI:10.32604/iasc.2021.016201 - 17 March 2021

    Abstract Within the application of driving assistance systems, the detection of driver’s facial features in the cab for a spectrum of luminosities is mission critical. One method that addresses this concern is infrared and visible image fusion. Its purpose is to generate an aggregate image which can granularly and systematically illustrate scene details in a range of lighting conditions. Our study introduces a novel approach to this method with marked improvements. We utilize non-subsampled shearlet transform (NSST) to obtain the low and high frequency sub-bands of infrared and visible imagery. For the low frequency sub-band fusion,…

  • Open Access

    ARTICLE

    Intelligent Fusion of Infrared and Visible Image Data Based on Convolutional Sparse Representation and Improved Pulse-Coupled Neural Network

    Jingming Xia1, Yi Lu1, Ling Tan2,*, Ping Jiang3

    CMC-Computers, Materials & Continua, Vol.67, No.1, pp. 613-624, 2021, DOI:10.32604/cmc.2021.013457 - 12 January 2021

    Abstract Multi-source information can be obtained through the fusion of infrared images and visible light images, which carry complementary information. However, existing fusion methods suffer from disadvantages such as blurred edges, low contrast, and loss of detail. Based on convolutional sparse representation and an improved pulse-coupled neural network, this paper proposes an image fusion algorithm that decomposes the source images into high-frequency and low-frequency subbands via the non-subsampled shearlet transform (NSST). The low-frequency subbands are fused by convolutional sparse representation (CSR), and the high-frequency subbands are fused by an improved pulse …
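
The two-band pipeline this abstract describes (decompose, fuse each band with its own rule, recombine) can be sketched with simple stand-ins: a box blur replaces the NSST low-frequency band, averaging replaces CSR for the low band, and a max-absolute rule replaces the pulse-coupled neural network for the high band. All three substitutions are assumptions made to keep the sketch self-contained.

```python
import numpy as np

def box_blur(img, k=3):
    """Box filter as a stand-in low-pass for NSST's low-frequency subband."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def two_scale_fuse(ir, vis):
    """Decompose each image into low/high bands, fuse bands separately,
    then recombine. Averaging stands in for CSR; the max-absolute rule
    stands in for the improved pulse-coupled neural network."""
    ir_low, vis_low = box_blur(ir), box_blur(vis)
    ir_high, vis_high = ir - ir_low, vis - vis_low
    low = 0.5 * (ir_low + vis_low)                      # low band: average (CSR stand-in)
    high = np.where(np.abs(ir_high) >= np.abs(vis_high),
                    ir_high, vis_high)                  # high band: max-abs (PCNN stand-in)
    return low + high
```

On two constant toy images (IR = 100, visible = 50) the high bands vanish and the fused result is the 75.0 average, confirming the recombination is lossless for smooth content; edges and texture would instead be carried by whichever modality has the stronger high-frequency response.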

Displaying 1-10 of 10 results.