
Open Access

ARTICLE

PMCFusion: A Parallel Multi-Dimensional Complementary Network for Infrared and Visible Image Fusion

Xu Tao1, Qiang Xiao2, Zhaoqi Jin2, Hao Li1,*
1 School of Information Science & Engineering, Yunnan University, Kunming, 650504, China
2 Yunnan Highway Network Toll Management Co., Ltd., Yunnan Key Laboratory of Digital Communications, Kunming, 650100, China
* Corresponding Author: Hao Li

Computers, Materials & Continua. https://doi.org/10.32604/cmc.2025.070790

Received 24 July 2025; Accepted 01 October 2025; Published online 30 October 2025

Abstract

Image fusion aims to generate a single, more informative image by integrating complementary information from multi-modal images. Despite the significant progress of deep learning-based fusion methods, existing algorithms are often limited to single- or dual-dimensional feature interactions and thus struggle to fully exploit the deep complementarity between multi-modal images. To address this, we propose a parallel multi-dimensional complementary fusion network, termed PMCFusion, for infrared and visible image fusion. At its core is a parallel three-branch fusion module, PTFM, which perceives and integrates complementary information along three distinct dimensions in parallel: spatial uncorrelation, channel-wise disparity, and frequency-domain complementarity. Through carefully designed cross-dimensional attention interactions, PTFM selectively enhances features in each dimension to achieve deep complementarity. Furthermore, to improve the detail clarity and structural integrity of the fused image, we design a dedicated multi-scale high-frequency detail enhancement module, HFDEM, which extracts, enhances, and residually injects high-frequency components. The overall model employs a multi-scale architecture and is trained with corresponding loss functions to ensure efficient and robust fusion across resolutions. Extensive experiments demonstrate that the proposed method significantly outperforms current state-of-the-art fusion algorithms in both subjective visual quality and objective evaluation metrics.
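To make the described architecture concrete, below is a minimal PyTorch sketch of the two modules as the abstract characterizes them. This is an illustrative reading, not the authors' implementation: the class names PTFMSketch and HFDEMSketch are hypothetical, and every layer choice (the 7x7 spatial-attention convolution, the SE-style channel gating, the 1x1 convolutions over the real/imaginary FFT spectrum, and the average-pool low-pass used to isolate high frequencies) is an assumption standing in for the paper's unspecified details. The multi-scale pipeline and loss functions are omitted.

```python
import torch
import torch.nn as nn
import torch.fft


class PTFMSketch(nn.Module):
    """Sketch of a parallel three-branch fusion module: spatial attention,
    channel attention, and a frequency-domain branch, combined and fused.
    All layer sizes are illustrative assumptions, not the paper's design."""

    def __init__(self, channels: int):
        super().__init__()
        # Spatial branch: attention map from pooled channel statistics.
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid()
        )
        # Channel branch: SE-style gating over the concatenated features.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 2 * channels, 1), nn.Sigmoid(),
        )
        # Frequency branch: 1x1 conv over stacked real/imaginary spectra.
        self.freq = nn.Conv2d(4 * channels, 4 * channels, 1)
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        x = torch.cat([ir, vis], dim=1)                       # (B, 2C, H, W)
        # Spatial attention from channel-wise mean and max maps.
        s = self.spatial(torch.cat([x.mean(1, keepdim=True),
                                    x.amax(1, keepdim=True)], dim=1))
        # Channel attention vector, broadcast over spatial positions.
        c = self.channel(x)
        # Frequency-domain interaction: rFFT, learned mixing, inverse rFFT.
        spec = torch.fft.rfft2(x, norm="ortho")
        spec = self.freq(torch.cat([spec.real, spec.imag], dim=1))
        real, imag = spec.chunk(2, dim=1)
        f = torch.fft.irfft2(torch.complex(real, imag),
                             s=x.shape[-2:], norm="ortho")
        # Cross-dimensional combination: gated features plus frequency path.
        return self.fuse(x * s + x * c + f)


class HFDEMSketch(nn.Module):
    """Sketch of high-frequency detail enhancement: take the residual of a
    crude low-pass blur as the high-frequency component, enhance it, and
    inject it back residually."""

    def __init__(self, channels: int):
        super().__init__()
        self.blur = nn.AvgPool2d(3, stride=1, padding=1)      # low-pass
        self.enhance = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        high = x - self.blur(x)            # high-frequency residual
        return x + self.enhance(high)      # residual injection


# Shape check: fuse two 64-channel feature maps, then sharpen the result.
ptfm, hfdem = PTFMSketch(64), HFDEMSketch(64)
ir_feat, vis_feat = torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128)
out = hfdem(ptfm(ir_feat, vis_feat))       # -> (1, 64, 128, 128)
```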

Keywords

Infrared and visible image fusion; deep learning; parallel multi-dimensional; attention mechanism; detail enhancement