Open Access

ARTICLE

PMCFusion: A Parallel Multi-Dimensional Complementary Network for Infrared and Visible Image Fusion

Xu Tao1, Qiang Xiao2, Zhaoqi Jin2, Hao Li1,*

1 School of Information Science & Engineering, Yunnan University, Kunming, 650504, China
2 Yunnan Highway Network Toll Management Co., Ltd., Yunnan Key Laboratory of Digital Communications, Kunming, 650100, China

* Corresponding Author: Hao Li.

Computers, Materials & Continua 2026, 86(2), 1-18. https://doi.org/10.32604/cmc.2025.070790

Abstract

Image fusion technology aims to generate a single, more informative image by integrating complementary information from multi-modal images. Despite the significant progress of deep learning-based fusion methods, existing algorithms are often limited to single- or dual-dimensional feature interactions and therefore struggle to fully exploit the deep complementarity between multi-modal images. To address this limitation, this paper proposes a parallel multi-dimensional complementary fusion network, termed PMCFusion, for the task of infrared and visible image fusion. The core of this method is its parallel three-branch fusion module, PTFM, which pioneers the parallel synergistic perception and efficient integration of three distinct dimensions: spatial uncorrelation, channel-wise disparity, and frequency-domain complementarity. Through carefully designed cross-dimensional attention interactions, PTFM selectively enhances multi-dimensional features to achieve deep complementarity. Furthermore, to improve the detail clarity and structural integrity of the fused image, we design a dedicated multi-scale high-frequency detail enhancement module, HFDEM, which sharpens the fused result by extracting, enhancing, and injecting high-frequency components in a residual manner. The overall model adopts a multi-scale architecture constrained by corresponding loss functions at each scale, ensuring efficient and robust fusion across different resolutions. Extensive experimental results demonstrate that the proposed method significantly outperforms current state-of-the-art fusion algorithms in both subjective visual quality and objective evaluation metrics.
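To make the abstract's description more concrete, the sketch below gives a minimal PyTorch reading of the two modules it names: a parallel three-branch fusion over spatial, channel, and frequency cues (PTFM) and a residual high-frequency detail enhancer (HFDEM). This is an illustrative sketch only; the class names `PTFMSketch` and `HFDEMSketch`, all layer choices and widths, and the use of an FFT-magnitude branch are assumptions, not the authors' implementation.

```python
# Minimal sketch of the abstract's two modules, assuming PyTorch features
# of shape (B, C, H, W) for each modality. Not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PTFMSketch(nn.Module):
    """Hypothetical parallel three-branch fusion of IR/VIS feature maps."""
    def __init__(self, channels: int):
        super().__init__()
        # Spatial branch: one attention map from per-pixel statistics.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)
        # Channel branch: squeeze-and-excitation style channel weighting.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2 * channels, 1), nn.Sigmoid(),
        )
        # Frequency branch: re-weight features by FFT magnitude.
        self.freq = nn.Conv2d(2 * channels, 2 * channels, 1)
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, f_ir: torch.Tensor, f_vis: torch.Tensor) -> torch.Tensor:
        x = torch.cat([f_ir, f_vis], dim=1)                     # (B, 2C, H, W)
        # Spatial attention from mean/max statistics of the stacked features.
        stats = torch.cat([x.mean(1, keepdim=True),
                           x.amax(1, keepdim=True)], dim=1)
        s = torch.sigmoid(self.spatial(stats))                  # (B, 1, H, W)
        c = self.channel(x)                                     # (B, 2C, 1, 1)
        # Frequency-domain cue: spectral magnitude as a complementarity signal.
        mag = torch.fft.fft2(x, norm="ortho").abs()
        fr = torch.sigmoid(self.freq(mag))                      # (B, 2C, H, W)
        # Apply the three attention cues in parallel, then fuse to C channels.
        return self.fuse(x * s * c * fr)                        # (B, C, H, W)

class HFDEMSketch(nn.Module):
    """Hypothetical residual high-frequency detail enhancement."""
    def __init__(self, channels: int):
        super().__init__()
        self.enhance = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # High-frequency band = feature minus its low-pass (blurred) copy.
        low = F.avg_pool2d(feat, kernel_size=3, stride=1, padding=1)
        high = feat - low
        # Enhance the detail band and inject it back residually.
        return feat + self.enhance(high)

if __name__ == "__main__":
    f_ir, f_vis = torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64)
    fused = HFDEMSketch(32)(PTFMSketch(32)(f_ir, f_vis))
    print(fused.shape)  # torch.Size([1, 32, 64, 64])
```

In this reading, the residual design of HFDEM matters: adding an enhanced detail band back onto the fused features sharpens edges without disturbing the low-frequency structure the fusion branches have already integrated.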

Keywords

Infrared and visible image fusion; deep learning; parallel multi-dimensional; attention mechanism; detail enhancement

Cite This Article

APA Style
Tao, X., Xiao, Q., Jin, Z., & Li, H. (2026). PMCFusion: A Parallel Multi-Dimensional Complementary Network for Infrared and Visible Image Fusion. Computers, Materials & Continua, 86(2), 1–18. https://doi.org/10.32604/cmc.2025.070790
Vancouver Style
Tao X, Xiao Q, Jin Z, Li H. PMCFusion: A Parallel Multi-Dimensional Complementary Network for Infrared and Visible Image Fusion. Comput Mater Contin. 2026;86(2):1–18. https://doi.org/10.32604/cmc.2025.070790
IEEE Style
X. Tao, Q. Xiao, Z. Jin, and H. Li, “PMCFusion: A Parallel Multi-Dimensional Complementary Network for Infrared and Visible Image Fusion,” Comput. Mater. Contin., vol. 86, no. 2, pp. 1–18, 2026. https://doi.org/10.32604/cmc.2025.070790



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.