
Open Access

ARTICLE

MRFNet: A Progressive Residual Fusion Network for Blind Multiscale Image Deblurring

Wang Zhang1,#, Haozhuo Cao2,#, Qiangqiang Yao1,*
1 School of Mechanical Engineering, Qinghai University, Xining, 810000, China
2 School of Computer Technology and Applications, Qinghai University, Xining, 810000, China
* Corresponding Author: Qiangqiang Yao. Email: email
# These authors contributed equally to this work and should be regarded as co-first authors
(This article belongs to the Special Issue: Advances in Deep Learning and Neural Networks: Architectures, Applications, and Challenges)

Computers, Materials & Continua https://doi.org/10.32604/cmc.2025.072948

Received 07 September 2025; Accepted 05 November 2025; Published online 02 December 2025

Abstract

Recent advances in deep learning have significantly improved image deblurring; however, existing approaches still suffer from limited global context modeling, inadequate detail restoration, and poor texture or edge perception, especially under complex dynamic blur. To address these challenges, we propose the Multi-Resolution Fusion Network (MRFNet), a blind multi-scale deblurring framework that integrates progressive residual connections for hierarchical feature fusion. The network employs a three-stage design: (1) TransformerBlocks capture long-range dependencies and reconstruct coarse global structures; (2) Nonlinear Activation-Free Blocks (NAFBlocks) enhance local detail representation and mid-level feature fusion; and (3) an optimized residual subnetwork based on gated feature modulation refines texture and edge details for high-fidelity restoration. Extensive experiments demonstrate that MRFNet achieves superior performance compared to state-of-the-art methods. On GoPro, it attains 32.52 dB Peak Signal-to-Noise Ratio (PSNR) and 0.071 Learned Perceptual Image Patch Similarity (LPIPS), outperforming MIMO-UNet (32.50 dB, 0.075). On HIDE, it achieves 30.25 dB PSNR and 0.945 Structural Similarity Index Measure (SSIM), representing gains of +0.26 dB and +0.015 SSIM over MIMO-UNet (29.99 dB, 0.930). On RealBlur-J, it reaches 28.82 dB PSNR and 0.872 SSIM, surpassing MIMO-UNet (27.63 dB, 0.837) by +1.19 dB and +0.035 SSIM. These results validate the effectiveness of the proposed progressive residual fusion and hybrid attention mechanisms in balancing global context understanding and local detail recovery for blind image deblurring.
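The three-stage design can be illustrated with a minimal PyTorch sketch: a Transformer stage for coarse global structure, a NAFNet-style activation-free block for mid-level fusion, and a gated modulation block for texture and edge refinement, composed coarse-to-fine with progressive residual fusion across scales. All module names, channel widths, and the exact wiring below are assumptions for illustration only and do not reproduce the authors' implementation.

# Illustrative three-stage coarse-to-fine deblurring skeleton (not the released MRFNet code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGate(nn.Module):
    """Activation-free gating used in NAFNet-style blocks: split channels and multiply."""
    def forward(self, x):
        a, b = x.chunk(2, dim=1)
        return a * b

class NAFLikeBlock(nn.Module):
    """Nonlinear-activation-free block: pointwise/depthwise convs + SimpleGate, with a residual."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, 2 * ch, 1),
            nn.Conv2d(2 * ch, 2 * ch, 3, padding=1, groups=2 * ch),
            SimpleGate(),
            nn.Conv2d(ch, ch, 1),
        )

    def forward(self, x):
        return x + self.body(x)

class TransformerStage(nn.Module):
    """Self-attention over spatial tokens to model long-range dependencies at the coarsest scale."""
    def __init__(self, ch, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(ch)
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, HW, C)
        y, _ = self.attn(tokens, tokens, tokens)
        return x + y.transpose(1, 2).reshape(b, c, h, w)

class GatedRefineBlock(nn.Module):
    """Gated feature modulation for texture/edge refinement at full resolution."""
    def __init__(self, ch):
        super().__init__()
        self.feat = nn.Conv2d(ch, ch, 3, padding=1)
        self.gate = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        return x + self.feat(x) * torch.sigmoid(self.gate(x))

class MRFNetSketch(nn.Module):
    """Three-stage restoration with progressive residual fusion across scales (illustrative only)."""
    def __init__(self, ch=32):
        super().__init__()
        self.embed = nn.Conv2d(3, ch, 3, padding=1)
        self.stage1 = TransformerStage(ch)      # coarse global structure (1/4 resolution)
        self.stage2 = NAFLikeBlock(ch)          # mid-level detail fusion (1/2 resolution)
        self.stage3 = GatedRefineBlock(ch)      # texture/edge refinement (full resolution)
        self.out = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, blurred):
        f = self.embed(blurred)
        # Stage 1: reconstruct coarse global structure with long-range attention.
        c1 = self.stage1(F.avg_pool2d(f, 4))
        # Stage 2: fuse upsampled coarse features with mid-scale features (progressive residual fusion).
        c2 = self.stage2(F.avg_pool2d(f, 2) + F.interpolate(c1, scale_factor=2, mode="bilinear"))
        # Stage 3: refine textures and edges at full resolution.
        c3 = self.stage3(f + F.interpolate(c2, scale_factor=2, mode="bilinear"))
        # Global residual: predict a correction to the blurred input.
        return blurred + self.out(c3)

if __name__ == "__main__":
    net = MRFNetSketch()
    x = torch.randn(1, 3, 64, 64)
    print(net(x).shape)  # torch.Size([1, 3, 64, 64])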

Keywords

Blind deblurring; progressive network; multi-scale features; residual structure