
Open Access

ARTICLE

A Super-Resolution Generative Adversarial Network for Remote Sensing Images Based on Improved Residual Module and Attention Mechanism

Yifan Zhang1, Yong Gan2,*, Mengke Tang1, Xinxin Gan3
1 School of Computer Science and Technology, Zhengzhou University of Light Industry, Zhengzhou, 450001, China
2 School of Information Engineering, Zhengzhou University of Technology, Zhengzhou, 450044, China
3 Digital and Intelligent Engineering Design Institute, SIPPR Engineering Group Co., Ltd., Zhengzhou, 450007, China
* Corresponding Author: Yong Gan
(This article belongs to the Special Issue: Advances in Deep Learning and Neural Networks: Architectures, Applications, and Challenges)

Computers, Materials & Continua https://doi.org/10.32604/cmc.2025.068880

Received 09 June 2025; Accepted 01 September 2025; Published online 23 September 2025

Abstract

High-resolution remote sensing imagery is essential for critical applications such as precision agriculture, urban management planning, and military reconnaissance. Although significant progress has been made in single-image super-resolution (SISR) using generative adversarial networks (GANs), existing approaches still face challenges in recovering high-frequency details, effectively utilizing features, maintaining structural integrity, and ensuring training stability—particularly when dealing with the complex textures characteristic of remote sensing imagery. To address these limitations, this paper proposes the Improved Residual Module and Attention Mechanism Network (IRMANet), a novel architecture specifically designed for remote sensing image reconstruction. IRMANet builds upon the Super-Resolution Generative Adversarial Network (SRGAN) framework and introduces several key innovations. First, the Enhanced Residual Unit (ERU) enhances feature reuse and stabilizes training through deep residual connections. Second, the Self-Attention Residual Block (SARB) incorporates a self-attention mechanism into the Improved Residual Module (IRM) to effectively model long-range dependencies and automatically emphasize salient features. Additionally, the IRM adopts a multi-scale feature fusion strategy to facilitate synergistic interactions between local detail and global semantic information. The effectiveness of each component is validated through ablation studies, while comprehensive comparative experiments on standard remote sensing datasets demonstrate that IRMANet significantly outperforms both the baseline and state-of-the-art methods in terms of perceptual quality and quantitative metrics. Specifically, compared to the baseline model, at a magnification factor of 2, IRMANet achieves an improvement of 0.24 dB in peak signal-to-noise ratio (PSNR) and 0.54 in structural similarity index (SSIM); at a magnification factor of 4, it achieves gains of 0.22 dB in PSNR and 0.51 in SSIM. These results confirm that the proposed method effectively enhances detail representation and structural reconstruction accuracy in complex remote sensing scenarios, offering robust technical support for high-precision detection and identification of both military and civilian aircraft.
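To make the architectural idea in the abstract concrete, the sketch below shows how a residual block can be combined with a spatial self-attention step, in the spirit of the Self-Attention Residual Block (SARB) described above. This is a minimal illustration in PyTorch under assumed design choices (channel counts, reduction ratio, activation, and fusion order); it is not the authors' IRMANet implementation, and all class names here are hypothetical.

```python
# Hypothetical sketch of a self-attention residual block, assuming PyTorch.
# Channel counts, the reduction ratio, and the fusion order are illustrative
# assumptions, not the paper's exact IRMANet/SARB configuration.
import torch
import torch.nn as nn


class SelfAttention2d(nn.Module):
    """Non-local-style self-attention over spatial positions (assumed variant)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned weight on the attention branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)       # (B, HW, C/r)
        k = self.key(x).flatten(2)                          # (B, C/r, HW)
        attn = torch.softmax(q @ k, dim=-1)                 # (B, HW, HW) long-range affinities
        v = self.value(x).flatten(2)                        # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)   # attention-weighted features
        return x + self.gamma * out                         # residual connection


class SelfAttentionResidualBlock(nn.Module):
    """Residual block whose output is refined by spatial self-attention."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.PReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.attention = SelfAttention2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Local convolutional features plus identity, then global context via attention.
        return self.attention(x + self.body(x))


if __name__ == "__main__":
    block = SelfAttentionResidualBlock(channels=64)
    lr_features = torch.randn(1, 64, 48, 48)   # e.g., features of a low-resolution patch
    print(block(lr_features).shape)            # torch.Size([1, 64, 48, 48])
```

The learnable gamma parameter starts at zero, so the block initially behaves like a plain residual block and gradually admits long-range context as training proceeds, which is one common way to keep such attention-augmented blocks stable.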

Keywords

Remote sensing imagery; generative adversarial networks; super-resolution; enhanced residual unit; self-attention mechanism