Open Access

ARTICLE


Multi-Constraint Generative Adversarial Network-Driven Optimization Method for Super-Resolution Reconstruction of Remote Sensing Images

Binghong Zhang, Jialing Zhou, Xinye Zhou, Jia Zhao, Jinchun Zhu, Guangpeng Fan*

School of Information Science and Technology, Beijing Forestry University, Beijing, 100083, China

* Corresponding Author: Guangpeng Fan

(This article belongs to the Special Issue: Computer Vision and Image Processing: Feature Selection, Image Enhancement and Recognition)

Computers, Materials & Continua 2026, 86(1), 1-18. https://doi.org/10.32604/cmc.2025.068309

Abstract

Remote sensing image super-resolution technology is pivotal for enhancing image quality in critical applications including environmental monitoring, urban planning, and disaster assessment. However, traditional methods exhibit deficiencies in detail recovery and noise suppression, particularly when processing complex landscapes (e.g., forests, farmlands), leading to artifacts and spectral distortions that limit practical utility. To address this, we propose an enhanced Super-Resolution Generative Adversarial Network (SRGAN) framework featuring three key innovations: (1) Replacement of L1/L2 loss with a robust Charbonnier loss to suppress noise while preserving edge details via adaptive gradient balancing; (2) A multi-loss joint optimization strategy dynamically weighting Charbonnier loss (β = 0.5), Visual Geometry Group (VGG) perceptual loss (α = 1), and adversarial loss (γ = 0.1) to synergize pixel-level accuracy and perceptual quality; (3) A multi-scale residual network (MSRN) capturing cross-scale texture features (e.g., forest canopies, mountain contours). Validated on Sentinel-2 (10 m) and SPOT-6/7 (2.5 m) datasets covering 904 km² in Motuo County, Tibet, our method outperforms the SRGAN baseline (SR4RS) with Peak Signal-to-Noise Ratio (PSNR) gains of 0.29 dB and Structural Similarity Index (SSIM) improvements of 3.08% on forest imagery. Visual comparisons confirm enhanced texture continuity despite marginal Learned Perceptual Image Patch Similarity (LPIPS) increases. The method significantly improves noise robustness and edge retention in complex geomorphology, demonstrating 18% faster response in forest fire early warning and providing high-resolution support for agricultural/urban monitoring. Future work will integrate spectral constraints and lightweight architectures.
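The joint objective described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Charbonnier smoothing constant `eps` and the function names are assumptions, while the weights α = 1, β = 0.5, and γ = 0.1 are taken from the abstract.

```python
import numpy as np

def charbonnier(pred, target, eps=1e-3):
    """Charbonnier loss: a differentiable, robust variant of L1.

    Computes mean(sqrt((pred - target)^2 + eps^2)), which behaves like L2
    near zero error and like L1 for large errors, suppressing the influence
    of outlier pixels (noise) while keeping gradients stable at edges.
    """
    diff = pred - target
    return float(np.mean(np.sqrt(diff * diff + eps * eps)))

def total_loss(l_perceptual, l_charbonnier, l_adversarial,
               alpha=1.0, beta=0.5, gamma=0.1):
    """Weighted joint objective: alpha*VGG perceptual + beta*Charbonnier
    + gamma*adversarial, with the weights reported in the abstract."""
    return alpha * l_perceptual + beta * l_charbonnier + gamma * l_adversarial
```

In practice the three component losses would be computed on the generator output within each training step; the sketch only shows how they are combined.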

Keywords

Charbonnier loss function; deep learning; generative adversarial network; perceptual loss; remote sensing image super-resolution

Cite This Article

APA Style
Zhang, B., Zhou, J., Zhou, X., Zhao, J., Zhu, J. et al. (2026). Multi-Constraint Generative Adversarial Network-Driven Optimization Method for Super-Resolution Reconstruction of Remote Sensing Images. Computers, Materials & Continua, 86(1), 1–18. https://doi.org/10.32604/cmc.2025.068309
Vancouver Style
Zhang B, Zhou J, Zhou X, Zhao J, Zhu J, Fan G. Multi-Constraint Generative Adversarial Network-Driven Optimization Method for Super-Resolution Reconstruction of Remote Sensing Images. Comput Mater Contin. 2026;86(1):1–18. https://doi.org/10.32604/cmc.2025.068309
IEEE Style
B. Zhang, J. Zhou, X. Zhou, J. Zhao, J. Zhu, and G. Fan, “Multi-Constraint Generative Adversarial Network-Driven Optimization Method for Super-Resolution Reconstruction of Remote Sensing Images,” Comput. Mater. Contin., vol. 86, no. 1, pp. 1–18, 2026. https://doi.org/10.32604/cmc.2025.068309



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.