Open Access
ARTICLE
Multi-Constraint Generative Adversarial Network-Driven Optimization Method for Super-Resolution Reconstruction of Remote Sensing Images
School of Information Science and Technology, Beijing Forestry University, Beijing, 100083, China
* Corresponding Author: Guangpeng Fan. Email:
(This article belongs to the Special Issue: Computer Vision and Image Processing: Feature Selection, Image Enhancement and Recognition)
Computers, Materials & Continua 2026, 86(1), 1-18. https://doi.org/10.32604/cmc.2025.068309
Received 25 May 2025; Accepted 31 July 2025; Issue published 10 November 2025
Abstract
Remote sensing image super-resolution technology is pivotal for enhancing image quality in critical applications including environmental monitoring, urban planning, and disaster assessment. However, traditional methods exhibit deficiencies in detail recovery and noise suppression, particularly when processing complex landscapes (e.g., forests, farmlands), leading to artifacts and spectral distortions that limit practical utility. To address this, we propose an enhanced Super-Resolution Generative Adversarial Network (SRGAN) framework featuring three key innovations: (1) Replacement of L1/L2 loss with a robust Charbonnier loss to suppress noise while preserving edge details via adaptive gradient balancing; (2) A multi-loss joint optimization strategy dynamically weighting Charbonnier loss (β = 0.5), Visual Geometry Group (VGG) perceptual loss (α = 1), and adversarial loss (γ = 0.1) to synergize pixel-level accuracy and perceptual quality; (3) A multi-scale residual network (MSRN) capturing cross-scale texture features (e.g., forest canopies, mountain contours). Validated on Sentinel-2 (10 m) and SPOT-6/7 (2.5 m) datasets covering 904 km² in Motuo County, Tibet, our method outperforms the SRGAN baseline (SR4RS) with Peak Signal-to-Noise Ratio (PSNR) gains of 0.29 dB and Structural Similarity Index (SSIM) improvements of 3.08% on forest imagery. Visual comparisons confirm enhanced texture continuity despite marginal Learned Perceptual Image Patch Similarity (LPIPS) increases. The method significantly improves noise robustness and edge retention in complex geomorphology, demonstrating 18% faster response in forest fire early warning and providing high-resolution support for agricultural/urban monitoring. Future work will integrate spectral constraints and lightweight architectures.
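The abstract gives the joint weighting of the three loss terms (α = 1 for the VGG perceptual loss, β = 0.5 for the Charbonnier loss, γ = 0.1 for the adversarial loss) but not the implementation. The following PyTorch sketch illustrates one plausible form of that weighted generator objective under those weights; the Charbonnier epsilon, the VGG19 layer cut-off, and the binary cross-entropy adversarial formulation are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the weighted generator objective described in the abstract:
# alpha * VGG perceptual loss + beta * Charbonnier loss + gamma * adversarial loss.
# Epsilon, the VGG layer choice, and the BCE adversarial form are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


def charbonnier_loss(pred, target, eps=1e-3):
    """Robust Charbonnier penalty: mean of sqrt((pred - target)^2 + eps^2)."""
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()


class VGGPerceptualLoss(nn.Module):
    """L1 distance between VGG19 feature maps (layer cut-off is an assumption)."""
    def __init__(self, layer_index=35):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features
        self.features = nn.Sequential(*list(vgg.children())[:layer_index]).eval()
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, sr, hr):
        return nn.functional.l1_loss(self.features(sr), self.features(hr))


def generator_loss(sr, hr, disc_fake_logits, perceptual,
                   alpha=1.0, beta=0.5, gamma=0.1):
    """Weighted sum of perceptual, Charbonnier, and adversarial terms."""
    l_percep = perceptual(sr, hr)
    l_charb = charbonnier_loss(sr, hr)
    # Non-saturating adversarial term: the generator pushes D(SR) toward "real" (1).
    l_adv = nn.functional.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return alpha * l_percep + beta * l_charb + gamma * l_adv
```

In practice, `sr` and `hr` would be the super-resolved and ground-truth image batches and `disc_fake_logits` the discriminator output on `sr`; the weights α, β, γ above simply mirror the values stated in the abstract.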
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

