Open Access
ARTICLE
DDNet: A Novel Dynamic Lightweight Super-Resolution Algorithm for Arbitrary Scales
1 School of Automation, University of Electronic Science and Technology of China, Chengdu, 610054, China
2 School of Electronic Engineering, Xidian University, Xi’an, 710071, China
3 Department of Geography, Texas A&M University, College Station, TX 77843, USA
4 School of the Environment, The University of Queensland, St Lucia, Brisbane, QLD 4072, Australia
5 School of Artificial Intelligence, Guangzhou Huashang College, Guangzhou, 511300, China
6 Department of Hydrology and Atmospheric Sciences, University of Arizona, Tucson, AZ 85721, USA
* Corresponding Authors: Wenfeng Zheng. Email: ; Lirong Yin. Email:
Computer Modeling in Engineering & Sciences 2025, 145(2), 2223-2252. https://doi.org/10.32604/cmes.2025.072136
Received 20 August 2025; Accepted 15 October 2025; Issue published 26 November 2025
Abstract
Recent Super-Resolution (SR) algorithms often suffer from excessive model complexity, high computational costs, and limited flexibility across varying image scales. To address these challenges, we propose DDNet, a dynamic and lightweight SR framework designed for arbitrary scaling factors. DDNet integrates a residual learning structure with an Adaptive Fusion Feature Block (AFB) and a scale-aware upsampling module, effectively reducing parameter overhead while preserving reconstruction quality. Additionally, we introduce DDNetGAN, an enhanced variant that leverages a relativistic Generative Adversarial Network (GAN) to further improve texture realism. To validate the proposed models, we conduct extensive training on the DIV2K and Flickr2K datasets and evaluate performance across standard benchmarks including Set5, Set14, Urban100, Manga109, and BSD100. Our experiments cover both symmetric and asymmetric upscaling factors and incorporate ablation studies to assess key components. Results show that DDNet and DDNetGAN achieve competitive performance compared with mainstream SR algorithms, demonstrating a strong balance between accuracy, efficiency, and flexibility. These findings highlight the potential of our approach for practical real-world super-resolution applications.
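The relativistic GAN objective mentioned above can be illustrated with a minimal sketch of the relativistic average discriminator loss (the formulation introduced for RaGAN and popularized by ESRGAN-style SR models; the exact loss used in DDNetGAN may differ). Here the discriminator judges whether a real image is more realistic than the average generated image, rather than scoring each image in isolation. The function names and the use of NumPy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relativistic_avg_d_loss(real_logits, fake_logits):
    """Relativistic average discriminator loss (illustrative sketch).

    real_logits / fake_logits: raw discriminator outputs for a batch of
    real HR images and generated SR images, respectively.
    """
    # Compare each real sample against the *average* fake score, and vice versa.
    d_real = real_logits - fake_logits.mean()
    d_fake = fake_logits - real_logits.mean()
    # Binary cross-entropy: relativistic real scores -> 1, fake scores -> 0.
    return -np.mean(np.log(sigmoid(d_real)) + np.log(1.0 - sigmoid(d_fake)))
```

When real and fake logits are indistinguishable the loss sits at its chance level of 2 ln 2; it shrinks toward zero as the discriminator reliably ranks real images above the fake average.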
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.