
Open Access

ARTICLE

BAID: A Lightweight Super-Resolution Network with Binary Attention-Guided Frequency-Aware Information Distillation

Jiajia Liu1,*, Junyi Lin2, Wenxiang Dong2, Xuan Zhao2, Jianhua Liu2, Huiru Li3
1 Faculty Development and Teaching Evaluation Center, Civil Aviation Flight University of China, Guanghan, 618307, China
2 Institute of Electronic and Electrical Engineering, Civil Aviation Flight University of China, Guanghan, 618307, China
3 Flight Training Center of Civil Aviation Flight University of China, Guanghan, 618307, China
* Corresponding Author: Jiajia Liu. Email: email
(This article belongs to the Special Issue: Deep Learning: Emerging Trends, Applications and Research Challenges for Image Recognition)

Computers, Materials & Continua https://doi.org/10.32604/cmc.2025.071397

Received 05 August 2025; Accepted 26 September 2025; Published online 27 October 2025

Abstract

Single Image Super-Resolution (SISR) seeks to reconstruct high-resolution (HR) images from low-resolution (LR) inputs, thereby enhancing visual fidelity and the perception of fine details. While Transformer-based models—such as SwinIR, Restormer, and HAT—have recently achieved impressive results in super-resolution tasks by capturing global contextual information, these methods often suffer from substantial computational and memory overhead, which limits their deployment on resource-constrained edge devices. To address these challenges, we propose a novel lightweight super-resolution network, termed Binary Attention-Guided Information Distillation (BAID), which integrates frequency-aware modeling with a binary attention mechanism to significantly reduce computational complexity and parameter count while maintaining strong reconstruction performance. The network combines a high–low frequency decoupling strategy with a local–global attention sharing mechanism, enabling efficient compression of redundant computations through binary attention guidance. At the core of the architecture lies the Attention-Guided Distillation Block (AGDB), which retains the strengths of the information distillation framework while introducing a sparse binary attention module to enhance both inference efficiency and feature representation. Extensive ×4 super-resolution experiments on four standard benchmarks—Set5, Set14, BSD100, and Urban100—demonstrate that BAID achieves Peak Signal-to-Noise Ratio (PSNR) values of 32.13, 28.51, 27.47, and 26.15 dB, respectively, with only 1.22 million parameters and 26.1 G Floating-Point Operations (FLOPs), outperforming other state-of-the-art lightweight methods such as the Information Multi-Distillation Network (IMDN) and the Residual Feature Distillation Network (RFDN). These results highlight the proposed model's ability to deliver high-quality image reconstruction with strong deployment efficiency, making it well-suited for image restoration tasks in resource-limited environments.
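To make the binary attention idea concrete, the following is a minimal NumPy sketch of attention binarization as a gating mechanism: channel scores are computed from pooled features, binarized to a 0/1 mask, and used to zero out channels so their downstream computation can be skipped. This is an illustrative sketch under assumed conventions (sigmoid scoring, a 0.5 threshold, fixed per-channel weights), not the paper's actual AGDB implementation; all names are hypothetical.

```python
import numpy as np

def binary_attention_gate(features, weights, threshold=0.5):
    """Illustrative binary attention gating (not the paper's exact module).

    features: (C, H, W) feature map
    weights:  (C,) assumed per-channel scoring weights
    Returns the gated feature map and the binary channel mask.
    """
    # Channel descriptor via global average pooling over spatial dims
    pooled = features.mean(axis=(1, 2))                     # (C,)
    # Soft attention scores in (0, 1) via a sigmoid
    scores = 1.0 / (1.0 + np.exp(-pooled * weights))        # (C,)
    # Binarize: channels scoring above the threshold pass unchanged,
    # the rest are zeroed, so their later computation can be pruned
    mask = (scores > threshold).astype(features.dtype)      # (C,) of 0/1
    return features * mask[:, None, None], mask

# Toy usage with random features and weights
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))
w = rng.standard_normal(8)
gated, mask = binary_attention_gate(feats, w)
```

In a trained network the hard threshold would need a differentiable surrogate (e.g., a straight-through estimator) during backpropagation; the sketch only shows the inference-time gating.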

Keywords

Single image super-resolution; lightweight network; binary attention; information distillation