Open Access

ARTICLE


BAID: A Lightweight Super-Resolution Network with Binary Attention-Guided Frequency-Aware Information Distillation

Jiajia Liu1,*, Junyi Lin2, Wenxiang Dong2, Xuan Zhao2, Jianhua Liu2, Huiru Li3

1 Faculty Development and Teaching Evaluation Center, Civil Aviation Flight University of China, Guanghan, 618307, China
2 Institute of Electronic and Electrical Engineering, Civil Aviation Flight University of China, Guanghan, 618307, China
3 Flight Training Center of Civil Aviation Flight University of China, Guanghan, 618307, China

* Corresponding Author: Jiajia Liu

(This article belongs to the Special Issue: Deep Learning: Emerging Trends, Applications and Research Challenges for Image Recognition)

Computers, Materials & Continua 2026, 86(2), 1-19. https://doi.org/10.32604/cmc.2025.071397

Abstract

Single Image Super-Resolution (SISR) seeks to reconstruct high-resolution (HR) images from low-resolution (LR) inputs, thereby enhancing visual fidelity and the perception of fine details. While Transformer-based models—such as SwinIR, Restormer, and HAT—have recently achieved impressive results in super-resolution tasks by capturing global contextual information, these methods often suffer from substantial computational and memory overhead, which limits their deployment on resource-constrained edge devices. To address these challenges, we propose a novel lightweight super-resolution network, termed Binary Attention-Guided Information Distillation (BAID), which integrates frequency-aware modeling with a binary attention mechanism to significantly reduce computational complexity and parameter count while maintaining strong reconstruction performance. The network combines a high–low frequency decoupling strategy with a local–global attention sharing mechanism, enabling efficient compression of redundant computations through binary attention guidance. At the core of the architecture lies the Attention-Guided Distillation Block (AGDB), which retains the strengths of the information distillation framework while introducing a sparse binary attention module to enhance both inference efficiency and feature representation. Extensive ×4 super-resolution experiments on four standard benchmarks—Set5, Set14, BSD100, and Urban100—demonstrate that BAID achieves Peak Signal-to-Noise Ratio (PSNR) values of 32.13, 28.51, 27.47, and 26.15 dB, respectively, with only 1.22 million parameters and 26.1 G Floating-Point Operations (FLOPs), outperforming other state-of-the-art lightweight methods such as Information Multi-Distillation Network (IMDN) and Residual Feature Distillation Network (RFDN). These results highlight the proposed model’s ability to deliver high-quality image reconstruction while offering strong deployment efficiency, making it well-suited for image restoration tasks in resource-limited environments.
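To make the abstract's description of the Attention-Guided Distillation Block (AGDB) more concrete, the sketch below shows one plausible way a sparse binary attention gate could be combined with IMDN/RFDN-style channel-splitting distillation. All structural details here (channel counts, the 0.25 distillation ratio, the fixed threshold with a straight-through estimator, and the names BinaryAttention and AGDBSketch) are illustrative assumptions, not the authors' published implementation.

```python
# Illustrative sketch only: modeled on IMDN/RFDN-style distillation blocks,
# with an assumed binary attention gate; NOT the BAID authors' code.
import torch
import torch.nn as nn


class BinaryAttention(nn.Module):
    """Hypothetical sparse binary attention gate.

    A lightweight 1x1-conv branch produces a soft attention map, which is
    binarized with a fixed threshold; a straight-through estimator keeps
    gradients flowing through the non-differentiable thresholding step.
    """

    def __init__(self, channels: int, threshold: float = 0.5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1),
            nn.Sigmoid(),
        )
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        soft = self.conv(x)                     # soft attention in [0, 1]
        hard = (soft > self.threshold).float()  # binary mask
        # Straight-through estimator: forward pass uses the binary mask,
        # backward pass uses the soft map's gradient.
        mask = hard + soft - soft.detach()
        return x * mask


class AGDBSketch(nn.Module):
    """Toy attention-guided distillation block (channel-splitting style)."""

    def __init__(self, channels: int = 48, distill_ratio: float = 0.25):
        super().__init__()
        d = int(channels * distill_ratio)  # distilled channels per stage
        r = channels - d                   # remaining (refined) channels
        self.d, self.r = d, r
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(r, channels, 3, padding=1)
        self.conv3 = nn.Conv2d(r, d, 3, padding=1)
        self.act = nn.LeakyReLU(0.05, inplace=True)
        self.attn = BinaryAttention(3 * d)
        self.fuse = nn.Conv2d(3 * d, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Progressive distillation: split each stage into a kept ("distilled")
        # part and a part that is refined further.
        d1, r1 = torch.split(self.act(self.conv1(x)), [self.d, self.r], dim=1)
        d2, r2 = torch.split(self.act(self.conv2(r1)), [self.d, self.r], dim=1)
        d3 = self.act(self.conv3(r2))
        distilled = torch.cat([d1, d2, d3], dim=1)
        # Binary attention gates the aggregated distilled features before fusion.
        return self.fuse(self.attn(distilled)) + x  # residual connection


if __name__ == "__main__":
    block = AGDBSketch(channels=48)
    out = block(torch.randn(1, 48, 64, 64))
    print(out.shape)  # torch.Size([1, 48, 64, 64])
```

The point of binarizing the attention map is that near-zero responses can be masked out cheaply rather than multiplied through, which is where the claimed compression of redundant computation would come from; the straight-through estimator is one common way to keep such a hard gate trainable, used here purely as an assumption about how the sparse binary attention module might be realized.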

Keywords

Single image super-resolution; lightweight network; binary attention; information distillation

Cite This Article

APA Style
Liu, J., Lin, J., Dong, W., Zhao, X., Liu, J. et al. (2026). BAID: A Lightweight Super-Resolution Network with Binary Attention-Guided Frequency-Aware Information Distillation. Computers, Materials & Continua, 86(2), 1–19. https://doi.org/10.32604/cmc.2025.071397
Vancouver Style
Liu J, Lin J, Dong W, Zhao X, Liu J, Li H. BAID: A Lightweight Super-Resolution Network with Binary Attention-Guided Frequency-Aware Information Distillation. Comput Mater Contin. 2026;86(2):1–19. https://doi.org/10.32604/cmc.2025.071397
IEEE Style
J. Liu, J. Lin, W. Dong, X. Zhao, J. Liu, and H. Li, “BAID: A Lightweight Super-Resolution Network with Binary Attention-Guided Frequency-Aware Information Distillation,” Comput. Mater. Contin., vol. 86, no. 2, pp. 1–19, 2026. https://doi.org/10.32604/cmc.2025.071397



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.