Open Access

ARTICLE


AMSFuse: Adaptive Multi-Scale Feature Fusion Network for Diabetic Retinopathy Classification

Chengzhang Zhu1,2, Ahmed Alasri1, Tao Xu3, Yalong Xiao1,2,*, Abdulrahman Noman1, Raeed Alsabri1, Xuanchu Duan4, Monir Abdullah5

1 School of Computer Science and Engineering, Central South University, Changsha, 410083, China
2 School of Humanities, Central South University, Changsha, 410083, China
3 Department of Human Anatomy and Histology & Embryology, Basic Medical Sciences, Changsha Health Vocational College, Changsha, 410100, China
4 Glaucoma Institute, Changsha Aier Eye Hospital, Changsha, 410000, China
5 Department of Computer Science and Artificial Intelligence, College of Computing and Information Technology, University of Bisha, Bisha, 67714, Saudi Arabia

* Corresponding Author: Yalong Xiao.

Computers, Materials & Continua 2025, 82(3), 5153-5167. https://doi.org/10.32604/cmc.2024.058647

Abstract

Diabetic retinopathy (DR) is a leading cause of blindness, affecting millions of people worldwide. This widespread impact underscores the critical need for reliable and precise diagnostic techniques that ensure prompt diagnosis and effective treatment. Deep learning-based automated diagnosis of diabetic retinopathy can facilitate early detection and treatment. However, traditional deep learning models that focus on local views often learn feature representations that are less discriminative at the semantic level, while models that focus on global semantic-level information may overlook critical, subtle local pathological features. To address this issue, we propose an adaptive multi-scale feature fusion network, AMSFuse, which adaptively combines multi-scale global and local features without compromising their individual representations. Specifically, our model extracts global features that capture high-level contextual information from retinal images. Concurrently, local features capture fine-grained details, such as microaneurysms, hemorrhages, and exudates, which are critical for DR diagnosis. These global and local features are adaptively fused by a fusion block, followed by an Integrated Attention Mechanism (IAM) that refines the fused features by emphasizing relevant regions, thereby enhancing DR classification accuracy. Our model achieves 86.3% accuracy on the APTOS dataset and 96.6% on the RFMiD dataset, both of which are comparable to state-of-the-art methods.
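To make the fusion idea concrete, the following is a minimal, self-contained sketch of the two steps the abstract describes: an adaptive gate that mixes a global feature vector with a local one, and a softmax re-weighting that stands in for the attention refinement. This is an illustrative toy, not the authors' implementation; the scalar gate weight `gate_w` and the per-element softmax attention are simplifying assumptions introduced here.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs):
    # Numerically stable softmax over a plain list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def adaptive_fuse(global_feat, local_feat, gate_w=2.0):
    # A gate alpha in (0, 1) decides, per element, how much of the
    # global vs. local feature to keep (a toy stand-in for the
    # paper's fusion block; gate_w is an assumed hyperparameter).
    fused = []
    for g, l in zip(global_feat, local_feat):
        alpha = sigmoid(gate_w * (g - l))
        fused.append(alpha * g + (1.0 - alpha) * l)
    return fused

def attention_refine(fused):
    # Softmax weights emphasize the larger fused responses, loosely
    # mimicking how an attention mechanism highlights relevant regions.
    weights = softmax(fused)
    return [w * f for w, f in zip(weights, fused)]

# Example: a "global" and a "local" feature vector of the same length.
g = [0.9, 0.1, 0.5]
l = [0.2, 0.8, 0.5]
fused = adaptive_fuse(g, l)
refined = attention_refine(fused)
```

Because the gate is a convex combination, each fused value always lies between the corresponding global and local values; where the two agree (index 2 above), the fused value equals them exactly.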

Keywords

Diabetic retinopathy; multi-scale feature fusion; global features; local features; integrated attention mechanism; retinal images

Cite This Article

APA Style
Zhu, C., Alasri, A., Xu, T., Xiao, Y., Noman, A. et al. (2025). AMSFuse: adaptive multi-scale feature fusion network for diabetic retinopathy classification. Computers, Materials & Continua, 82(3), 5153–5167. https://doi.org/10.32604/cmc.2024.058647
Vancouver Style
Zhu C, Alasri A, Xu T, Xiao Y, Noman A, Alsabri R, et al. AMSFuse: adaptive multi-scale feature fusion network for diabetic retinopathy classification. Comput Mater Contin. 2025;82(3):5153–5167. https://doi.org/10.32604/cmc.2024.058647
IEEE Style
C. Zhu et al., “AMSFuse: Adaptive Multi-Scale Feature Fusion Network for Diabetic Retinopathy Classification,” Comput. Mater. Contin., vol. 82, no. 3, pp. 5153–5167, 2025. https://doi.org/10.32604/cmc.2024.058647



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.