Open Access

ARTICLE


Modified Anam-Net Based Lightweight Deep Learning Model for Retinal Vessel Segmentation

Syed Irtaza Haider1, Khursheed Aurangzeb2,*, Musaed Alhussein2

1 College of Computer and Information Sciences, King Saud University, Riyadh, 11543, Saudi Arabia
2 Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, 11543, Saudi Arabia

* Corresponding Author: Khursheed Aurangzeb

Computers, Materials & Continua 2022, 73(1), 1501-1526. https://doi.org/10.32604/cmc.2022.025479

Abstract

The accurate segmentation of retinal vessels is a challenging task due to the presence of various pathologies, the low contrast of thin vessels, and non-uniform illumination. In recent years, encoder-decoder networks have achieved outstanding performance in retinal vessel segmentation, but at the cost of high computational complexity. To address these challenges and to reduce the computational complexity, we propose a lightweight convolutional neural network (CNN)-based encoder-decoder deep learning model for accurate retinal vessel segmentation. The proposed model consists of an encoder-decoder architecture along with bottleneck layers that perform depth-wise squeezing, followed by full convolution, and finally depth-wise stretching. The inspiration for the proposed model is taken from the recently developed Anam-Net model, which was tested on CT images for COVID-19 identification. In our lightweight model, we use a stack of two 3 × 3 convolution layers (without spatial pooling in between) instead of the single 3 × 3 convolution layer proposed in Anam-Net, to increase the receptive field and to reduce the number of trainable parameters. The proposed method also uses fewer filters in all convolutional layers than the original Anam-Net and does not increase the number of filters as the spatial resolution decreases. These modifications do not compromise segmentation accuracy, yet they make the architecture significantly lighter in terms of trainable parameters and computation time. The proposed architecture has considerably fewer parameters (1.01 M) than Anam-Net (4.47 M), U-Net (31.05 M), SegNet (29.50 M), and most other recent works. The proposed model requires neither problem-specific pre- or post-processing nor handcrafted features. In addition, being both accurate and lightweight makes the proposed method a suitable candidate for screening platforms at the point of care. We evaluated the proposed model on the open-access DRIVE, STARE, and CHASE_DB datasets. The experimental results show that the proposed model outperforms several state-of-the-art methods, such as U-Net and its variants, the fully convolutional network (FCN), SegNet, CCNet, ResWNet, the residual connection-based encoder-decoder network (RCED-Net), and the scale-space approximation network (SSANet), in terms of {Dice coefficient, sensitivity (SN), accuracy (ACC), area under the ROC curve (AUC)}, with scores of {0.8184, 0.8561, 0.9669, 0.9868} on the DRIVE dataset, {0.8233, 0.8581, 0.9726, 0.9901} on the STARE dataset, and {0.8138, 0.8604, 0.9752, 0.9906} on the CHASE_DB dataset. Additionally, we performed cross-training experiments on the DRIVE and STARE datasets; the results indicate the generalization ability and robustness of the proposed model.
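To make the two architectural ideas above concrete, the following is a minimal PyTorch sketch of (a) a block of two stacked 3 × 3 convolutions with no spatial pooling in between and (b) a bottleneck block that squeezes the channel depth with a 1 × 1 convolution, applies a full 3 × 3 convolution, and stretches the depth back. The filter counts, squeeze ratio, normalization/activation choices, and the residual connection are illustrative assumptions, not values taken from the paper.

import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    # Two stacked 3x3 convolutions without spatial pooling in between,
    # used here in place of a single 3x3 convolution.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class BottleneckBlock(nn.Module):
    # Depth-wise squeeze (1x1 conv), full 3x3 convolution, then depth-wise
    # stretch (1x1 conv). squeeze_ratio=4 and the residual sum are assumptions.
    def __init__(self, channels, squeeze_ratio=4):
        super().__init__()
        mid = channels // squeeze_ratio
        self.block = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),        # squeeze channel depth
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1),  # full convolution
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),        # stretch channel depth
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return x + self.block(x)

# Sanity check: both blocks preserve spatial resolution and stay small.
x = torch.randn(1, 16, 64, 64)
blocks = nn.Sequential(DoubleConv(16, 16), BottleneckBlock(16))
print(blocks(x).shape)                              # torch.Size([1, 16, 64, 64])
print(sum(p.numel() for p in blocks.parameters()))  # a few thousand parameters

The evaluation metrics quoted above can be computed from the confusion-matrix counts of a binarized vessel map; a small NumPy helper (the function name segmentation_metrics is ours) is sketched below. AUC additionally requires per-pixel probabilities rather than a binary map, e.g., via sklearn.metrics.roc_auc_score.

import numpy as np

def segmentation_metrics(pred, gt):
    # pred, gt: boolean arrays of the same shape (vessel pixel = True).
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn)     # Dice coefficient
    sn = tp / (tp + fn)                    # sensitivity (recall on vessels)
    acc = (tp + tn) / (tp + tn + fp + fn)  # accuracy
    return dice, sn, acc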

Cite This Article

S. Irtaza Haider, K. Aurangzeb and M. Alhussein, "Modified Anam-Net based lightweight deep learning model for retinal vessel segmentation," Computers, Materials & Continua, vol. 73, no. 1, pp. 1501–1526, 2022. https://doi.org/10.32604/cmc.2022.025479



This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.