Open Access

ARTICLE

DMHFR: Decoder with Multi-Head Feature Receptors for Tract Image Segmentation

Jianuo Huang1,2, Bohan Lai2, Weiye Qiu3, Caixu Xu4, Jie He1,5,*

1 Department of Endoscopy Center, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, 361015, China
2 School of Computing and Data Science, Xiamen University Malaysia, Sepang, 43900, Malaysia
3 School of Computer Science and Technology, Tongji University, Shanghai, 200092, China
4 Guangxi Key Laboratory of Machine Vision and Intelligent Control, Wuzhou University, Wuzhou, 543002, China
5 Xiamen Clinical Research Center for Cancer Therapy, Xiamen, 361015, China

* Corresponding Author: Jie He. Email: email

Computers, Materials & Continua 2025, 82(3), 4841-4862. https://doi.org/10.32604/cmc.2025.059733

Abstract

The self-attention mechanism of Transformers, which captures long-range contextual information, has demonstrated significant potential in image segmentation. However, its ability to learn local contextual relationships between pixels requires further improvement. Previous methods struggle to efficiently manage multi-scale features of different granularities from the encoder backbone, leaving room for improvement in their global representation and feature extraction capabilities. To address these challenges, we propose a novel Decoder with Multi-Head Feature Receptors (DMHFR), which receives multi-scale features from the encoder backbone and organizes them into three feature groups of different granularities: coarse, fine-grained, and the full set. After feature capture and modeling operations, these groups are processed by Multi-Head Feature Receptors (MHFRs), which comprise two Three-Head Feature Receptors (THFRs) and one Four-Head Feature Receptor (FHFR). Each group of features passes through its MHFR and is then fed into axial transformers, which help the model capture long-range dependencies within the features. The three MHFRs produce three distinct feature outputs; the output from the FHFR additionally serves as auxiliary features in the prediction head, and the prediction outputs and their losses are ultimately aggregated. Experimental results show that the Transformer using DMHFR outperforms 15 state-of-the-art (SOTA) methods on five public datasets. Specifically, it achieves significant improvements in mean Dice scores over the classic Parallel Reverse Attention Network (PraNet), with gains of 4.1%, 2.2%, 1.4%, 8.9%, and 16.3% on the CVC-ClinicDB, Kvasir-SEG, CVC-T, CVC-ColonDB, and ETIS-LaribPolypDB datasets, respectively.
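To make the described data flow concrete, the following is a minimal PyTorch sketch of a decoder organized along the lines of the abstract: multi-scale backbone features are grouped into fine-grained, coarse, and full-set groups, each group is fused by a multi-head (per-scale) feature receptor, passed through an attention stage, and mapped to a prediction. All module names, channel sizes, and the use of ordinary multi-head self-attention in place of the paper's axial transformers are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a DMHFR-style decoder; names, channel sizes, and the
# attention stage are assumptions for illustration, not the published code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureReceptor(nn.Module):
    """Fuses one group of backbone feature maps with one 'head' (branch) per scale:
    three heads for a THFR-like receptor, four for an FHFR-like receptor."""
    def __init__(self, in_channels, out_channels=64):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )
        self.fuse = nn.Conv2d(out_channels * len(in_channels), out_channels, 3, padding=1)

    def forward(self, feats):
        target = feats[0].shape[-2:]  # resize every scale to the group's largest map
        mapped = [F.interpolate(h(f), size=target, mode="bilinear", align_corners=False)
                  for h, f in zip(self.heads, feats)]
        return self.fuse(torch.cat(mapped, dim=1))


class DMHFRDecoder(nn.Module):
    """Groups encoder features into fine-grained / coarse / full-set groups,
    fuses each with its feature receptor, applies attention for long-range
    dependencies, and returns one prediction map per group for loss aggregation."""
    def __init__(self, enc_channels=(64, 128, 320, 512), dim=64):
        super().__init__()
        c1, c2, c3, c4 = enc_channels
        self.thfr_fine = FeatureReceptor([c1, c2, c3], dim)      # fine-grained group
        self.thfr_coarse = FeatureReceptor([c2, c3, c4], dim)    # coarse group
        self.fhfr_full = FeatureReceptor([c1, c2, c3, c4], dim)  # full set
        # Stand-in for the axial transformer: plain multi-head self-attention.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.pred_heads = nn.ModuleList(nn.Conv2d(dim, 1, 1) for _ in range(3))

    def _attend(self, x):
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)          # (B, HW, C)
        tokens, _ = self.attn(tokens, tokens, tokens)  # long-range dependencies
        return tokens.transpose(1, 2).reshape(b, c, h, w)

    def forward(self, f1, f2, f3, f4):
        groups = [self.thfr_fine([f1, f2, f3]),
                  self.thfr_coarse([f2, f3, f4]),
                  self.fhfr_full([f1, f2, f3, f4])]
        # The full-set (FHFR-like) output doubles as an auxiliary cue for prediction.
        return [head(self._attend(g)) for head, g in zip(self.pred_heads, groups)]


# Toy forward pass with feature maps shaped like a typical 4-stage backbone
# (spatial sizes assume a 352x352 input; purely illustrative).
feats = [torch.randn(1, c, s, s) for c, s in zip((64, 128, 320, 512), (88, 44, 22, 11))]
preds = DMHFRDecoder()(*feats)   # three single-channel prediction maps
```

In a full model, each of the three prediction maps would be compared against the ground-truth mask and the per-head losses summed, mirroring the loss aggregation described in the abstract.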

Keywords

Medical image segmentation; feature exploration; feature aggregation; deep learning; multi-head feature receptor

Cite This Article

APA Style
Huang, J., Lai, B., Qiu, W., Xu, C., & He, J. (2025). DMHFR: decoder with multi-head feature receptors for tract image segmentation. Computers, Materials & Continua, 82(3), 4841–4862. https://doi.org/10.32604/cmc.2025.059733
Vancouver Style
Huang J, Lai B, Qiu W, Xu C, He J. DMHFR: decoder with multi-head feature receptors for tract image segmentation. Comput Mater Contin. 2025;82(3):4841–4862. https://doi.org/10.32604/cmc.2025.059733
IEEE Style
J. Huang, B. Lai, W. Qiu, C. Xu, and J. He, “DMHFR: Decoder with Multi-Head Feature Receptors for Tract Image Segmentation,” Comput. Mater. Contin., vol. 82, no. 3, pp. 4841–4862, 2025. https://doi.org/10.32604/cmc.2025.059733

Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.