Open Access

ARTICLE


SwinHCAD: A Robust Multi-Modality Segmentation Model for Brain Tumors Using Transformer and Channel-Wise Attention

Seyong Jin1, Muhammad Fayaz2, L. Minh Dang3, Hyoung-Kyu Song3, Hyeonjoon Moon2,*

1 Department of Artificial Intelligence, Sejong University, Seoul, 05006, Republic of Korea
2 Department of Computer Science and Engineering, Sejong University, Seoul, 05006, Republic of Korea
3 Department of Information and Communication Engineering and Convergence Engineering for Intelligent Drone, Sejong University, Seoul, 05006, Republic of Korea

* Corresponding Author: Hyeonjoon Moon

(This article belongs to the Special Issue: New Trends in Image Processing)

Computers, Materials & Continua 2026, 86(1), 1-23. https://doi.org/10.32604/cmc.2025.070667

Abstract

Brain tumors require precise segmentation for diagnosis and treatment planning because of their complex morphology and heterogeneous characteristics. Although MRI-based automatic brain tumor segmentation reduces the burden on medical staff and provides quantitative information, existing methods and recent models still struggle to accurately capture and classify fine tumor boundaries and diverse tumor morphologies. To address these challenges and improve segmentation performance, this study introduces SwinHCAD, a SwinUNETR-based model that integrates a new decoder block, the Hierarchical Channel-wise Attention Decoder (HCAD), with the SwinUNETR encoder. The HCAD block applies channel-wise attention to the hierarchical, multi-scale features passed from the encoder, fusing information across scales and preserving spatial detail throughout the reconstruction phase. Evaluations on the recent BraTS GLI datasets show that SwinHCAD achieves higher segmentation accuracy than baseline models on both the Dice score and HD95 metrics across all tumor subregions (whole tumor, tumor core, and enhancing tumor; WT, TC, and ET). Ablation studies further verify the effectiveness of the HCAD decoder block and clarify the rationale behind the model design. These results are expected to improve the precision of automated brain tumor segmentation and thereby support more efficient clinical diagnosis and treatment planning.
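The abstract does not specify the internals of the HCAD block, so the following is only a rough illustration of the general idea it describes: a decoder stage that upsamples a coarse feature map, fuses it with the corresponding encoder skip connection, and re-weights the fused channels with a squeeze-and-excitation style channel attention. All module names and hyperparameters (HCADBlock, ChannelAttention3D, reduction, channel counts) are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn


class ChannelAttention3D(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative sketch only)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)  # global spatial squeeze
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w  # re-weight channels by learned importance


class HCADBlock(nn.Module):
    """Hypothetical decoder stage: upsample, fuse encoder skip features, apply channel attention."""

    def __init__(self, in_channels: int, skip_channels: int, out_channels: int):
        super().__init__()
        self.up = nn.ConvTranspose3d(in_channels, out_channels, kernel_size=2, stride=2)
        self.attn = ChannelAttention3D(out_channels + skip_channels)
        self.conv = nn.Sequential(
            nn.Conv3d(out_channels + skip_channels, out_channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.up(x)                    # restore spatial resolution
        x = torch.cat([x, skip], dim=1)   # fuse hierarchical encoder features
        x = self.attn(x)                  # emphasize informative channels
        return self.conv(x)


# Usage example: fuse a coarse decoder feature with an encoder skip connection.
block = HCADBlock(in_channels=96, skip_channels=48, out_channels=48)
out = block(torch.randn(1, 96, 16, 16, 16), torch.randn(1, 48, 32, 32, 32))
print(out.shape)  # torch.Size([1, 48, 32, 32, 32])
```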

Keywords

Attention mechanism; brain tumor segmentation; channel-wise attention; decoder; deep learning; medical imaging; MRI; transformer; U-Net

Cite This Article

APA Style
Jin, S., Fayaz, M., Dang, L. M., Song, H., & Moon, H. (2026). SwinHCAD: A Robust Multi-Modality Segmentation Model for Brain Tumors Using Transformer and Channel-Wise Attention. Computers, Materials & Continua, 86(1), 1–23. https://doi.org/10.32604/cmc.2025.070667
Vancouver Style
Jin S, Fayaz M, Dang LM, Song H, Moon H. SwinHCAD: A Robust Multi-Modality Segmentation Model for Brain Tumors Using Transformer and Channel-Wise Attention. Comput Mater Contin. 2026;86(1):1–23. https://doi.org/10.32604/cmc.2025.070667
IEEE Style
S. Jin, M. Fayaz, L. M. Dang, H. Song, and H. Moon, “SwinHCAD: A Robust Multi-Modality Segmentation Model for Brain Tumors Using Transformer and Channel-Wise Attention,” Comput. Mater. Contin., vol. 86, no. 1, pp. 1–23, 2026. https://doi.org/10.32604/cmc.2025.070667



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.