Open Access

ARTICLE

Enhanced BEV Scene Segmentation: De-Noise Channel Attention for Resource-Constrained Environments

Argho Dey1, Yunfei Yin1,2,*, Zheng Yuan1, Zhiwen Zeng1, Xianjian Bao3, Md Minhazul Islam1

1 College of Computer Science, Chongqing University, Chongqing, 400044, China
2 SUGON Industrial Control and Security Center, Chengdu, 610225, China
3 Department of Computer Science, Maharishi University of Management, Fairfield, IA 52557, USA

* Corresponding Author: Yunfei Yin. Email: email

Computers, Materials & Continua 2026, 87(1), 90. https://doi.org/10.32604/cmc.2025.074122

Abstract

Autonomous vehicles rely heavily on accurate and efficient scene segmentation for safe navigation and efficient operation. Traditional Bird’s Eye View (BEV) methods for semantic scene segmentation, which leverage multimodal sensor fusion, often struggle with noisy data that causes sensor misalignment and performance degradation, and they demand high-performance GPUs. This paper introduces Enhanced Channel Attention BEV (ECABEV), a novel approach designed to address these challenges under limited GPU memory. ECABEV integrates camera and radar data through a de-noise enhanced channel attention mechanism, which utilizes global average and max pooling to effectively filter out noise while preserving discriminative features. Furthermore, an improved fusion approach is proposed to efficiently merge categorical data across modalities. To reduce computational overhead, a bilinear interpolation layer normalization method is devised to ensure spatial feature fidelity. Moreover, a scalable cross-entropy loss function is designed to handle imbalanced classes with minimal sacrifice of computational efficiency. Extensive experiments on the nuScenes dataset demonstrate that ECABEV achieves state-of-the-art performance with an IoU of 39.961, using a lightweight ViT-B/14 backbone and a lower input resolution (224 × 224). These results highlight the cost-effectiveness and practical applicability of our approach, even on low-end devices. The code is publicly available at: https://github.com/YYF-CQU/ECABEV.git.
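
To illustrate the channel attention idea described in the abstract (gating channels with descriptors from global average and max pooling), the following is a minimal PyTorch-style sketch. It is not taken from the released ECABEV code; the class name, reduction ratio, and layer structure are illustrative assumptions only.

# Minimal sketch of channel attention combining global average and max pooling,
# in the spirit of the de-noise enhanced channel attention described above.
# All names and sizes are illustrative, not the authors' exact implementation.
import torch
import torch.nn as nn

class ChannelAttentionSketch(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared bottleneck MLP applied to both pooled channel descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) fused camera-radar BEV feature map.
        b, c, _, _ = x.shape
        avg_desc = x.mean(dim=(2, 3))   # global average pooling -> (B, C)
        max_desc = x.amax(dim=(2, 3))   # global max pooling -> (B, C)
        # Combine the two descriptors and gate each channel in [0, 1],
        # suppressing noisy channels while keeping discriminative ones.
        weights = self.sigmoid(self.mlp(avg_desc) + self.mlp(max_desc))
        return x * weights.view(b, c, 1, 1)

# Usage example on a dummy BEV feature map.
if __name__ == "__main__":
    feats = torch.randn(2, 256, 200, 200)
    attn = ChannelAttentionSketch(channels=256)
    print(attn(feats).shape)  # torch.Size([2, 256, 200, 200])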

Keywords

Autonomous vehicle; BEV; attention mechanism; sensor fusion; scene segmentation

Cite This Article

APA Style
Dey, A., Yin, Y., Yuan, Z., Zeng, Z., Bao, X. et al. (2026). Enhanced BEV Scene Segmentation: De-Noise Channel Attention for Resource-Constrained Environments. Computers, Materials & Continua, 87(1), 90. https://doi.org/10.32604/cmc.2025.074122
Vancouver Style
Dey A, Yin Y, Yuan Z, Zeng Z, Bao X, Islam MM. Enhanced BEV Scene Segmentation: De-Noise Channel Attention for Resource-Constrained Environments. Comput Mater Contin. 2026;87(1):90. https://doi.org/10.32604/cmc.2025.074122
IEEE Style
A. Dey, Y. Yin, Z. Yuan, Z. Zeng, X. Bao, and M. M. Islam, “Enhanced BEV Scene Segmentation: De-Noise Channel Attention for Resource-Constrained Environments,” Comput. Mater. Contin., vol. 87, no. 1, pp. 90, 2026. https://doi.org/10.32604/cmc.2025.074122



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.