TY - EJOUR
AU - Dey, Argho
AU - Yin, Yunfei
AU - Yuan, Zheng
AU - Zeng, Zhiwen
AU - Bao, Xianjian
AU - Islam, Md Minhazul
TI - Enhanced BEV Scene Segmentation: De-Noise Channel Attention for Resource-Constrained Environments
T2 - Computers, Materials & Continua
PY - 2026
VL - 87
IS - 1
SN - 1546-2226
AB - Autonomous vehicles rely heavily on accurate and efficient scene segmentation for safe navigation and reliable operation. Traditional Bird's Eye View (BEV) methods for semantic scene segmentation, which leverage multimodal sensor fusion, often struggle with noisy and misaligned sensor data and demand high-performance GPUs, leading to performance degradation in resource-constrained settings. This paper introduces Enhanced Channel Attention BEV (ECABEV), a novel approach designed to address these challenges under limited GPU memory. ECABEV integrates camera and radar data through a de-noise enhanced channel attention mechanism, which utilizes global average and max pooling to effectively filter out noise while preserving discriminative features. Furthermore, an improved fusion approach is proposed to efficiently merge categorical data across modalities. To reduce computational overhead, a bilinear interpolation layer normalization method is devised to preserve spatial feature fidelity. Moreover, a scalable cross-entropy loss function is designed to handle imbalanced classes with minimal sacrifice of computational efficiency. Extensive experiments on the nuScenes dataset demonstrate that ECABEV achieves state-of-the-art performance with an IoU of 39.961, using a lightweight ViT-B/14 backbone and a lower input resolution (224 × 224). These results highlight the approach's cost-effectiveness and practical applicability, even on low-end devices. The code is publicly available at: https://github.com/YYF-CQU/ECABEV.git.
KW - Autonomous vehicle
KW - BEV
KW - attention mechanism
KW - sensor fusion
KW - scene segmentation
DO - 10.32604/cmc.2025.074122
ER -