Open Access

ARTICLE

Research on Vehicle Safety Based on Multi-Sensor Feature Fusion for Autonomous Driving Task

Yang Su1,*, Xianrang Shi1, Tinglun Song2

1 College of Energy and Power Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China
2 Institute of Advanced Technology, Chery Automobile Co., Ltd., Wuhu, 241009, China

* Corresponding Author: Yang Su.

(This article belongs to the Special Issue: Research on Deep Learning-based Object Detection and Its Derivative Key Technologies)

Computers, Materials & Continua 2025, 83(3), 5831-5848. https://doi.org/10.32604/cmc.2025.064036

Abstract

Ensuring that autonomous vehicles maintain high precision and rapid response capabilities in complex and dynamic driving environments is a critical challenge in the field of autonomous driving. This study aims to enhance the learning efficiency of multi-sensor feature fusion in autonomous driving tasks, thereby improving the safety and responsiveness of the system. To achieve this goal, we propose an innovative multi-sensor feature fusion model that integrates three distinct modalities: visual, radar, and lidar data. The model optimizes the feature fusion process through the introduction of two novel mechanisms: Sparse Channel Pooling (SCP) and Residual Triplet-Attention (RTA). Firstly, the SCP mechanism enables the model to adaptively filter out salient feature channels while eliminating the interference of redundant features. This enhances the model’s emphasis on critical features essential for decision-making and strengthens its robustness to environmental variability. Secondly, the RTA mechanism addresses the issue of feature misalignment across different modalities by effectively aligning key cross-modal features. This alignment reduces the computational overhead associated with redundant features and enhances the overall efficiency of the system. Furthermore, this study incorporates a reinforcement learning module designed to optimize strategies within a continuous action space. By integrating this module with the feature fusion learning process, the entire system is capable of learning efficient driving strategies in an end-to-end manner within the CARLA autonomous driving simulator. Experimental results demonstrate that the proposed model significantly enhances the perception and decision-making accuracy of the autonomous driving system in complex traffic scenarios while maintaining real-time responsiveness. This work provides a novel perspective and technical pathway for the application of multi-sensor data fusion in autonomous driving.
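The abstract describes Sparse Channel Pooling (SCP) as adaptively selecting salient feature channels while suppressing redundant ones. The paper's actual formulation is not reproduced on this page; the following is only a rough illustrative sketch of one way such channel-level sparse gating could work, using mean activation as the saliency score and a hard top-k mask — the function name, the scoring rule, and the value of k are all our assumptions, not the authors' definition of SCP.

```python
import numpy as np

def sparse_channel_pooling(features: np.ndarray, k: int) -> np.ndarray:
    """Keep the k most salient channels and zero the rest.

    features: feature map of shape (channels, height, width)
    k: number of channels to retain
    """
    # Score each channel by its mean activation (an assumed saliency measure).
    saliency = features.mean(axis=(1, 2))
    # Indices of the k highest-scoring channels.
    keep = np.argsort(saliency)[-k:]
    # Build a binary channel mask and broadcast it over spatial dimensions.
    mask = np.zeros(features.shape[0], dtype=features.dtype)
    mask[keep] = 1.0
    return features * mask[:, None, None]

rng = np.random.default_rng(0)
x = rng.random((8, 4, 4)).astype(np.float32)  # toy 8-channel feature map
y = sparse_channel_pooling(x, k=3)
```

In this toy version, only 3 of the 8 channels survive the gate; a learned variant would replace the fixed mean-activation score with trainable channel weights.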

Keywords

Multi-sensor fusion; autonomous driving; feature selection; attention mechanism; reinforcement learning

Cite This Article

APA Style
Su, Y., Shi, X., & Song, T. (2025). Research on Vehicle Safety Based on Multi-Sensor Feature Fusion for Autonomous Driving Task. Computers, Materials & Continua, 83(3), 5831–5848. https://doi.org/10.32604/cmc.2025.064036
Vancouver Style
Su Y, Shi X, Song T. Research on Vehicle Safety Based on Multi-Sensor Feature Fusion for Autonomous Driving Task. Comput Mater Contin. 2025;83(3):5831–5848. https://doi.org/10.32604/cmc.2025.064036
IEEE Style
Y. Su, X. Shi, and T. Song, “Research on Vehicle Safety Based on Multi-Sensor Feature Fusion for Autonomous Driving Task,” Comput. Mater. Contin., vol. 83, no. 3, pp. 5831–5848, 2025. https://doi.org/10.32604/cmc.2025.064036



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.