
Open Access

ARTICLE

A Robust Vision-Based Framework for Traffic Sign and Light Detection in Automated Driving Systems

Mohammed Al-Mahbashi1,2,*, Ali Ahmed3, Abdolraheem Khader4,*, Shakeel Ahmad3, Mohamed A. Damos5, Ahmed Abdu6
1 School of Electronic and Control Engineering, Chang’an University, Xi’an, 710064, China
2 Department of Mechatronics Engineering, Faculty of Engineering, Sana’a University, Sana’a, 11311, Yemen
3 Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
4 School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China
5 School of Resources and Environment, University of Electronic Science and Technology of China, Chengdu, 610054, China
6 School of Information, Xi’an University of Finance and Economics, Xi’an, 710100, China
* Corresponding Authors: Mohammed Al-Mahbashi; Abdolraheem Khader
(This article belongs to the Special Issue: Machine Learning and Deep Learning-Based Pattern Recognition)

Computer Modeling in Engineering & Sciences https://doi.org/10.32604/cmes.2025.075909

Received 11 November 2025; Accepted 25 December 2025; Published online 05 January 2026

Abstract

Reliable detection of traffic signs and lights (TSLs) at long range and under varying illumination is essential for the perception and safety of automated driving systems (ADS). Traditional object detection models often degrade significantly in real-world environments characterized by high dynamic range and complex lighting. To overcome these limitations, this research presents FED-YOLOv10s, an improved, lightweight object detection framework based on You Only Look Once v10 (YOLOv10). The proposed model integrates a C2f-Faster block derived from FasterNet to reduce parameters and floating-point operations, an Efficient Multiscale Attention (EMA) mechanism to improve the extraction of invariant TSL features, and a Deformable Convolutional Networks v4 (DCNv4) module to enhance multiscale spatial adaptability. Experimental findings demonstrate that the proposed architecture achieves an optimal balance between computational efficiency and detection accuracy, attaining an F1-score of 91.8% and an mAP@0.5 of 95.1% while reducing the parameter count to 8.13 million. Comparative analyses across multiple traffic sign detection benchmarks show that FED-YOLOv10s outperforms state-of-the-art models in precision, recall, and mAP. These results highlight FED-YOLOv10s as a robust, efficient, and deployable solution for intelligent traffic perception in ADS.
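The parameter savings cited above come largely from FasterNet's partial convolution, which convolves only a fraction of the channels and passes the rest through unchanged. Below is a minimal PyTorch sketch of that idea as it might appear inside a C2f-Faster block; the class names (PartialConv, FasterBlock), the 1/4 channel split, and the two-layer pointwise MLP are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn

class PartialConv(nn.Module):
    """FasterNet-style partial convolution: apply a 3x3 conv to only
    1/n_div of the channels; the remaining channels pass through
    unchanged, cutting parameters and FLOPs roughly in proportion."""
    def __init__(self, dim: int, n_div: int = 4):
        super().__init__()
        self.dim_conv = dim // n_div          # channels that get convolved
        self.dim_pass = dim - self.dim_conv   # channels passed through as-is
        self.conv = nn.Conv2d(self.dim_conv, self.dim_conv,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.split(x, [self.dim_conv, self.dim_pass], dim=1)
        return torch.cat((self.conv(x1), x2), dim=1)

class FasterBlock(nn.Module):
    """Partial conv followed by a two-layer pointwise MLP with a
    residual connection, in the spirit of FasterNet's basic block."""
    def __init__(self, dim: int, expansion: int = 2):
        super().__init__()
        hidden = dim * expansion
        self.pconv = PartialConv(dim)
        self.mlp = nn.Sequential(
            nn.Conv2d(dim, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, dim, 1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mlp(self.pconv(x))

if __name__ == "__main__":
    block = FasterBlock(dim=64)
    out = block(torch.randn(1, 64, 80, 80))
    print(out.shape)  # torch.Size([1, 64, 80, 80])
```

With a 1/4 split, the 3x3 convolution touches only 16 of 64 channels here, which is where the reduction in floating-point operations comes from; a C2f-Faster block would stack several such blocks in place of the standard C2f bottlenecks.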

Keywords

Automated driving systems; traffic sign and light recognition; YOLO; EMA; DCNv4