Open Access

ARTICLE

A Robust Vision-Based Framework for Traffic Sign and Light Detection in Automated Driving Systems

Mohammed Al-Mahbashi1,2,*, Ali Ahmed3, Abdolraheem Khader4,*, Shakeel Ahmad3, Mohamed A. Damos5, Ahmed Abdu6

1 School of Electronic and Control Engineering, Chang’an University, Xi’an, 710064, China
2 Department of Mechatronics Engineering, Faculty of Engineering, Sana’a University, Sana’a, 11311, Yemen
3 Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
4 School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China
5 School of Resources and Environment, University of Electronic Science and Technology of China, Chengdu, 610054, China
6 School of Information, Xi’an University of Finance and Economics, Xi’an, 710100, China

* Corresponding Authors: Mohammed Al-Mahbashi; Abdolraheem Khader

(This article belongs to the Special Issue: Machine Learning and Deep Learning-Based Pattern Recognition)

Computer Modeling in Engineering & Sciences 2026, 146(1), 40. https://doi.org/10.32604/cmes.2025.075909

Abstract

Reliable detection of traffic signs and lights (TSLs) at long range and under varying illumination is essential for improving the perception and safety of autonomous driving systems (ADS). Traditional object detection models often exhibit significant performance degradation in real-world environments characterized by high dynamic range and complex lighting conditions. To overcome these limitations, this research presents FED-YOLOv10s, an improved and lightweight object detection framework based on You Only Look Once v10 (YOLOv10). The proposed model integrates a C2f-Faster block derived from FasterNet to reduce parameters and floating-point operations, an Efficient Multiscale Attention (EMA) mechanism to improve TSL-relevant feature extraction, and a Deformable Convolutional Networks v4 (DCNv4) module to enhance multiscale spatial adaptability. Experimental findings demonstrate that the proposed architecture achieves an optimal balance between computational efficiency and detection accuracy, attaining an F1-score of 91.8% and an mAP@0.5 of 95.1%, while reducing the parameter count to 8.13 million. Comparative analyses across multiple traffic sign detection benchmarks show that FED-YOLOv10s outperforms state-of-the-art models in precision, recall, and mAP. These results highlight FED-YOLOv10s as a robust, efficient, and deployable solution for intelligent traffic perception in ADS.
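
The article page itself carries no code, so as a rough illustration only: the sketch below shows a FasterNet-style partial convolution and a residual "Faster" unit of the kind the C2f-Faster block builds on, in PyTorch. All class names, the channel ratio, and the block layout are assumptions for exposition, not the authors' implementation.

import torch
import torch.nn as nn

class PartialConv(nn.Module):
    """FasterNet-style partial convolution: convolve only a fraction of the
    channels and pass the rest through untouched, which cuts parameters and
    FLOPs versus a full convolution. (Illustrative sketch, not the paper's
    released code.)"""

    def __init__(self, channels: int, conv_ratio: float = 0.25):
        super().__init__()
        self.conv_channels = int(channels * conv_ratio)      # channels that get convolved
        self.pass_channels = channels - self.conv_channels   # channels passed through unchanged
        self.conv = nn.Conv2d(self.conv_channels, self.conv_channels,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split along the channel axis, convolve one part, keep the other as-is.
        conv_part, pass_part = torch.split(
            x, [self.conv_channels, self.pass_channels], dim=1)
        return torch.cat([self.conv(conv_part), pass_part], dim=1)

class FasterBlock(nn.Module):
    """One residual 'Faster' unit: PartialConv followed by a 1x1
    expand/contract MLP, as in FasterNet. A C2f-Faster block would stack
    several such units inside YOLO's C2f split-and-concat structure."""

    def __init__(self, channels: int, expansion: int = 2):
        super().__init__()
        hidden = channels * expansion
        self.pconv = PartialConv(channels)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mlp(self.pconv(x))  # residual connection

if __name__ == "__main__":
    block = FasterBlock(channels=64)
    out = block(torch.randn(1, 64, 80, 80))  # e.g., an 80x80 YOLO feature map
    print(out.shape)  # torch.Size([1, 64, 80, 80])

Because only a quarter of the channels pass through the 3x3 convolution, this unit needs roughly 1/16 of a full 3x3 convolution's weights at the same width, which is consistent with the abstract's stated goal of reducing parameters and floating-point operations.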

Keywords

Automated driving systems; traffic sign and light recognition; YOLO; EMA; DCNv4

Cite This Article

APA Style
Al-Mahbashi, M., Ahmed, A., Khader, A., Ahmad, S., Damos, M. A., & Abdu, A. (2026). A Robust Vision-Based Framework for Traffic Sign and Light Detection in Automated Driving Systems. Computer Modeling in Engineering & Sciences, 146(1), 40. https://doi.org/10.32604/cmes.2025.075909
Vancouver Style
Al-Mahbashi M, Ahmed A, Khader A, Ahmad S, Damos MA, Abdu A. A Robust Vision-Based Framework for Traffic Sign and Light Detection in Automated Driving Systems. Comput Model Eng Sci. 2026;146(1):40. https://doi.org/10.32604/cmes.2025.075909
IEEE Style
M. Al-Mahbashi, A. Ahmed, A. Khader, S. Ahmad, M. A. Damos, and A. Abdu, “A Robust Vision-Based Framework for Traffic Sign and Light Detection in Automated Driving Systems,” Comput. Model. Eng. Sci., vol. 146, no. 1, Art. no. 40, 2026. https://doi.org/10.32604/cmes.2025.075909



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.