Open Access
ARTICLE
Toward Efficient Traffic-Sign Detection via SlimNeck and Coordinate-Attention Fusion in YOLO-SMM
1 Department of Mechanical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, 50603, Malaysia
2 Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, China
3 School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, 350108, China
4 Mechanical and Industrial Engineering Department, Abu Dhabi University, Zayed City, Abu Dhabi, 59911, United Arab Emirates
* Corresponding Author: Mohammed A. H. Ali. Email:
Computers, Materials & Continua 2026, 86(2), 1-26. https://doi.org/10.32604/cmc.2025.067286
Received 29 April 2025; Accepted 08 July 2025; Issue published 09 December 2025
Abstract
Accurate, real-time traffic-sign detection is a cornerstone of Advanced Driver-Assistance Systems (ADAS) and autonomous vehicles. However, existing one-stage detectors miss distant signs, and two-stage pipelines are impractical for embedded deployment. To address this, we present YOLO-SMM, a lightweight two-stage framework that augments the YOLOv8 baseline with three targeted modules. (1) SlimNeck replaces the PAN/FPN neck with a CSP-OSA/GSConv fusion block, reducing parameters and FLOPs without compromising multi-scale detail. (2) The MCA module introduces row- and column-aware attention weights to selectively amplify small sign regions in cluttered scenes. (3) MPDIoU augments the CIoU loss with a corner-distance term, supplying stable gradients for sub-20-pixel boxes and tightening localization. On the German Traffic Sign Recognition Benchmark (GTSRB), YOLO-SMM attains 96.3% mAP50 and 93.1% mAP50-90 at 90.6 frames per second (FPS), a gain of +1.0% mAP50 at 64 × 64 resolution and +8.3 FPS over the YOLOv8 baseline, and it outperforms YOLOv7, YOLOv5, RetinaNet, EfficientDet, and Faster R-CNN in both accuracy and speed under identical conditions.
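For readers implementing the box-regression change described above, the sketch below illustrates one common MPDIoU-style formulation: an IoU term penalized by the squared distances between matching top-left and bottom-right corners, normalized by the image size. This is a minimal illustration of the general idea, not the paper's exact loss; the function name, box format (x1, y1, x2, y2 in pixels), and normalization constant are assumptions.

```python
# Hypothetical sketch of an MPDIoU-style box loss (not the authors' exact implementation).
import torch

def mpdiou_loss(pred, target, img_w, img_h, eps=1e-7):
    """pred, target: (..., 4) tensors of boxes in (x1, y1, x2, y2) pixel coordinates."""
    # Intersection area
    ix1 = torch.max(pred[..., 0], target[..., 0])
    iy1 = torch.max(pred[..., 1], target[..., 1])
    ix2 = torch.min(pred[..., 2], target[..., 2])
    iy2 = torch.min(pred[..., 3], target[..., 3])
    inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)

    # Union area and IoU
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Squared distances between matching corners, normalized by the image diagonal
    d1 = (pred[..., 0] - target[..., 0]) ** 2 + (pred[..., 1] - target[..., 1]) ** 2
    d2 = (pred[..., 2] - target[..., 2]) ** 2 + (pred[..., 3] - target[..., 3]) ** 2
    norm = img_w ** 2 + img_h ** 2

    mpdiou = iou - d1 / norm - d2 / norm
    return 1.0 - mpdiou
```

The corner-distance penalty keeps a nonzero gradient even when predicted and ground-truth boxes do not overlap, which is why such terms help localize very small (sub-20-pixel) signs.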
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.