Open Access
ARTICLE
Lane Line Detection Method for Complex Road Scenes Based on DeepLabv3+ and MobileNetV4
College of Electronic and Information Engineering, Shandong University of Science and Technology, Qingdao, 266590, China
* Corresponding Author: Lihua Wang. Email:
Computers, Materials & Continua 2026, 87(1), 55. https://doi.org/10.32604/cmc.2025.072799
Received 03 September 2025; Accepted 27 November 2025; Issue published 10 February 2026
Abstract
With the continuous development of artificial intelligence and computer vision technology, numerous deep learning-based lane line detection methods have emerged. DeepLabv3+, as a classic semantic segmentation model, has found widespread application in lane line detection. However, the accuracy of lane line segmentation is often compromised by factors such as changes in lighting conditions, occlusions, and wear on the lane lines. Additionally, DeepLabv3+ has a large parameter count and memory footprint, which makes deployment on embedded platforms difficult. To address these issues, this paper proposes a lane line detection method for complex road scenes based on DeepLabv3+ and MobileNetV4 (MNv4). First, the lightweight MNv4 is adopted as the backbone network, and the standard convolutions in the Atrous Spatial Pyramid Pooling (ASPP) module are replaced with depthwise separable convolutions. Second, a polarization attention mechanism is introduced after the ASPP module to enhance the model's generalization capability. Finally, the Simple Linear Iterative Clustering (SLIC) superpixel segmentation algorithm is employed to preserve lane line edge information. MNv4-DeepLabv3+ was tested on the TuSimple and CULane datasets. On TuSimple, the Mean Intersection over Union (MIoU) and Mean Pixel Accuracy (mPA) improved by 1.01% and 7.49%, respectively; on CULane, MIoU and mPA increased by 3.33% and 7.74%, respectively. The number of parameters decreased from 54.84 M to 3.19 M. Experimental results demonstrate that MNv4-DeepLabv3+ substantially reduces the model's parameter count while enhancing segmentation accuracy.
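Replacing the standard 3×3 convolutions in ASPP with depthwise separable convolutions is a key source of the parameter reduction described above. The following is a minimal PyTorch sketch of that substitution; the paper does not publish its implementation, so the channel width (256) and dilation rate (6) are illustrative assumptions based on the standard DeepLabv3+ ASPP design:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable 3x3 convolution: a per-channel (depthwise) conv
    followed by a 1x1 pointwise conv, as a drop-in for an ASPP branch."""
    def __init__(self, in_ch, out_ch, dilation=1):
        super().__init__()
        self.depthwise = nn.Conv2d(
            in_ch, in_ch, kernel_size=3, padding=dilation,
            dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.pointwise(self.depthwise(x))))

# Standard ASPP branch (dilation 6) vs. its separable counterpart,
# 256 -> 256 channels: ~590k weights vs. ~68k weights.
standard = nn.Conv2d(256, 256, kernel_size=3, padding=6, dilation=6)
separable = DepthwiseSeparableConv(256, 256, dilation=6)

x = torch.randn(1, 256, 33, 33)  # typical ASPP input resolution is illustrative
assert standard(x).shape == separable(x).shape  # same output shape
```

With a 3×3 kernel, the separable variant needs roughly 1/9 of the multiply-accumulates of the standard convolution at the same channel width, which is consistent with the large parameter drop reported in the abstract.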
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

