Open Access
ARTICLE
ELDE-Net: Efficient Light-Weight Depth Estimation Network for Deep Reinforcement Learning-Based Mobile Robot Path Planning
1 Department of Mechatronics, School of Mechanical Engineering, Hanoi University of Science and Technology, Hanoi, 10000, Vietnam
2 College of Engineering, Shibaura Institute of Technology, Tokyo, 135-8548, Japan
* Corresponding Authors: Thai-Viet Dang. Email: ; Phan Xuan Tan. Email:
(This article belongs to the Special Issue: Computer Vision and Image Processing: Feature Selection, Image Enhancement and Recognition)
Computers, Materials & Continua 2025, 85(2), 2651-2680. https://doi.org/10.32604/cmc.2025.067500
Received 05 May 2025; Accepted 23 July 2025; Issue published 23 September 2025
Abstract
Precise and robust three-dimensional object detection (3DOD) presents a promising opportunity in the field of mobile robot (MR) navigation. Monocular 3DOD techniques typically extend existing two-dimensional object detection (2DOD) frameworks to predict the three-dimensional bounding box (3DBB) of objects captured in 2D RGB images. However, these methods often require multiple images, making them less feasible for many real-time scenarios. To address these challenges, the emergence of agile convolutional neural networks (CNNs) capable of inferring depth from a single image opens a new avenue for investigation. This paper proposes ELDE-Net, a novel network designed to produce cost-effective 3D Bounding Box Estimation (3D-BBE) from a single image. The framework comprises PP-LCNet as the encoder and a fast convolutional decoder. Additionally, it integrates a Squeeze-and-Excitation (SE) module and employs the Math Kernel Library for Deep Neural Networks (MKLDNN) optimizer to enhance convolutional efficiency and reduce model size during training. Meanwhile, the proposed multi-scale sub-pixel decoder generates high-quality depth maps while maintaining a compact structure. The generated depth maps provide a clear view of the scene together with the distances of objects in the environment. These depth cues are combined with 2DOD for precise estimation of 3DBBs, facilitating scene understanding and optimal route planning for mobile robots. Based on the estimated object center of the 3DBB, a Deep Reinforcement Learning (DRL)-based obstacle avoidance strategy for MRs is developed. Experimental results demonstrate that our model achieves state-of-the-art performance across three datasets: NYU-V2, KITTI, and Cityscapes. Overall, this framework shows significant potential for adaptation in intelligent mechatronic systems, particularly in developing knowledge-driven systems for mobile robot navigation.
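To make the two decoder building blocks named in the abstract concrete, the following is a minimal sketch, assuming a PyTorch-style implementation: a Squeeze-and-Excitation (SE) channel-attention block and a sub-pixel (PixelShuffle) upsampling stage feeding a single-channel depth head. All module names, channel widths, and hyperparameters here are illustrative assumptions, not the authors' released code or the exact ELDE-Net configuration.

```python
# Illustrative sketch only: SE attention + sub-pixel upsampling for a depth decoder.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global-pool 'squeeze', gated MLP 'excitation'."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # re-weight feature channels

class SubPixelUp(nn.Module):
    """Decoder stage: conv expands channels, PixelShuffle trades them for resolution."""
    def __init__(self, in_ch: int, out_ch: int, scale: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch * scale * scale, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.shuffle(self.conv(x)))

if __name__ == "__main__":
    feat = torch.randn(1, 256, 15, 20)            # hypothetical encoder feature map
    x = SEBlock(256)(feat)                        # channel re-weighting
    x = SubPixelUp(256, 128)(x)                   # 30 x 40
    x = SubPixelUp(128, 64)(x)                    # 60 x 80
    depth = nn.Conv2d(64, 1, kernel_size=3, padding=1)(x)  # single-channel depth map
    print(depth.shape)                            # torch.Size([1, 1, 60, 80])
```

The design point illustrated is that sub-pixel convolution keeps the decoder compact: resolution is recovered by rearranging channels rather than by transposed convolutions, which is consistent with the lightweight, multi-scale decoder described in the abstract.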
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

