
Vision, LiDAR, and Sensor Fusion-Based SLAM for Autonomous Navigation

Submission Deadline: 30 October 2026

Guest Editors

Prof. Batyrkhan Omarov

Email: batyahan@gmail.com

Affiliation: Department of Mathematical and Computer Modeling, International Information Technology University, Almaty, Kazakhstan

Homepage:

Research Interests: simultaneous localization and mapping (SLAM), autonomous navigation, vision-based and LiDAR-based perception, multi-sensor fusion, deep learning for robotic localization and mapping, semantic mapping, and intelligent navigation systems for mobile robots and autonomous vehicles



Dr. Daniyar Sultan

Email: daniyar.sultan@narxoz.kz

Affiliation: School of Digital Technology, Narxoz University, Almaty, Kazakhstan

Homepage:

Research Interests: autonomous mobile robot navigation, sensor fusion–based SLAM, visual and LiDAR perception, probabilistic state estimation, real-time mapping and localization, deep learning for robotic perception, and robust navigation in dynamic and unstructured environments



Prof. Bakhytzhan Kulambayev

Email: b.kulambayev@turan-edu.kz

Affiliation: Department of Radiotechnology, Turan University, Almaty, Kazakhstan

Homepage:

Research Interests: intelligent autonomous navigation systems, multi-modal SLAM, vision–LiDAR–IMU integration, semantic perception for robotics, and deep learning–assisted localization and mapping



Summary

Vision, LiDAR, and multi-sensor fusion-based SLAM have become fundamental technologies for reliable autonomous navigation, enabling robots and vehicles to localize themselves and map complex environments in real time under dynamic and uncertain conditions. With rapid progress in perception, deep learning, and robust sensor fusion, this research area is increasingly important for achieving accurate, scalable, and safety-critical navigation across indoor, outdoor, industrial, and urban scenarios.

This Special Issue aims to present recent advances in SLAM for autonomous navigation, focusing on vision-based, LiDAR-based, and sensor fusion approaches that improve localization accuracy, mapping robustness, and real-time performance. The scope covers novel SLAM architectures, multi-modal perception, deep learning-enhanced front-end and back-end optimization, loop closure, place recognition, semantic mapping, and resilience under challenging conditions such as low light, motion blur, dynamic obstacles, and adverse weather. Contributions addressing benchmarking, deployment on embedded platforms, and validation in real-world robotic and autonomous driving applications are also welcome.

Topics of interest include (but are not limited to):
- Vision-Based SLAM for autonomous robots and vehicles (monocular, stereo, RGB-D)
- LiDAR SLAM and 3D mapping in large-scale outdoor environments
- Multi-Sensor Fusion SLAM (camera–LiDAR–IMU–GNSS integration)
- Deep Learning-Enhanced SLAM (feature extraction, depth estimation, loop closure, place recognition)
- Semantic SLAM and Scene Understanding for navigation and decision-making
- Robust SLAM in Dynamic and Adverse Conditions (moving objects, low-light, fog/rain, motion blur)
- Real-Time SLAM on Edge/Embedded Platforms and efficient deployment for autonomy
- SLAM-Driven Autonomous Navigation and Path Planning in unknown environments


Keywords

SLAM, autonomous navigation, sensor fusion, vision-based SLAM, LiDAR SLAM, visual–inertial SLAM, semantic mapping, mobile robots, autonomous vehicles, real-time localization
