Occlusion remains a key challenge in computer vision, particularly for autonomous driving and robotics, where it degrades both 2D and 3D detection accuracy. This paper reviews occlusion-handling methods across sensor modalities (stereo, Time-of-Flight (ToF), and LiDAR) and introduces FuDensityNet, an RGB-LiDAR fusion framework for improved detection of occluded objects. Future work will explore monocular depth estimation to reduce dependence on expensive 3D sensors.
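To illustrate the general idea behind RGB-LiDAR fusion for occlusion handling, the sketch below accumulates projected LiDAR points into a 2D density map and concatenates it with image features as an extra channel. This is a minimal illustrative example only, not FuDensityNet's actual architecture; the function names, the grid cell size, and the simple channel-concatenation fusion are all assumptions made for the sketch.

```python
import numpy as np

def lidar_density_map(points_2d, h, w, cell=8):
    """Accumulate LiDAR points (already projected to image coords) into a coarse density grid."""
    grid = np.zeros((h // cell, w // cell), dtype=np.float32)
    for u, v in points_2d:
        r, c = int(v) // cell, int(u) // cell
        if 0 <= r < grid.shape[0] and 0 <= c < grid.shape[1]:
            grid[r, c] += 1.0
    # Normalize so the densest cell is 1.0, keeping scale comparable to RGB features.
    return grid / max(grid.max(), 1.0)

def fuse(rgb_features, density, cell=8):
    """Channel-wise concatenation of image features with the upsampled density map."""
    d = np.repeat(np.repeat(density, cell, axis=0), cell, axis=1)  # nearest-neighbor upsample
    return np.concatenate([rgb_features, d[None]], axis=0)

# Toy example: a 3-channel 32x32 feature map plus four projected LiDAR points.
rgb = np.random.rand(3, 32, 32).astype(np.float32)
pts = [(5, 5), (6, 5), (20, 28), (31, 31)]
fused = fuse(rgb, lidar_density_map(pts, 32, 32))
print(fused.shape)  # (4, 32, 32)
```

In practice, fusion networks of this family learn the combination with convolutions rather than raw concatenation, and operate on learned voxel or point features rather than a hand-built histogram; the sketch only conveys why a geometric density cue can complement appearance features when objects are partially hidden.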
The cover image was created with AI-generated content via Canva, and it contains no copyrighted elements or misleading representations.