Open Access
ARTICLE
Towards a Real-Time Indoor Object Detection for Visually Impaired Users Using Raspberry Pi 4 and YOLOv11: A Feasibility Study
1 Department of Computer Science, College of Computer Science and Engineering, Taibah University, Madinah, 42353, Saudi Arabia
2 King Salman Center for Disability Research, Riyadh, 11614, Saudi Arabia
3 Department of Computation and Technology, Federal University of Rio Grande do Norte, Caicó, 59078-900, Brazil
* Corresponding Author: Talal H. Noor. Email:
Computer Modeling in Engineering & Sciences 2025, 144(3), 3085-3111. https://doi.org/10.32604/cmes.2025.068393
Received 28 May 2025; Accepted 01 September 2025; Issue published 30 September 2025
Abstract
People with visual impairments face substantial navigation difficulties in residential and unfamiliar indoor spaces. Neither white canes nor audio-based navigation systems deliver adequate real-time spatial awareness to users. This work presents a feasibility study of a wearable IoT-based indoor object detection assistant that employs a real-time detection approach to help visually impaired users recognize indoor objects. The system architecture comprises four main layers: the Wearable Internet of Things (IoT), Network, Cloud, and Indoor Object Detection Layers. The wearable hardware prototype is assembled around a Raspberry Pi 4, while the indoor object detection approach exploits YOLOv11, a state-of-the-art deep learning model optimized for both speed and accuracy in object recognition, which powers the research prototype. The study uses a prototype implementation, comparative experiments, and two datasets compiled from Furniture Detection (Roboflow Universe) and Kaggle, comprising 3000 images evenly distributed across three object categories: bed, sofa, and table. In the evaluation, the Raspberry Pi serves only as a feasibility demonstration of real-time inference performance (e.g., latency and memory consumption) on embedded hardware. We also compared YOLOv11 against two current methodologies: a Convolutional Neural Network (CNN) model (MobileNet-Single Shot MultiBox Detector (SSD)) and the RT-DETR Vision Transformer. The experimental results show that YOLOv11 stands out, reaching averages of 99.07%, 98.51%, 97.96%, and 98.22% for accuracy, precision, recall, and F1-score, respectively.
This feasibility study highlights the effectiveness of the Raspberry Pi 4 and YOLOv11 for real-time indoor object detection, paving the way for future structured user studies with visually impaired people to evaluate the system's real-world usability and impact.
Keywords
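The abstract reports averaged accuracy, precision, recall, and F1-score for YOLOv11. As an illustrative sketch only (not the authors' code), the snippet below shows how these four metrics are conventionally computed from per-class confusion counts; the counts used are hypothetical, chosen merely to demonstrate the formulas.

```python
# Illustrative sketch: computing accuracy, precision, recall, and F1-score
# from confusion counts (true/false positives and negatives) for one class.
# The numeric counts below are hypothetical examples, not paper results.

def detection_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return standard classification metrics from confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical counts for a single class (e.g., "sofa") on a test split.
m = detection_metrics(tp=485, fp=7, fn=10, tn=498)
print({k: round(v, 4) for k, v in m.items()})
```

Per-class values computed this way are then averaged across the bed, sofa, and table categories to obtain figures comparable to those reported in the abstract.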
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

