TY  - EJOU
AU  - Al-Wesabi, Fahd N.
AU  - Almansour, Hamad
AU  - Iskandar, Huda G.
AU  - Yaseen, Ishfaq
TI  - Enhanced Fire Detection System for Blind and Visually Challenged People Using Artificial Intelligence with Deep Convolutional Neural Networks
T2  - Computers, Materials & Continua
PY  - 2025
VL  - 85
IS  - 3
SN  - 1546-2226
AB  - Early notification and fire detection methods provide safety information and fire prevention to blind and visually impaired (BVI) individuals within a limited timeframe during emergencies, particularly in enclosed areas. Fire detection is crucial because it directly impacts human safety and the environment. Although modern technology requires precise techniques for early detection to prevent damage and loss, little research has focused on artificial intelligence (AI)-based early fire alert systems for BVI individuals in indoor settings. To prevent such incidents, fires must be identified accurately and promptly, and BVI individuals alerted using a combination of smart glasses, deep learning (DL), and computer vision (CV). In this manuscript, an Enhanced Fire Detection System for Blind and Visually Challenged People using Artificial Intelligence with Deep Convolutional Neural Networks (EFDBVC-AIDCNN) model is presented. The EFDBVC-AIDCNN model provides an advanced fire detection system that utilizes AI to detect and classify fire hazards for BVI people effectively. Initially, image pre-processing is performed using the Gabor filter (GF) model to enhance texture details and patterns specific to flames and smoke. For feature extraction, the Swin transformer (ST) model captures fine details across multiple scales to represent fire patterns accurately. Furthermore, the Elman neural network (ENN) technique is implemented to detect fire. The improved whale optimization algorithm (IWOA) is then used to tune the ENN parameters efficiently, improving accuracy and robustness across varying lighting and environmental conditions. An extensive experimental study of the EFDBVC-AIDCNN technique was conducted on the fire detection dataset. A brief comparative analysis showed that the EFDBVC-AIDCNN approach achieved a superior accuracy of 96.60% over existing models.
KW  - Fire detection
KW  - Swin transformer
KW  - visually challenged people
KW  - artificial intelligence
KW  - computer vision
KW  - image pre-processing
DO  - 10.32604/cmc.2025.067571
ER  - 