Open Access



A Robust Model for Translating Arabic Sign Language into Spoken Arabic Using Deep Learning

Khalid M. O. Nahar1, Ammar Almomani2,3,*, Nahlah Shatnawi1, Mohammad Alauthman4

1 Department of Computer Sciences, Faculty of Information Technology and Computer Sciences, Yarmouk University, Irbid 21163, Jordan
2 School of Computing, Skyline University College, Sharjah, P. O. Box 1797, United Arab Emirates
3 IT-Department-Al-Huson University College, Al-Balqa Applied University, P. O. Box 50, Irbid, Jordan
4 Department of Information Security, Faculty of Information Technology, University of Petra, Amman, Jordan

* Corresponding Author: Ammar Almomani

Intelligent Automation & Soft Computing 2023, 37(2), 2037-2057.


This study presents a novel approach to automatically translating Arabic Sign Language (ATSL) into spoken Arabic. The proposed solution uses a deep learning-based classification approach with transfer learning to retrain 12 image recognition models. The image-based translation method maps sign-language gestures to their corresponding letters or words using distance measures combined with machine-learning classification. The results show that the proposed model is more accurate and faster than traditional image-based models in classifying Arabic-language signs, achieving a translation accuracy of 93.7%. This research makes a significant contribution to the field of ATSL and offers a practical solution for improving communication for individuals with special needs, such as the deaf and mute community. The work demonstrates the potential of deep learning techniques for translating sign language into natural language and highlights the importance of ATSL in facilitating communication for individuals with disabilities.
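The distance-based mapping the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: the reference vectors, letter labels, and the choice of Euclidean distance are all illustrative assumptions standing in for the pooled features an image recognition model would produce.

```python
import math

# Hypothetical reference feature vectors for a few Arabic letters
# (stand-ins for embeddings from a retrained image recognition model).
REFERENCES = {
    "alef": [0.9, 0.1, 0.0],
    "ba":   [0.1, 0.8, 0.2],
    "ta":   [0.0, 0.2, 0.9],
}

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(features):
    """Map a gesture's feature vector to the nearest reference letter."""
    return min(REFERENCES, key=lambda label: euclidean(features, REFERENCES[label]))

print(classify([0.85, 0.15, 0.05]))  # nearest to the "alef" reference
```

In the actual system, the feature vectors would come from one of the 12 retrained deep models rather than being hand-specified, and the label set would cover the full Arabic sign vocabulary.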


Cite This Article

K. M. O. Nahar, A. Almomani, N. Shatnawi and M. Alauthman, "A robust model for translating Arabic sign language into spoken Arabic using deep learning," Intelligent Automation & Soft Computing, vol. 37, no. 2, pp. 2037–2057, 2023.

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.