
Search Results (3)
  • Open Access


    Visual Lip-Reading for Quranic Arabic Alphabets and Words Using Deep Learning

    Nada Faisal Aljohani*, Emad Sami Jaha

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 3037-3058, 2023, DOI:10.32604/csse.2023.037113

    Abstract The continuing advances in deep learning have paved the way for several challenging ideas. One such idea is visual lip-reading, which has recently drawn considerable research interest. Lip-reading, often referred to as visual speech recognition, is the ability to understand and predict spoken speech based solely on lip movements, without using sound. Given the scarcity of research on visual speech recognition for Arabic in general, and its absence from Quranic research, this work aims to fill that gap. This paper introduces a new publicly available Arabic lip-reading dataset containing 10,490 videos captured from multiple viewpoints…

  • Open Access


    Deep Learning-Based Approach for Arabic Visual Speech Recognition

    Nadia H. Alsulami*, Amani T. Jamal, Lamiaa A. Elrefaei

    CMC-Computers, Materials & Continua, Vol.71, No.1, pp. 85-108, 2022, DOI:10.32604/cmc.2022.019450

    Abstract Lip-reading technologies are progressing rapidly following the breakthrough of deep learning, and they play a vital role in many applications, such as human-machine communication and security. In this paper, we develop an effective lip-reading model for Arabic visual speech recognition using deep learning algorithms. The collected Arabic visual dataset contains 2,400 records of Arabic digits and 960 records of Arabic phrases from 24 native speakers. The primary purpose is to provide a high-performance model by enhancing the preprocessing phase. Firstly, we extract keyframes from our dataset. Secondly, we produce…
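The keyframe-extraction step this abstract mentions can be sketched as uniform temporal sampling of a clip's frames. This is a minimal illustrative sketch under that assumption; the function name and sampling strategy are not from the paper, whose actual selection criterion may differ.

```python
def keyframe_indices(n_frames: int, n_keyframes: int) -> list[int]:
    """Pick n_keyframes evenly spaced frame indices from a clip.

    Illustrative stand-in for a keyframe-extraction preprocessing
    step; assumes uniform sampling, not the authors' exact method.
    """
    if n_frames <= 0:
        return []
    if n_keyframes <= 1:
        return [0]
    if n_keyframes >= n_frames:
        # Clip is shorter than the requested count: keep every frame.
        return list(range(n_frames))
    # Spread the keyframes from the first frame to the last.
    step = (n_frames - 1) / (n_keyframes - 1)
    return [round(i * step) for i in range(n_keyframes)]
```

The selected indices would then be used to read only those frames from each video before feature extraction.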

  • Open Access


    HLR-Net: A Hybrid Lip-Reading Model Based on Deep Convolutional Neural Networks

    Amany M. Sarhan, Nada M. Elshennawy, Dina M. Ibrahim*

    CMC-Computers, Materials & Continua, Vol.68, No.2, pp. 1531-1549, 2021, DOI:10.32604/cmc.2021.016509


    Abstract Lip reading is typically regarded as visually interpreting the speaker’s lip movements while speaking, i.e., decoding text from the speaker’s mouth movements. This paper proposes a lip-reading model that helps deaf people and persons with hearing problems understand a speaker by capturing a video of the speaker and feeding it into the proposed model to obtain the corresponding subtitles. Deep learning technologies make it possible to extract a large number of different features, which can then be converted to letter probabilities to obtain accurate results. Recently proposed methods for…
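Converting extracted features to "probabilities of letters", as this abstract describes, is conventionally done with a softmax over the network's per-letter scores. A minimal sketch of that final step; the scores and class handling here are hypothetical, not taken from HLR-Net.

```python
import math

def softmax(scores: list[float]) -> list[float]:
    """Turn raw per-letter scores (logits) into a probability
    distribution over the alphabet.

    Generic sketch of the scores-to-probabilities step; not the
    paper's specific output layer.
    """
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

The letter with the highest probability (or a sequence decoder over these distributions) then yields the predicted text.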
