Search Results (24)
  • Open Access

    ARTICLE

    Fusing Geometric and Temporal Deep Features for High-Precision Arabic Sign Language Recognition

    Yazeed Alkhrijah, Shehzad Khalid, Syed Muhammad Usman, Amina Jameel, Danish Hamid

    CMES-Computer Modeling in Engineering & Sciences, Vol.144, No.1, pp. 1113-1141, 2025, DOI:10.32604/cmes.2025.068726 - 31 July 2025

    Abstract Arabic Sign Language (ArSL) recognition plays a vital role in enhancing communication for the Deaf and Hard of Hearing (DHH) community. Researchers have proposed multiple methods for automated recognition of ArSL; however, these methods face multiple challenges, including high gesture variability, occlusions, limited signer diversity, and the scarcity of large annotated datasets. Existing methods, often relying solely on either skeletal data or video-based features, struggle with generalization and robustness, especially in dynamic and real-world conditions. This paper proposes a novel multimodal ensemble classification framework that integrates geometric features derived from 3D skeletal joint… More >

  • Open Access

    ARTICLE

    ALCTS—An Assistive Learning and Communicative Tool for Speech and Hearing Impaired Students

    Shabana Ziyad Puthu Vedu, Wafaa A. Ghonaim, Naglaa M. Mostafa, Pradeep Kumar Singh

    CMC-Computers, Materials & Continua, Vol.83, No.2, pp. 2599-2617, 2025, DOI:10.32604/cmc.2025.062695 - 16 April 2025

    Abstract Hearing and speech impairment can be congenital or acquired. Hearing and speech-impaired students often hesitate to pursue higher education in reputable institutions due to the challenges they face. However, the development of automated assistive learning tools within the educational field has empowered disabled students to pursue higher education in any field of study. Assistive learning devices enable students to fully access institutional resources and facilities. The proposed assistive learning and communication tool allows hearing and speech-impaired students to interact productively with their teachers and classmates. This tool converts audio signals into sign language videos for the… More >

  • Open Access

    ARTICLE

    VTAN: A Novel Video Transformer Attention-Based Network for Dynamic Sign Language Recognition

    Ziyang Deng, Weidong Min, Qing Han, Mengxue Liu, Longfei Li

    CMC-Computers, Materials & Continua, Vol.82, No.2, pp. 2793-2812, 2025, DOI:10.32604/cmc.2024.057456 - 17 February 2025

    Abstract Dynamic sign language recognition holds significant importance, particularly with the application of deep learning to address its complexity. However, existing methods face several challenges. Firstly, recognizing dynamic sign language requires identifying keyframes that best represent the signs, and missing these keyframes reduces accuracy. Secondly, some methods do not focus enough on hand regions, which are small within the overall frame, leading to information loss. To address these challenges, we propose a novel Video Transformer Attention-based Network (VTAN) for dynamic sign language recognition. Our approach prioritizes informative frames and hand regions effectively. To tackle the first… More >

  • Open Access

    ARTICLE

    Enhancing Communication Accessibility: UrSL-CNN Approach to Urdu Sign Language Translation for Hearing-Impaired Individuals

    Khushal Das, Fazeel Abid, Jawad Rasheed, Kamlish, Tunc Asuroglu, Shtwai Alsubai, Safeeullah Soomro

    CMES-Computer Modeling in Engineering & Sciences, Vol.141, No.1, pp. 689-711, 2024, DOI:10.32604/cmes.2024.051335 - 20 August 2024

    Abstract Deaf people or people facing hearing issues can communicate using sign language (SL), a visual language. Many works based on resource-rich languages have been proposed; however, work on low-resource languages is still lacking. Unlike other SLs, the visuals of Urdu Sign Language are different. This study presents a novel approach to translating Urdu sign language (UrSL) using the UrSL-CNN model, a convolutional neural network (CNN) architecture specifically designed for this purpose. Unlike existing works that primarily focus on languages with rich resources, this study addresses the challenge of translating a sign language… More >

  • Open Access

    REVIEW

    A Survey on Chinese Sign Language Recognition: From Traditional Methods to Artificial Intelligence

    Xianwei Jiang, Yanqiong Zhang, Juan Lei, Yudong Zhang

    CMES-Computer Modeling in Engineering & Sciences, Vol.140, No.1, pp. 1-40, 2024, DOI:10.32604/cmes.2024.047649 - 16 April 2024

    Abstract Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the relevant literature on Chinese Sign Language Recognition (CSLR) in the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed technologies among traditional identification methods. Benefiting from the rapid development of computer vision and artificial intelligence technology, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Network (CapsNet) and various deep neural networks have sprung up. Deep Neural… More >

  • Open Access

    ARTICLE

    Japanese Sign Language Recognition by Combining Joint Skeleton-Based Handcrafted and Pixel-Based Deep Learning Features with Machine Learning Classification

    Jungpil Shin, Md. Al Mehedi Hasan, Abu Saleh Musa Miah, Kota Suzuki, Koki Hirooka

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.3, pp. 2605-2625, 2024, DOI:10.32604/cmes.2023.046334 - 11 March 2024

    Abstract Sign language recognition is vital for enhancing communication accessibility among the Deaf and hard-of-hearing communities. In Japan, approximately 360,000 individuals with hearing and speech disabilities rely on Japanese Sign Language (JSL) for communication. However, existing JSL recognition systems have faced significant performance limitations due to inherent complexities. In response to these challenges, we present a novel JSL recognition system that employs a strategic fusion approach, combining joint skeleton-based handcrafted features and pixel-based deep learning features. Our system incorporates two distinct streams: the first stream extracts crucial handcrafted features, emphasizing the capture of hand and body… More >

  • Open Access

    REVIEW

    Recent Advances on Deep Learning for Sign Language Recognition

    Yanqiong Zhang, Xianwei Jiang

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.3, pp. 2399-2450, 2024, DOI:10.32604/cmes.2023.045731 - 11 March 2024

    Abstract Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign… More >

  • Open Access

    ARTICLE

    Deep Learning Approach for Hand Gesture Recognition: Applications in Deaf Communication and Healthcare

    Khursheed Aurangzeb, Khalid Javeed, Musaed Alhussein, Imad Rida, Syed Irtaza Haider, Anubha Parashar

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 127-144, 2024, DOI:10.32604/cmc.2023.042886 - 30 January 2024

    Abstract Hand gestures have been used as a significant mode of communication since the advent of human civilization. By facilitating human-computer interaction (HCI), hand gesture recognition (HGRoc) technology is crucial for seamless and error-free HCI. HGRoc technology is pivotal in healthcare and communication for the deaf community. Despite significant advancements in computer vision-based gesture recognition for language understanding, two considerable challenges persist in this field: (a) only a limited set of common gestures is considered, and (b) processing multiple channels of information across a network requires substantial computational time during discriminative feature extraction. Therefore, a novel hand vision-based convolutional neural network… More >

  • Open Access

    ARTICLE

    Alphabet-Level Indian Sign Language Translation to Text Using Hybrid-AO Thresholding with CNN

    Seema Sabharwal, Priti Singla

    Intelligent Automation & Soft Computing, Vol.37, No.3, pp. 2567-2582, 2023, DOI:10.32604/iasc.2023.035497 - 11 September 2023

    Abstract Sign language is used as a communication medium in the fields of trade and defence, and in deaf-mute communities worldwide. Over the last few decades, research in the domain of sign language translation has grown and become more challenging. This necessitates the development of a Sign Language Translation System (SLTS) to provide effective communication in different research domains. In this paper, a novel Hybrid Adaptive Gaussian Thresholding with Otsu Algorithm (Hybrid-AO) for image segmentation is proposed for the translation of alphabet-level Indian Sign Language (ISLTS) with a 5-layer Convolution Neural Network (CNN). The focus of this… More >

  • Open Access

    ARTICLE

    A Robust Model for Translating Arabic Sign Language into Spoken Arabic Using Deep Learning

    Khalid M. O. Nahar, Ammar Almomani, Nahlah Shatnawi, Mohammad Alauthman

    Intelligent Automation & Soft Computing, Vol.37, No.2, pp. 2037-2057, 2023, DOI:10.32604/iasc.2023.038235 - 21 June 2023

    Abstract This study presents a novel and innovative approach to automatically translating Arabic Sign Language (ATSL) into spoken Arabic. The proposed solution utilizes a deep learning-based classification approach and the transfer learning technique to retrain 12 image recognition models. The image-based translation method maps sign language gestures to corresponding letters or words using distance measures and classification as a machine learning technique. The results show that the proposed model is more accurate and faster than traditional image-based models in classifying Arabic-language signs, with a translation accuracy of 93.7%. This research makes a significant contribution to the More >

Displaying results 1-10 of 24 (page 1).