Search Results (10)
  • Open Access

    REVIEW

    A Survey on Chinese Sign Language Recognition: From Traditional Methods to Artificial Intelligence

    Xianwei Jiang1, Yanqiong Zhang1,*, Juan Lei1, Yudong Zhang2,3,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.140, No.1, pp. 1-40, 2024, DOI:10.32604/cmes.2024.047649

    Abstract Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the relevant literature on Chinese Sign Language Recognition (CSLR) in the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed technologies among traditional identification methods. Benefiting from the rapid development of computer vision and artificial intelligence technology, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Network (CapsNet) and various deep neural networks have sprung up. Deep Neural Networks (DNNs) and their derived…
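
    Among the traditional techniques this survey highlights, Dynamic Time Warping is the easiest to make concrete. The sketch below computes a DTW distance between two gesture feature sequences with plain NumPy; the sequence lengths, feature dimension, and random data are illustrative placeholders, not material from the survey.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two sequences of frame features, shapes (T1, D) and (T2, D)."""
    t1, t2 = len(a), len(b)
    cost = np.full((t1 + 1, t2 + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, t1 + 1):
        for j in range(1, t2 + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return float(cost[t1, t2])

# Hypothetical example: two gesture clips of different lengths with 10-D frame features.
rng = np.random.default_rng(0)
query, template = rng.normal(size=(32, 10)), rng.normal(size=(40, 10))
print(dtw_distance(query, template))
```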

  • Open Access

    ARTICLE

    Japanese Sign Language Recognition by Combining Joint Skeleton-Based Handcrafted and Pixel-Based Deep Learning Features with Machine Learning Classification

    Jungpil Shin1,*, Md. Al Mehedi Hasan2, Abu Saleh Musa Miah1, Kota Suzuki1, Koki Hirooka1

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.3, pp. 2605-2625, 2024, DOI:10.32604/cmes.2023.046334

    Abstract Sign language recognition is vital for enhancing communication accessibility among the Deaf and hard-of-hearing communities. In Japan, approximately 360,000 individuals with hearing and speech disabilities rely on Japanese Sign Language (JSL) for communication. However, existing JSL recognition systems have faced significant performance limitations due to inherent complexities. In response to these challenges, we present a novel JSL recognition system that employs a strategic fusion approach, combining joint skeleton-based handcrafted features and pixel-based deep learning features. Our system incorporates two distinct streams: the first stream extracts crucial handcrafted features, emphasizing the capture of hand and body movements within JSL gestures. Simultaneously,…
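
    The two-stream idea described here, handcrafted skeleton features fused with deep pixel features ahead of a conventional classifier, can be illustrated as a toy feature-level fusion. In the sketch below both feature extractors are stand-ins (pairwise joint distances and a random projection), and the data, shapes, and SVM choice are assumptions rather than the paper's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def skeleton_features(joints: np.ndarray) -> np.ndarray:
    """Toy handcrafted stream: pairwise joint distances per sample."""
    diffs = joints[:, :, None, :] - joints[:, None, :, :]   # (N, J, J, 3)
    dists = np.linalg.norm(diffs, axis=-1)                   # (N, J, J)
    iu = np.triu_indices(joints.shape[1], k=1)
    return dists[:, iu[0], iu[1]]                            # (N, J*(J-1)/2)

def deep_pixel_features(frames: np.ndarray) -> np.ndarray:
    """Stand-in for a CNN embedding of the image stream."""
    flat = frames.reshape(len(frames), -1)
    return flat @ rng.normal(size=(flat.shape[1], 128))

# Hypothetical data: 200 samples, 21 joints in 3-D, 16x16 grayscale crops, 5 sign classes.
joints = rng.normal(size=(200, 21, 3))
frames = rng.normal(size=(200, 16, 16))
labels = rng.integers(0, 5, size=200)

fused = np.hstack([skeleton_features(joints), deep_pixel_features(frames)])  # feature-level fusion
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy on random data:", clf.score(X_te, y_te))  # ~chance, data is random
```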

  • Open Access

    REVIEW

    Recent Advances on Deep Learning for Sign Language Recognition

    Yanqiong Zhang, Xianwei Jiang*

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.3, pp. 2399-2450, 2024, DOI:10.32604/cmes.2023.045731

    Abstract Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign language recognition, focusing on the…

  • Open Access

    ARTICLE

    Deep Learning Approach for Hand Gesture Recognition: Applications in Deaf Communication and Healthcare

    Khursheed Aurangzeb1, Khalid Javeed2, Musaed Alhussein1, Imad Rida3, Syed Irtaza Haider1, Anubha Parashar4,*

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 127-144, 2024, DOI:10.32604/cmc.2023.042886

    Abstract Hand gestures have been a significant mode of communication since the advent of human civilization. By facilitating human-computer interaction (HCI), hand gesture recognition (HGRoc) technology is crucial for seamless and error-free HCI, and it is pivotal in healthcare and in communication for the deaf community. Despite significant advancements in computer vision-based gesture recognition for language understanding, two considerable challenges persist in this field: (a) only a limited set of common gestures is considered, and (b) processing multiple channels of information across a network takes substantial computational time during discriminative feature extraction. Therefore, a novel hand vision-based convolutional neural network (CNN) model named (HVCNNM)…
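
    The abstract names a vision-based CNN (HVCNNM) without architectural details, so the sketch below is only a generic small CNN classifier in PyTorch, meant to illustrate the kind of model involved; the layer sizes, 64x64 input resolution, and ten-class output are assumptions.

```python
import torch
import torch.nn as nn

class SmallGestureCNN(nn.Module):
    """Generic CNN for static hand-gesture images (not the paper's HVCNNM)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Forward pass on a dummy batch of 64x64 RGB hand crops.
model = SmallGestureCNN(num_classes=10)
logits = model(torch.randn(8, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 10])
```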

  • Open Access

    ARTICLE

    Arabic Sign Language Gesture Classification Using Deer Hunting Optimization with Machine Learning Model

    Badriyya B. Al-onazi1, Mohamed K. Nour2, Hussain Alshahran3, Mohamed Ahmed Elfaki3, Mrim M. Alnfiai4, Radwa Marzouk5, Mahmoud Othman6, Mahir M. Sharif7, Abdelwahed Motwakel8,*

    CMC-Computers, Materials & Continua, Vol.75, No.2, pp. 3413-3429, 2023, DOI:10.32604/cmc.2023.035303

    Abstract Sign language uses the motion of the arms and hands to communicate with people with hearing disabilities. Several models are available in the literature for sign language detection and classification, and the latest advancements in computer vision now enable sign and gesture recognition with deep neural networks. This paper introduces an Arabic Sign Language Gesture Classification using Deer Hunting Optimization with Machine Learning (ASLGC-DHOML) model. The presented ASLGC-DHOML technique mainly concentrates on recognising and classifying sign language gestures. It primarily pre-processes the input gesture images and generates feature vectors using the densely connected…
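
    The pipeline outlined here, pre-processing followed by densely connected feature extraction and a metaheuristic-tuned classifier, can be mimicked in rough form. The sketch below uses a torchvision DenseNet-121 (untrained, to stay offline; torchvision >= 0.13 API) as the feature extractor and a plain search over one hyperparameter as a stand-in for Deer Hunting Optimization, whose update rules are not given in this listing; all data and sizes are invented.

```python
import numpy as np
import torch
from torchvision.models import densenet121
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Densely connected backbone as a fixed feature extractor (weights=None keeps this offline).
backbone = densenet121(weights=None)
backbone.classifier = torch.nn.Identity()   # expose the 1024-D pooled features
backbone.eval()

with torch.no_grad():
    images = torch.randn(24, 3, 224, 224)   # hypothetical pre-processed gesture images
    feats = backbone(images).numpy()        # (24, 1024) feature vectors
labels = np.repeat(np.arange(4), 6)         # 4 made-up gesture classes, 6 samples each

# Plain search over one hyperparameter, standing in for the metaheuristic optimizer.
best_score, best_width = -np.inf, None
for width in (32, 64, 128):
    clf = MLPClassifier(hidden_layer_sizes=(width,), max_iter=200)
    score = cross_val_score(clf, feats, labels, cv=3).mean()
    if score > best_score:
        best_score, best_width = score, width
print("selected hidden width on random data:", best_width)
```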

  • Open Access

    ARTICLE

    Deep Learning-Based Sign Language Recognition for Hearing and Speaking Impaired People

    Mrim M. Alnfiai*

    Intelligent Automation & Soft Computing, Vol.36, No.2, pp. 1653-1669, 2023, DOI:10.32604/iasc.2023.033577

    Abstract Sign language is mainly used to communicate with people who have hearing disabilities, and it is also used by people with developmental impairments who have limited or no interaction skills. Interaction via sign language is therefore a fruitful means of communication for hearing- and speech-impaired persons. A hand gesture recognition system helps deaf and mute people by combining a human-computer interface (HCI) with convolutional neural networks (CNN) to identify the static signs of Indian Sign Language (ISL). This study introduces a shark smell optimization with deep learning based automated sign language recognition (SSODL-ASLR) model…

  • Open Access

    ARTICLE

    A Novel Action Transformer Network for Hybrid Multimodal Sign Language Recognition

    Sameena Javaid*, Safdar Rizvi

    CMC-Computers, Materials & Continua, Vol.74, No.1, pp. 523-537, 2023, DOI:10.32604/cmc.2023.031924

    Abstract Sign language fills the communication gap for people with hearing and speaking impairments. It includes two visual modalities: manual gestures, consisting of hand movements, and non-manual gestures, incorporating body movements such as the head, facial expressions, eye gaze, and shoulder shrugging. Previously, the two types of gestures have been detected separately; recognizing them in isolation may give better accuracy, but much communicative information is lost. A proper sign language mechanism is needed to detect manual and non-manual gestures together so that the appropriately detailed message is conveyed to others. Our novel proposed system contributes the Sign Language Action Transformer Network (SLATN), localizing hand, body, and facial gestures in video sequences. Here…
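
    The action-transformer idea, attention over hand, body, and facial cues across frames, can be sketched with PyTorch's built-in transformer encoder acting on per-frame keypoint vectors. This is not the proposed SLATN; the keypoint dimensionality, sequence length, depth, and class count below are all invented.

```python
import torch
import torch.nn as nn

class KeypointTransformer(nn.Module):
    """Toy transformer over per-frame keypoint vectors (hands + body + face concatenated)."""
    def __init__(self, keypoint_dim: int = 150, d_model: int = 128, num_classes: int = 20):
        super().__init__()
        self.embed = nn.Linear(keypoint_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, frames, keypoint_dim)
        h = self.encoder(self.embed(x))                    # (batch, frames, d_model)
        return self.head(h.mean(dim=1))                    # temporal average pooling -> logits

model = KeypointTransformer()
logits = model(torch.randn(4, 30, 150))   # 4 clips, 30 frames, 150 keypoint coordinates
print(logits.shape)                        # torch.Size([4, 20])
```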

  • Open Access

    ARTICLE

    Continuous Sign Language Recognition Based on Spatial-Temporal Graph Attention Network

    Qi Guo, Shujun Zhang*, Hui Li

    CMES-Computer Modeling in Engineering & Sciences, Vol.134, No.3, pp. 1653-1670, 2023, DOI:10.32604/cmes.2022.021784

    Abstract Continuous sign language recognition (CSLR) is challenging due to the complexity of video background, hand gesture variability, and temporal modeling difficulties. This work proposes a CSLR method based on a spatial-temporal graph attention network to focus on essential features of video series. The method considers local details of sign language movements by taking the information on joints and bones as inputs and constructing a spatial-temporal graph to reflect inter-frame relevance and physical connections between nodes. The graph-based multi-head attention mechanism is utilized with adjacency matrix calculation for better local-feature exploration, and short-term motion correlation modeling is completed via a temporal…
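
    The core mechanism described, attention over a skeleton graph whose adjacency matrix encodes the physical connections between joints, is shown below as a single adjacency-masked attention layer in PyTorch. The five-joint chain, feature sizes, and random inputs are placeholders, not the paper's spatial-temporal graph.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head attention over skeleton joints, masked by an adjacency matrix."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.q = nn.Linear(in_dim, out_dim)
        self.k = nn.Linear(in_dim, out_dim)
        self.v = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (batch, joints, in_dim); adj: (joints, joints), 1 where joints are connected.
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(1, 2) / q.shape[-1] ** 0.5
        scores = scores.masked_fill(adj == 0, float("-inf"))   # attend only along skeleton edges
        return F.softmax(scores, dim=-1) @ v

# Toy 5-joint chain (e.g., shoulder-elbow-wrist-palm-fingertip) with self-loops.
adj = torch.eye(5)
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

layer = GraphAttentionLayer(in_dim=3, out_dim=16)   # 3-D joint coordinates in, 16-D out
out = layer(torch.randn(2, 5, 3), adj)
print(out.shape)                                     # torch.Size([2, 5, 16])
```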

  • Open Access

    ARTICLE

    Sign Language Recognition and Classification Model to Enhance Quality of Disabled People

    Fadwa Alrowais1, Saud S. Alotaibi2, Sami Dhahbi3,4, Radwa Marzouk5, Abdullah Mohamed6, Anwer Mustafa Hilal7,*

    CMC-Computers, Materials & Continua, Vol.73, No.2, pp. 3419-3432, 2022, DOI:10.32604/cmc.2022.029438

    Abstract Sign language recognition can be considered an effective solution for disabled people to communicate with others, helping them convey the intended information through sign languages without difficulty. Recent advancements in computer vision and image processing techniques can be leveraged to detect and classify the signs used by disabled people effectively. Metaheuristic optimization algorithms can be designed to fine-tune the hyperparameters of Deep Learning (DL) models, since these hyperparameters considerably impact the classification results. With this motivation, the current study designs the Optimal Deep Transfer Learning…
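
    The premise here is transfer learning with a metaheuristic selecting hyperparameters. The sketch below shows only the transfer-learning half, freezing a backbone and training a new classification head in PyTorch; the ResNet-18 choice, 26-class head, and learning rate are assumptions, and the metaheuristic tuning loop is not reproduced.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Backbone (weights=None keeps this offline; in practice pretrained weights would be loaded).
backbone = resnet18(weights=None)
for p in backbone.parameters():
    p.requires_grad = False                               # freeze the transferred layers
backbone.fc = nn.Linear(backbone.fc.in_features, 26)      # new head, e.g. 26 sign classes

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)  # lr would be metaheuristic-tuned
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images, targets = torch.randn(8, 3, 224, 224), torch.randint(0, 26, (8,))
loss = criterion(backbone(images), targets)
loss.backward()
optimizer.step()
print(float(loss))
```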

  • Open Access

    ARTICLE

    Intelligent Sign Language Recognition System for E-Learning Context

    Muhammad Jamil Hussain1, Ahmad Shaoor1, Suliman A. Alsuhibany2, Yazeed Yasin Ghadi3, Tamara al Shloul4, Ahmad Jalal1, Jeongmin Park5,*

    CMC-Computers, Materials & Continua, Vol.72, No.3, pp. 5327-5343, 2022, DOI:10.32604/cmc.2022.025953

    Abstract In this research work, an efficient sign language recognition tool for e-learning is proposed with a new type of feature set based on angles and lines. This feature set can increase the overall performance of machine learning algorithms efficiently. Hand gesture recognition based on these features has been implemented for real-time use. The feature set uses hand landmarks, which are generated with MediaPipe and OpenCV on each frame of the incoming video. The overall algorithm has been tested on the two well-known ASL-alphabet (American Sign Language) and ISL-HS…
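
    The feature set here is built from MediaPipe hand landmarks and the angles and lines between them. The fragment below shows one plausible way to obtain the landmarks with MediaPipe and OpenCV and derive a single joint-angle feature; the image path, the chosen landmark triplet, and the angle formula are illustrative assumptions, not the paper's exact feature set.

```python
import cv2
import numpy as np
import mediapipe as mp

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at point b (degrees) formed by points a-b-c."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

image = cv2.imread("sign_frame.jpg")                    # hypothetical input frame
assert image is not None, "replace sign_frame.jpg with a real video frame"
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)            # MediaPipe expects RGB

with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
    result = hands.process(rgb)

if result.multi_hand_landmarks:
    lm = result.multi_hand_landmarks[0].landmark
    pts = np.array([[p.x, p.y, p.z] for p in lm])       # 21 normalized 3-D landmarks
    # Example feature: bend of the index finger (MCP=5, PIP=6, TIP=8 in MediaPipe's indexing).
    print("index finger angle:", joint_angle(pts[5], pts[6], pts[8]))
```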
