Search Results (17)
  • Open Access

    ARTICLE

    Deep Learning Approach for Hand Gesture Recognition: Applications in Deaf Communication and Healthcare

    Khursheed Aurangzeb1, Khalid Javeed2, Musaed Alhussein1, Imad Rida3, Syed Irtaza Haider1, Anubha Parashar4,*

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 127-144, 2024, DOI:10.32604/cmc.2023.042886

    Abstract Hand gestures have been used as a significant mode of communication since the advent of human civilization. By facilitating human-computer interaction (HCI), hand gesture recognition (HGRoc) technology is crucial for seamless and error-free HCI. HGRoc technology is pivotal in healthcare and communication for the deaf community. Despite significant advancements in computer vision-based gesture recognition for language understanding, two considerable challenges persist in this field: (a) limited and common gestures are considered, (b) processing multiple channels of information across a network takes huge computational time during discriminative feature extraction. Therefore, a novel hand vision-based convolutional neural network (CNN) model named (HVCNNM)…

  • Open Access

    ARTICLE

    Alphabet-Level Indian Sign Language Translation to Text Using Hybrid-AO Thresholding with CNN

    Seema Sabharwal1,2,*, Priti Singla1

    Intelligent Automation & Soft Computing, Vol.37, No.3, pp. 2567-2582, 2023, DOI:10.32604/iasc.2023.035497

    Abstract Sign language is used as a communication medium in the field of trade, defence, and in deaf-mute communities worldwide. Over the last few decades, research in the domain of translation of sign language has grown and become more challenging. This necessitates the development of a Sign Language Translation System (SLTS) to provide effective communication in different research domains. In this paper, novel Hybrid Adaptive Gaussian Thresholding with Otsu Algorithm (Hybrid-AO) for image segmentation is proposed for the translation of alphabet-level Indian Sign Language (ISLTS) with a 5-layer Convolution Neural Network (CNN). The focus of this paper is to analyze various…
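
    The Otsu half of the Hybrid-AO segmentation step can be illustrated with a minimal sketch. This is not the paper's implementation: the adaptive-Gaussian half needs a full image library and is omitted, and the 8-bit pixel list below is made-up toy data, not the paper's ISL dataset.

```python
def otsu_threshold(pixels):
    """Return the 0-255 threshold maximizing between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg, w_bg, best_t, best_var = 0.0, 0, 0, -1.0
    for t in range(256):
        w_bg += hist[t]                 # background pixel count up to t
        if w_bg == 0:
            continue
        w_fg = total - w_bg             # foreground pixel count above t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        # Between-class variance; Otsu picks the t that maximizes it.
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A bimodal toy "image": dark background around 30, bright hand around 200.
pixels = [30] * 50 + [35] * 30 + [200] * 40 + [210] * 20
t = otsu_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]
print(t, sum(binary))   # threshold falls between the two modes; 60 foreground pixels
```

    In the paper's hybrid scheme this global threshold is combined with a locally adaptive Gaussian threshold, so segmentation holds up under uneven lighting where a single global cut fails.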

  • Open Access

    ARTICLE

    A Robust Model for Translating Arabic Sign Language into Spoken Arabic Using Deep Learning

    Khalid M. O. Nahar1, Ammar Almomani2,3,*, Nahlah Shatnawi1, Mohammad Alauthman4

    Intelligent Automation & Soft Computing, Vol.37, No.2, pp. 2037-2057, 2023, DOI:10.32604/iasc.2023.038235

    Abstract This study presents a novel and innovative approach to automatically translating Arabic Sign Language (ATSL) into spoken Arabic. The proposed solution utilizes a deep learning-based classification approach and the transfer learning technique to retrain 12 image recognition models. The image-based translation method maps sign language gestures to corresponding letters or words using distance measures and classification as a machine learning technique. The results show that the proposed model is more accurate and faster than traditional image-based models in classifying Arabic-language signs, with a translation accuracy of 93.7%. This research makes a significant contribution to the field of ATSL. It offers…
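
    The distance-measure classification step the abstract describes can be sketched as nearest-prototype matching: an embedded gesture is mapped to the letter whose stored prototype vector is closest. The three letter prototypes and the query vector below are made-up toy values, not features from the paper's retrained models.

```python
import numpy as np

# Hypothetical per-letter prototype embeddings (toy 3-D vectors).
prototypes = {
    "alif": np.array([1.0, 0.0, 0.0]),
    "ba":   np.array([0.0, 1.0, 0.0]),
    "ta":   np.array([0.0, 0.0, 1.0]),
}

def classify(embedding):
    """Return the label whose prototype has the smallest Euclidean distance."""
    return min(prototypes, key=lambda k: np.linalg.norm(prototypes[k] - embedding))

query = np.array([0.1, 0.9, 0.2])   # embedding of an unseen gesture image
print(classify(query))              # "ba" is the nearest prototype
```

    In practice the embeddings would come from one of the retrained recognition networks, with the distance measure acting as the final classifier over letter (or word) prototypes.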

  • Open Access

    ARTICLE

    An Efficient and Robust Hand Gesture Recognition System of Sign Language Employing Finetuned Inception-V3 and EfficientNet-B0 Network

    Adnan Hussain1, Sareer Ul Amin2, Muhammad Fayaz3, Sanghyun Seo4,*

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 3509-3525, 2023, DOI:10.32604/csse.2023.037258

    Abstract Hand Gesture Recognition (HGR) is a promising research area with an extensive range of applications, such as surgery, video game techniques, and sign language translation, where sign language is a complicated structured form of hand gestures. The fundamental building blocks of structured expressions in sign language are the arrangement of the fingers, the orientation of the hand, and the hand’s position concerning the body. The importance of HGR has increased due to the increasing number of touchless applications and the rapid growth of the hearing-impaired population. Therefore, real-time HGR is one of the most effective interaction methods between computers and…

  • Open Access

    ARTICLE

    Arabic Sign Language Gesture Classification Using Deer Hunting Optimization with Machine Learning Model

    Badriyya B. Al-onazi1, Mohamed K. Nour2, Hussain Alshahran3, Mohamed Ahmed Elfaki3, Mrim M. Alnfiai4, Radwa Marzouk5, Mahmoud Othman6, Mahir M. Sharif7, Abdelwahed Motwakel8,*

    CMC-Computers, Materials & Continua, Vol.75, No.2, pp. 3413-3429, 2023, DOI:10.32604/cmc.2023.035303

    Abstract Sign language includes the motion of the arms and hands to communicate with people with hearing disabilities. Several models have been available in the literature for sign language detection and classification for enhanced outcomes. But the latest advancements in computer vision enable us to perform signs/gesture recognition using deep neural networks. This paper introduces an Arabic Sign Language Gesture Classification using Deer Hunting Optimization with Machine Learning (ASLGC-DHOML) model. The presented ASLGC-DHOML technique mainly concentrates on recognising and classifying sign language gestures. The presented ASLGC-DHOML model primarily pre-processes the input gesture images and generates feature vectors using the densely connected…

  • Open Access

    ARTICLE

    Deep Learning-Based Sign Language Recognition for Hearing and Speaking Impaired People

    Mrim M. Alnfiai*

    Intelligent Automation & Soft Computing, Vol.36, No.2, pp. 1653-1669, 2023, DOI:10.32604/iasc.2023.033577

    Abstract Sign language is mainly utilized in communication with people who have hearing disabilities. Sign language is used to communicate with people having developmental impairments who have some or no interaction skills. Interaction via sign language becomes a fruitful means of communication for hearing- and speech-impaired persons. A hand gesture recognition system proves helpful for deaf and mute people by making use of a human-computer interface (HCI) and convolutional neural networks (CNN) for identifying the static indications of Indian Sign Language (ISL). This study introduces a shark smell optimization with deep learning based automated sign language recognition (SSODL-ASLR) model…

  • Open Access

    ARTICLE

    A Novel Action Transformer Network for Hybrid Multimodal Sign Language Recognition

    Sameena Javaid*, Safdar Rizvi

    CMC-Computers, Materials & Continua, Vol.74, No.1, pp. 523-537, 2023, DOI:10.32604/cmc.2023.031924

    Abstract Sign language fills the communication gap for people with hearing and speaking ailments. It includes both visual modalities: manual gestures consisting of movements of the hands, and non-manual gestures incorporating body movements including the head, facial expressions, eyes, shoulder shrugging, etc. Previously, the two gesture types have been detected separately; this may achieve better accuracy, but much communicational information is lost. A proper sign language mechanism is needed to detect manual and non-manual gestures together to convey the appropriate detailed message to others. Our novel proposed system contributes as Sign Language Action Transformer Network (SLATN), localizing hand, body, and facial gestures in video sequences. Here…

  • Open Access

    ARTICLE

    Continuous Sign Language Recognition Based on Spatial-Temporal Graph Attention Network

    Qi Guo, Shujun Zhang*, Hui Li

    CMES-Computer Modeling in Engineering & Sciences, Vol.134, No.3, pp. 1653-1670, 2023, DOI:10.32604/cmes.2022.021784

    Abstract Continuous sign language recognition (CSLR) is challenging due to the complexity of video background, hand gesture variability, and temporal modeling difficulties. This work proposes a CSLR method based on a spatial-temporal graph attention network to focus on essential features of video series. The method considers local details of sign language movements by taking the information on joints and bones as inputs and constructing a spatial-temporal graph to reflect inter-frame relevance and physical connections between nodes. The graph-based multi-head attention mechanism is utilized with adjacent matrix calculation for better local-feature exploration, and short-term motion correlation modeling is completed via a temporal…
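
    The adjacency-masked attention idea in this abstract can be sketched in a few lines: attention scores between skeleton joints are computed, then entries with no physical (bone) connection are zeroed out via the adjacency matrix before the softmax. The joint features and the 4-joint chain graph below are made-up toy values, not the paper's skeleton model, and this is a single head rather than the multi-head mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
num_joints, dim = 4, 8
X = rng.normal(size=(num_joints, dim))          # per-joint features for one frame

# Adjacency of a simple chain 0-1-2-3, plus self-loops.
A = np.eye(num_joints)
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1

Wq = rng.normal(size=(dim, dim))                # query/key/value projections
Wk = rng.normal(size=(dim, dim))
Wv = rng.normal(size=(dim, dim))

scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(dim)   # pairwise attention logits
scores = np.where(A > 0, scores, -np.inf)       # keep only physically connected pairs
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)         # row-wise softmax over neighbours
out = attn @ (X @ Wv)                           # aggregated joint features

print(out.shape)                                # (4, 8)
print(attn[0, 2] == 0 and attn[0, 3] == 0)      # non-adjacent joints get zero weight
```

    Stacking this spatial step with temporal convolutions over the frame axis gives the spatial-temporal pattern the paper builds on.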


  • Open Access

    ARTICLE

    A Light-Weight Deep Learning-Based Architecture for Sign Language Classification

    M. Daniel Nareshkumar1,*, B. Jaison2

    Intelligent Automation & Soft Computing, Vol.35, No.3, pp. 3501-3515, 2023, DOI:10.32604/iasc.2023.027848

    Abstract With advancements in computing powers and the overall quality of images captured on everyday cameras, a much wider range of possibilities has opened in various scenarios. This fact has several implications for deaf and dumb people as they have a chance to communicate with a greater number of people much easier. More than ever before, there is a plethora of info about sign language usage in the real world. Sign languages, and by extension the datasets available, are of two forms, isolated sign language and continuous sign language. The main difference between the two types is that in isolated sign…

  • Open Access

    ARTICLE

    ASL Recognition by the Layered Learning Model Using Clustered Groups

    Jungsoo Shin, Jaehee Jung*

    Computer Systems Science and Engineering, Vol.45, No.1, pp. 51-68, 2023, DOI:10.32604/csse.2023.030647

    Abstract American Sign Language (ASL) images can be used as a communication tool by determining numbers and letters using the shape of the fingers. Particularly, ASL can play a key role in communication for hearing-impaired persons and conveying information to other persons, because sign language is their only channel of expression. Representative ASL recognition methods primarily adopt images, sensors, and pose-based recognition techniques, and employ various gestures together with hand-shapes. This study briefly reviews these attempts at ASL recognition and provides an improved ASL classification model that attempts to develop a deep learning method with meta-layers. In the proposed model, the…

Displaying results 1-10 of 17.