
Open Access

ARTICLE

Korean Sign Language Recognition and Sentence Generation through Data Augmentation

Soo-Yeon Jeong1, Ho-Yeon Jeong2, Sun-Young Ihm3,*
1 Division of Software Engineering, Pai Chai University, Daejeon, Republic of Korea
2 Department of Artificial Intelligence, Kyung Hee University, Yongin, Republic of Korea
3 Department of Computer Engineering, Pai Chai University, Daejeon, Republic of Korea
* Corresponding Author: Sun-Young Ihm. Email: email
(This article belongs to the Special Issue: Additive Manufacturing: Advances in Computational Modeling and Simulation)

Computers, Materials & Continua https://doi.org/10.32604/cmc.2026.074016

Received 30 September 2025; Accepted 15 January 2026; Published online 18 February 2026

Abstract

Sign language is a primary mode of communication for individuals with hearing impairments, conveying meaning through hand shapes and movements. Unlike spoken or written languages, sign language recognition relies on interpreting hand gestures captured in video data. However, sign language datasets remain scarce compared with those of other languages, which hinders the training and performance of deep learning models. Additionally, sign language follows a word order distinct from that of spoken language, so recognized words must be assembled into context-aware, natural sentences. To address these challenges, this study applies data augmentation techniques to build a Korean Sign Language dataset and train recognition models; the recognized words are then reconstructed into complete sentences. The recognition pipeline uses OpenCV and MediaPipe to extract hand landmarks from sign language videos and analyzes hand position, orientation, and motion. The extracted features are converted into time-series data and fed into a Long Short-Term Memory (LSTM) model. The proposed recognition framework achieved an accuracy of up to 81.25%, while sentence generation achieved an accuracy of up to 95%. The proposed approach is expected to be applicable not only to Korean Sign Language but also to other low-resource sign languages for recognition and translation tasks.
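To illustrate the kind of augmentation the abstract describes, the sketch below applies random scaling and Gaussian jitter to a hand-landmark time series of the shape MediaPipe Hands produces (21 landmarks per frame). This is a minimal, generic example of landmark-sequence augmentation; the function name, parameter values, and choice of transforms are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def augment_landmarks(seq, rng, jitter_std=0.01, scale_range=0.1):
    """Augment a (frames, landmarks, 2) hand-landmark time series.

    Applies a random global scale and per-coordinate Gaussian jitter,
    two common augmentations for skeleton/landmark data. Parameter
    values here are illustrative, not taken from the paper.
    """
    scale = 1.0 + rng.uniform(-scale_range, scale_range)  # e.g. 0.9x-1.1x
    noise = rng.normal(0.0, jitter_std, size=seq.shape)   # small positional jitter
    return seq * scale + noise

rng = np.random.default_rng(0)
# e.g. a 30-frame clip of the 21 MediaPipe hand landmarks, (x, y) each
seq = rng.random((30, 21, 2))
aug = augment_landmarks(seq, rng)
print(aug.shape)  # (30, 21, 2) -- same shape, perturbed coordinates
```

Sequences augmented this way keep the shape expected by a time-series model such as an LSTM, so each source clip can yield several training samples.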

Keywords

Korean sign language recognition; LSTM; data augmentation; sentence completion