Open Access

ARTICLE

Korean Sign Language Recognition and Sentence Generation through Data Augmentation

Soo-Yeon Jeong1, Ho-Yeon Jeong2, Sun-Young Ihm3,*

1 Division of Software Engineering, Pai Chai University, Daejeon, Republic of Korea
2 Department of Artificial Intelligence, Kyung Hee University, Yongin, Republic of Korea
3 Department of Computer Engineering, Pai Chai University, Daejeon, Republic of Korea

* Corresponding Author: Sun-Young Ihm. Email: email

(This article belongs to the Special Issue: Additive Manufacturing: Advances in Computational Modeling and Simulation)

Computers, Materials & Continua 2026, 87(2), 87 https://doi.org/10.32604/cmc.2026.074016

Abstract

Sign language is a primary mode of communication for individuals with hearing impairments, conveying meaning through hand shapes and hand movements. Unlike spoken or written languages, sign language must be recognized and interpreted from hand gestures captured in video data. However, sign language datasets remain relatively limited compared to those of other languages, which hinders the training and performance of deep learning models. Additionally, the distinct word order of sign language, which differs from that of spoken language, requires context-aware and natural sentence generation. To address these challenges, this study applies data augmentation techniques to build a Korean Sign Language dataset and train recognition models. Recognized words are then reconstructed into complete sentences. The sign recognition process uses OpenCV and MediaPipe to extract hand landmarks from sign language videos and analyzes hand position, orientation, and motion. The extracted features are converted into time-series data and fed into a Long Short-Term Memory (LSTM) model. The proposed recognition framework achieved an accuracy of up to 81.25%, while sentence generation achieved an accuracy of up to 95%. The proposed approach is expected to be applicable not only to Korean Sign Language but also to other low-resource sign languages for recognition and translation tasks.
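The landmark-to-time-series step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: MediaPipe Hands does return 21 (x, y, z) landmarks per detected hand, but the window length (`SEQ_LEN`), the wrist-relative normalization, and the zero-padding policy here are assumptions chosen for the sketch.

```python
import numpy as np

NUM_LANDMARKS = 21   # MediaPipe Hands yields 21 (x, y, z) points per hand
SEQ_LEN = 30         # hypothetical fixed window length for the LSTM input

def frames_to_sequence(frames, seq_len=SEQ_LEN):
    """Convert per-frame hand landmarks into one fixed-length time series.

    frames: list of (21, 3) arrays, one per video frame, in MediaPipe order
    (landmark 0 = wrist). Frames where no hand was detected may be None.
    Returns a (seq_len, 63) array suitable as one LSTM input sequence.
    """
    feats = []
    for f in frames:
        if f is None:                        # hand not detected: zero vector
            feats.append(np.zeros(NUM_LANDMARKS * 3, dtype=np.float32))
            continue
        f = np.asarray(f, dtype=np.float32)
        f = f - f[0]                         # translate so the wrist is the origin
        scale = np.linalg.norm(f[9]) or 1.0  # wrist-to-middle-MCP distance as scale
        feats.append((f / scale).ravel())    # flatten to a 63-dim feature vector
    seq = np.stack(feats) if feats else np.zeros((0, NUM_LANDMARKS * 3), dtype=np.float32)
    if len(seq) >= seq_len:                  # truncate clips longer than the window ...
        return seq[:seq_len]
    pad = np.zeros((seq_len - len(seq), NUM_LANDMARKS * 3), dtype=np.float32)
    return np.vstack([seq, pad])             # ... and zero-pad shorter ones
```

Normalizing each frame relative to the wrist and a fixed hand span makes the features invariant to where the signer stands in the frame, which is a common preprocessing choice before feeding landmark sequences to a recurrent model.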

Keywords

Korean sign language recognition; LSTM; data augmentation; sentence completion

Cite This Article

APA Style
Jeong, S., Jeong, H., & Ihm, S. (2026). Korean Sign Language Recognition and Sentence Generation through Data Augmentation. Computers, Materials & Continua, 87(2), 87. https://doi.org/10.32604/cmc.2026.074016
Vancouver Style
Jeong S, Jeong H, Ihm S. Korean Sign Language Recognition and Sentence Generation through Data Augmentation. Comput Mater Contin. 2026;87(2):87. https://doi.org/10.32604/cmc.2026.074016
IEEE Style
S. Jeong, H. Jeong, and S. Ihm, “Korean Sign Language Recognition and Sentence Generation through Data Augmentation,” Comput. Mater. Contin., vol. 87, no. 2, p. 87, 2026. https://doi.org/10.32604/cmc.2026.074016



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.