Open Access
ARTICLE
Gloss-Internal Graph Construction and Encoding for Sign Language Translation
1 Department of Computer Science, Swinburne Vietnam, FPT University, Ho Chi Minh City, Vietnam
2 Bellini College of Artificial Intelligence, Cybersecurity and Computing, University of South Florida, Tampa, FL, USA
* Corresponding Author: Sam Nguyen-Xuan. Email:
Computers, Materials & Continua 2026, 88(1), 59. https://doi.org/10.32604/cmc.2026.078727
Received 06 January 2026; Accepted 13 March 2026; Issue published 08 May 2026
Abstract
We propose a Gloss-Internal Graph Construction and Encoding framework that represents compound glosses as directed, labeled graphs and integrates them into a Transformer via a graph-aware encoder. We evaluate our approach against Rule-Based Gloss Decomposition (RBGD) and Linear Gloss Sequence Encoding (LGSE) baselines on ASLG-PC12 and PHOENIX-2014T. Results show consistent improvements over both baselines, achieving gains of up to +3.2 BLEU-4 over LGSE and +7.0 BLEU-4 over RBGD on ASLG-PC12. On PHOENIX-2014T, our method yields gains of up to 1.9 BLEU-4 on the development set and 2.4 BLEU-4 on the test set. Ablation studies further indicate that agreement and reference edges contribute most to translation quality, that attention pooling outperforms mean pooling for graph-level aggregation, and that a single message-passing step offers a reasonable accuracy–efficiency trade-off for the compact gloss-internal graphs encountered in practice. These results suggest that explicit modeling of gloss-internal structure is a promising direction for sign language translation.
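To make the abstract's pipeline concrete, the following is a minimal NumPy sketch, not the authors' implementation: a compound gloss is represented as a small directed graph with labeled edges (the "agreement" and "reference" labels, node names, dimensions, and weight initializations here are illustrative assumptions), a single message-passing step updates node features, and attention pooling aggregates them into one graph-level vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gloss-internal graph for a compound gloss: nodes are
# sub-gloss units, edges are directed relations with labels (the
# edge labels mirror the "agreement"/"reference" types named in the
# abstract; everything else here is an illustrative assumption).
nodes = ["MOTHER", "FATHER"]
edges = [(0, 1, "agreement"), (1, 0, "reference")]

d = 8  # node embedding size (arbitrary for this sketch)
H = rng.normal(size=(len(nodes), d))          # initial node features
W_rel = {lab: 0.1 * rng.normal(size=(d, d))   # one transform per edge label
         for lab in {lab for _, _, lab in edges}}

# One message-passing step: each node sums label-specific messages
# from its in-neighbors, then a residual update with a nonlinearity.
M = np.zeros_like(H)
for src, dst, lab in edges:
    M[dst] += H[src] @ W_rel[lab]
H1 = np.tanh(H + M)

# Attention pooling: score each node against a learned query vector,
# softmax over nodes, and take the weighted sum as the graph vector.
q = rng.normal(size=d)
scores = H1 @ q
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()
g = alpha @ H1  # graph-level embedding, shape (d,)
```

The graph vector `g` would then be fed to the Transformer encoder alongside token embeddings; mean pooling corresponds to replacing `alpha` with uniform weights `1 / len(nodes)`.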
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

