
Open Access

ARTICLE

Gloss-Internal Graph Construction and Encoding for Sign Language Translation

Sam Nguyen-Xuan1,*, Han Nguyen2
1 Department of Computer Science, Swinburne Vietnam, FPT University, Ho Chi Minh City, Vietnam
2 Bellini College of Artificial Intelligence, Cybersecurity and Computing, University of South Florida, Tampa, FL, USA
* Corresponding Author: Sam Nguyen-Xuan

Computers, Materials & Continua https://doi.org/10.32604/cmc.2026.078727

Received 06 January 2026; Accepted 13 March 2026; Published online 13 April 2026

Abstract

We propose a Gloss-Internal Graph Construction and Encoding framework that represents compound glosses as directed, labeled graphs and integrates them into a Transformer via a graph-aware encoder. We evaluate our approach against Rule-Based Gloss Decomposition (RBGD) and Linear Gloss Sequence Encoding (LGSE) baselines on ASLG-PC12 and PHOENIX-2014T. Results show consistent improvements over both baselines, achieving gains of up to +3.2 BLEU-4 over LGSE and +7.0 BLEU-4 over RBGD on ASLG-PC12. On PHOENIX-2014T, our method yields gains of up to +1.9 BLEU-4 on the development set and +2.4 BLEU-4 on the test set. Ablation studies further indicate that agreement and reference edges contribute most to translation quality, that attention pooling outperforms mean pooling for graph-level aggregation, and that a single message-passing step offers a reasonable accuracy–efficiency trade-off for the compact gloss-internal graphs encountered in practice. These results suggest that explicit modeling of gloss-internal structure is a promising direction for sign language translation.
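To make the pipeline in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of the three components it names: building a directed, labeled graph over the sub-units of a compound gloss, running a single message-passing step with per-label edge weights, and aggregating node features into one gloss-level vector via attention pooling. The sub-gloss names, edge labels, feature values, and weights are illustrative assumptions only.

```python
import math

def build_gloss_graph(sub_glosses, edges):
    """Adjacency lists for a directed, labeled graph: node -> [(neighbor, label)].
    `edges` is a list of (src_index, dst_index, label) triples."""
    adj = {i: [] for i in range(len(sub_glosses))}
    for src, dst, label in edges:
        adj[src].append((dst, label))
    return adj

def message_pass(node_feats, adj, edge_weights):
    """One message-passing step: each node adds the label-weighted
    features of its out-neighbors to its own feature vector."""
    updated = []
    for i, feat in enumerate(node_feats):
        agg = list(feat)
        for j, label in adj[i]:
            w = edge_weights.get(label, 0.0)
            agg = [a + w * x for a, x in zip(agg, node_feats[j])]
        updated.append(agg)
    return updated

def attention_pool(node_feats, scores):
    """Softmax-weighted sum of node features -> one graph-level vector.
    Here the attention scores are given; in a learned model they would
    come from a scoring network over the node features."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(node_feats[0])
    return [sum(w * f[d] for w, f in zip(weights, node_feats)) for d in range(dim)]

# Toy compound gloss with three sub-gloss nodes and 2-d features
# (all names and numbers are made up for illustration).
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
adj = build_gloss_graph(
    ["IX-3p", "GIVE", "IX-1p"],
    [(1, 0, "agreement"), (1, 2, "agreement"), (0, 2, "reference")],
)
edge_weights = {"agreement": 0.5, "reference": 0.25}
pooled = attention_pool(message_pass(feats, adj, edge_weights), scores=[0.0, 1.0, 0.0])
print(pooled)
```

In a full model, `pooled` would replace the single embedding of the compound gloss in the Transformer encoder's input sequence; since gloss-internal graphs are small, one message-passing step already lets every sub-gloss see its direct agreement and reference partners, which matches the accuracy–efficiency trade-off noted in the abstract.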

Keywords

Sign language translation; gloss-to-text translation; gloss-internal graph; sign language gloss; transformer-based models