Open Access

REVIEW

Natural Language Processing with Transformer-Based Models: A Meta-Analysis

Charles Munyao*, John Ndia

School of Computing and Information Technology, Murang’a University of Technology, Murang’a, 75-10200, Kenya

* Corresponding Author: Charles Munyao.

Journal on Artificial Intelligence 2025, 7, 329-346. https://doi.org/10.32604/jai.2025.069226

Abstract

The natural language processing (NLP) domain has witnessed significant advancements with the emergence of transformer-based models, which have reshaped the text understanding and generation landscape. While their capabilities are well recognized, systematic syntheses of how these models perform across tasks, scale efficiently, adapt to domains, and address ethical challenges remain limited. The aim of this paper was therefore to analyze the performance of transformer-based models across various NLP tasks, as well as their scalability, domain adaptation, and ethical implications. This meta-analysis synthesizes findings from 25 peer-reviewed studies on transformer-based NLP models, adhering to the PRISMA framework. Relevant papers were sourced from electronic databases, including IEEE Xplore, Springer, ACM Digital Library, Elsevier, PubMed, and Google Scholar. The findings highlight the superior performance of transformers over conventional approaches, attributed to self-attention mechanisms and pre-trained language representations. Despite these advantages, challenges such as high computational costs, data bias, and hallucination persist. The study provides new perspectives by underscoring the necessity for future research to optimize transformer architectures for efficiency, address ethical AI concerns, and enhance generalization across languages. This paper contributes valuable insights into the current trends, limitations, and potential improvements in transformer-based models for NLP.
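As a minimal illustration of the self-attention mechanism the abstract credits for transformer performance, the sketch below implements generic scaled dot-product attention in NumPy. This is a standard textbook formulation, not code from the reviewed studies; the toy dimensions (3 tokens, embedding size 4) are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise token affinities
    # Numerically stable row-wise softmax over the affinity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output token is a weighted mix of all values

# Toy example: 3 tokens with embedding dimension 4
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
# Self-attention uses the same sequence as queries, keys, and values
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4): one contextualized vector per input token
```

Because every token attends to every other token, the output vectors mix information across the whole sequence in a single step, which is the property the surveyed studies attribute the models' strong performance to.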

Keywords

Natural language processing; transformers; pre-trained language representations; self-attention mechanisms; ethical AI

Supplementary Material

Supplementary Material File

Cite This Article

APA Style
Munyao, C., & Ndia, J. (2025). Natural Language Processing with Transformer-Based Models: A Meta-Analysis. Journal on Artificial Intelligence, 7(1), 329–346. https://doi.org/10.32604/jai.2025.069226
Vancouver Style
Munyao C, Ndia J. Natural Language Processing with Transformer-Based Models: A Meta-Analysis. J Artif Intell. 2025;7(1):329–346. https://doi.org/10.32604/jai.2025.069226
IEEE Style
C. Munyao and J. Ndia, “Natural Language Processing with Transformer-Based Models: A Meta-Analysis,” J. Artif. Intell., vol. 7, no. 1, pp. 329–346, 2025. https://doi.org/10.32604/jai.2025.069226



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.