Open Access

ARTICLE


Why Transformers Outperform LSTMs: A Comparative Study on Sarcasm Detection

Palak Bari, Gurnur Bedi, Khushi Joshi, Anupama Jawale*

Department of Information Technology, Narsee Monjee College of Commerce and Economics, Mumbai, 400056, Maharashtra, India

* Corresponding Author: Anupama Jawale.

Journal on Artificial Intelligence 2025, 7(1), 499–508. https://doi.org/10.32604/jai.2025.072531

Abstract

This study investigates sarcasm detection in text using a dataset of 8095 sentences compiled from the MUStARD corpus and HuggingFace repositories, balanced across sarcastic and non-sarcastic classes. A sequential baseline model (LSTM) is compared with transformer-based models (RoBERTa and XLNet) built on attention mechanisms. Transformers were chosen for their proven ability to capture long-range contextual dependencies, whereas the LSTM serves as a traditional benchmark for sequential modeling. Experimental results show that RoBERTa achieves 0.87 accuracy, XLNet 0.83, and the LSTM 0.52. These findings confirm that transformer architectures significantly outperform recurrent models in sarcasm detection. Future work will incorporate multimodal features and error analysis to further improve robustness.
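To make the comparison concrete, the following is a minimal sketch of how one of the transformer classifiers could be fine-tuned for binary sarcasm detection with the HuggingFace Transformers library. It is not the authors' exact pipeline: the file name sarcasm.csv, the 80/20 split, and all hyperparameters are illustrative assumptions.

from datasets import load_dataset
from transformers import (
    RobertaForSequenceClassification,
    RobertaTokenizer,
    Trainer,
    TrainingArguments,
)

# Hypothetical CSV with "text" and "label" columns (1 = sarcastic, 0 = not).
data = load_dataset("csv", data_files="sarcasm.csv")["train"]
data = data.train_test_split(test_size=0.2, seed=42)  # assumed 80/20 split

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

def tokenize(batch):
    # Pad/truncate to a fixed length so every batch has a uniform shape.
    return tokenizer(batch["text"], padding="max_length",
                     truncation=True, max_length=128)

data = data.map(tokenize, batched=True)

# Two output labels: sarcastic vs. non-sarcastic.
model = RobertaForSequenceClassification.from_pretrained("roberta-base",
                                                         num_labels=2)

args = TrainingArguments(
    output_dir="roberta-sarcasm",
    num_train_epochs=3,              # illustrative; the paper may differ
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=data["train"],
                  eval_dataset=data["test"])
trainer.train()
print(trainer.evaluate())  # reports eval loss; accuracy needs a metric fn

The same loop would apply to XLNet by swapping in its tokenizer and sequence-classification head; the LSTM baseline would instead be trained from scratch on the tokenized sequences.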

Keywords

Attention mechanism; LSTM; natural language processing; sarcasm detection; sentiment analysis; transformer models; RoBERTa; XLNet

Cite This Article

APA Style
Bari, P., Bedi, G., Joshi, K., & Jawale, A. (2025). Why Transformers Outperform LSTMs: A Comparative Study on Sarcasm Detection. Journal on Artificial Intelligence, 7(1), 499–508. https://doi.org/10.32604/jai.2025.072531
Vancouver Style
Bari P, Bedi G, Joshi K, Jawale A. Why Transformers Outperform LSTMs: A Comparative Study on Sarcasm Detection. J Artif Intell. 2025;7(1):499–508. https://doi.org/10.32604/jai.2025.072531
IEEE Style
P. Bari, G. Bedi, K. Joshi, and A. Jawale, “Why Transformers Outperform LSTMs: A Comparative Study on Sarcasm Detection,” J. Artif. Intell., vol. 7, no. 1, pp. 499–508, 2025. https://doi.org/10.32604/jai.2025.072531



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.