TY - EJOU
AU - Bari, Palak
AU - Bedi, Gurnur
AU - Joshi, Khushi
AU - Jawale, Anupama
TI - Why Transformers Outperform LSTMs: A Comparative Study on Sarcasm Detection
T2 - Journal on Artificial Intelligence
PY - 2025
VL - 7
IS - 1
SN - 2579-003X
AB - This study investigates sarcasm detection in text using a dataset of 8095 sentences compiled from MUStARD and HuggingFace repositories, balanced across sarcastic and non-sarcastic classes. A sequential baseline model (LSTM) is compared with transformer-based models (RoBERTa and XLNet), integrated with attention mechanisms. Transformers were chosen for their proven ability to capture long-range contextual dependencies, whereas LSTM serves as a traditional benchmark for sequential modeling. Experimental results show that RoBERTa achieves 0.87 accuracy, XLNet 0.83, and LSTM 0.52. These findings confirm that transformer architectures significantly outperform recurrent models in sarcasm detection. Future work will incorporate multimodal features and error analysis to further improve robustness.
KW - Attention mechanism
KW - LSTM
KW - natural language processing
KW - sarcasm detection
KW - sentiment analysis
KW - transformer models
KW - RoBERTa
KW - XLNet
DO - 10.32604/jai.2025.072531
ER - 