Search Results (145)
  • Open Access

    ARTICLE

    Why Transformers Outperform LSTMs: A Comparative Study on Sarcasm Detection

    Palak Bari, Gurnur Bedi, Khushi Joshi, Anupama Jawale*

    Journal on Artificial Intelligence, Vol.7, pp. 499-508, 2025, DOI:10.32604/jai.2025.072531 - 17 November 2025

    Abstract This study investigates sarcasm detection in text using a dataset of 8095 sentences compiled from MUStARD and HuggingFace repositories, balanced across sarcastic and non-sarcastic classes. A sequential baseline model (LSTM) is compared with transformer-based models (RoBERTa and XLNet), integrated with attention mechanisms. Transformers were chosen for their proven ability to capture long-range contextual dependencies, whereas LSTM serves as a traditional benchmark for sequential modeling. Experimental results show that RoBERTa achieves 0.87 accuracy, XLNet 0.83, and LSTM 0.52. These findings confirm that transformer architectures significantly outperform recurrent models in sarcasm detection. Future work will incorporate multimodal… More >

  • Open Access

    ARTICLE

    GLAMSNet: A Gated-Linear Aspect-Aware Multimodal Sentiment Network with Alignment Supervision and External Knowledge Guidance

    Dan Wang1, Zhoubin Li1, Yuze Xia1,2,*, Zhenhua Yu1,*

    CMC-Computers, Materials & Continua, Vol.85, No.3, pp. 5823-5845, 2025, DOI:10.32604/cmc.2025.071656 - 23 October 2025

    Abstract Multimodal Aspect-Based Sentiment Analysis (MABSA) aims to detect sentiment polarity toward specific aspects by leveraging both textual and visual inputs. However, existing models suffer from weak aspect-image alignment, modality imbalance dominated by textual signals, and limited reasoning for implicit or ambiguous sentiments requiring external knowledge. To address these issues, we propose a unified framework named Gated-Linear Aspect-Aware Multimodal Sentiment Network (GLAMSNet). First, an input encoding module is employed to construct modality-specific and aspect-aware representations. Subsequently, we introduce an image–aspect correlation matching module to provide hierarchical supervision for visual-textual alignment. Building upon these components… More >

  • Open Access

    ARTICLE

    Enhanced Multimodal Sentiment Analysis via Integrated Spatial Position Encoding and Fusion Embedding

    Chenquan Gan1,2,*, Xu Liu1, Yu Tang2, Xianrong Yu3, Qingyi Zhu1, Deepak Kumar Jain4

    CMC-Computers, Materials & Continua, Vol.85, No.3, pp. 5399-5421, 2025, DOI:10.32604/cmc.2025.068126 - 23 October 2025

    Abstract Multimodal sentiment analysis aims to understand emotions from text, speech, and video data. However, current methods often overlook the dominant role of text and suffer from feature loss during integration. Given the varying importance of each modality across different contexts, a central and pressing challenge in multimodal sentiment analysis lies in maximizing the use of rich intra-modal features while minimizing information loss during the fusion process. In response to these critical limitations, we propose a novel framework that integrates spatial position encoding and fusion embedding modules to address these issues. In our model, text is… More >
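
The truncated abstract above does not spell out which encoding scheme the framework uses; as a point of reference, the classic fixed sinusoidal position encoding from the Transformer literature can be sketched as follows (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def sinusoidal_position_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Fixed sinusoidal encoding: PE[pos, 2i] = sin(pos / 10000^(2i/d)),
    PE[pos, 2i+1] = cos(pos / 10000^(2i/d)). Each position gets a unique
    pattern, and relative offsets are expressible as linear functions."""
    positions = np.arange(seq_len)[:, None]                       # (seq_len, 1)
    div = np.power(10000.0, np.arange(0, d_model, 2) / d_model)   # (d_model/2,)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(positions / div)   # even dims: sine
    pe[:, 1::2] = np.cos(positions / div)   # odd dims: cosine
    return pe

pe = sinusoidal_position_encoding(seq_len=50, d_model=16)
```

In practice the encoding is simply added to (or concatenated with) the token or patch embeddings before the attention layers, which is presumably where a spatial variant of this idea would slot into a multimodal model.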

  • Open Access

    ARTICLE

    AMSA: Adaptive Multi-Channel Image Sentiment Analysis Network with Focal Loss

    Xiaofang Jin, Yiran Li*, Yuying Yang

    CMC-Computers, Materials & Continua, Vol.85, No.3, pp. 5309-5326, 2025, DOI:10.32604/cmc.2025.067812 - 23 October 2025

    Abstract Given the importance of sentiment analysis in diverse environments, various methods are used for image sentiment analysis, including contextual sentiment analysis that utilizes character and scene relationships. However, most existing works employ character faces in conjunction with context, yet lack the capacity to analyze the emotions of characters in unconstrained environments, such as when their faces are obscured or blurred. Accordingly, this article presents the Adaptive Multi-Channel Sentiment Analysis Network (AMSA), a contextual image sentiment analysis framework, which consists of three channels: body, face, and context. AMSA employs Multi-task Cascaded Convolutional Networks (MTCNN) to detect… More >
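
The AMSA abstract names focal loss, but the truncation cuts off the details; a minimal sketch of the standard binary focal loss of Lin et al., which the network presumably builds on, is (all names illustrative):

```python
import numpy as np

def focal_loss(probs, labels, alpha=0.25, gamma=2.0):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).
    The (1 - p_t)^gamma factor down-weights easy, well-classified
    examples so training focuses on hard or rare ones.
    probs: predicted probability of the positive class; labels: 0/1."""
    probs = np.clip(probs, 1e-7, 1 - 1e-7)            # numerical safety
    p_t = np.where(labels == 1, probs, 1 - probs)     # prob of the true class
    alpha_t = np.where(labels == 1, alpha, 1 - alpha) # class-balance weight
    return np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t))

easy = focal_loss(np.array([0.95]), np.array([1]))  # confident & correct: tiny loss
hard = focal_loss(np.array([0.30]), np.array([1]))  # badly wrong: much larger loss
```

With gamma = 0 and alpha = 0.5 this reduces (up to a constant) to ordinary cross-entropy, which is a useful sanity check when tuning the two hyperparameters.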

  • Open Access

    ARTICLE

    Deep Learning-Based NLP Framework for Public Sentiment Analysis on Green Consumption: Evidence from Social Media

    Luyu Ma1,*, Xiu Cheng1,*, Zongyan Xing1, Yue Wu1, Weiwei Jiang2

    CMC-Computers, Materials & Continua, Vol.85, No.2, pp. 3921-3943, 2025, DOI:10.32604/cmc.2025.067786 - 23 September 2025

    Abstract Green consumption (GC) is crucial for achieving the Sustainable Development Goals (SDGs). However, few studies have explored public attitudes toward GC using social media data, missing potential public concerns captured through big data. To address this gap, this study collects and analyzes public attention toward GC using web crawler technology. Based on the data from Sina Weibo, we applied RoBERTa, an advanced NLP model based on transformer architecture, to conduct fine-grained sentiment analysis of the public’s attention, attitudes and hot topics on GC, demonstrating the potential of deep learning methods in capturing dynamic and contextual… More >

  • Open Access

    ARTICLE

    TGICP: A Text-Gated Interaction Network with Inter-Sample Commonality Perception for Multimodal Sentiment Analysis

    Erlin Tian1, Shuai Zhao2,*, Min Huang2, Yushan Pan3,4, Yihong Wang3,4, Zuhe Li1

    CMC-Computers, Materials & Continua, Vol.85, No.1, pp. 1427-1456, 2025, DOI:10.32604/cmc.2025.066476 - 29 August 2025

    Abstract With the increasing importance of multimodal data in emotional expression on social media, mainstream methods for sentiment analysis have shifted from unimodal to multimodal approaches. However, the challenges of extracting high-quality emotional features and achieving effective interaction between different modalities remain two major obstacles in multimodal sentiment analysis. To address these challenges, this paper proposes a Text-Gated Interaction Network with Inter-Sample Commonality Perception (TGICP). Specifically, we utilize an Inter-Sample Commonality Perception (ICP) module to extract common features from similar samples within the same modality, and use these common features to enhance the original features of… More >
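
The paper's exact gating formulation is not visible in this truncated abstract; a generic sketch of text-conditioned gated fusion, with illustrative shapes and names, might look like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def text_gated_fusion(text_feat, other_feat, W, b):
    """Text-conditioned gating: the text vector decides, per dimension,
    how much of the other modality's signal to let through before the
    two feature vectors are combined (a gated residual over text)."""
    gate = sigmoid(text_feat @ W + b)     # (d,), each entry in (0, 1)
    return text_feat + gate * other_feat  # text dominates; other is gated in

rng = np.random.default_rng(0)
d = 8
text = rng.standard_normal(d)    # e.g. a pooled text embedding
audio = rng.standard_normal(d)   # e.g. a pooled audio embedding
fused = text_gated_fusion(text, audio,
                          W=rng.standard_normal((d, d)) * 0.1,
                          b=np.zeros(d))
```

Because the gate is bounded in (0, 1), the non-text modality can never overwrite the text representation outright, which matches the text-dominant framing these multimodal abstracts describe.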

  • Open Access

    REVIEW

    Exploring the Effectiveness of Machine Learning and Deep Learning Algorithms for Sentiment Analysis: A Systematic Literature Review

    Jungpil Shin1,*, Wahidur Rahman2, Tanvir Ahmed2, Bakhtiar Mazrur2, Md. Mohsin Mia2, Romana Idress Ekfa2, Md. Sajib Rana2, Pankoo Kim3,*

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 4105-4153, 2025, DOI:10.32604/cmc.2025.066910 - 30 July 2025

    Abstract Sentiment Analysis, a significant domain within Natural Language Processing (NLP), focuses on extracting and interpreting subjective information—such as emotions, opinions, and attitudes—from textual data. With the increasing volume of user-generated content on social media and digital platforms, sentiment analysis has become essential for deriving actionable insights across various sectors. This study presents a systematic literature review of sentiment analysis methodologies, encompassing traditional machine learning algorithms, lexicon-based approaches, and recent advancements in deep learning techniques. The review follows a structured protocol comprising three phases: planning, execution, and analysis/reporting. During the execution phase, 67 peer-reviewed articles were… More >

  • Open Access

    ARTICLE

    Improving Fashion Sentiment Detection on X through Hybrid Transformers and RNNs

    Bandar Alotaibi1,*, Aljawhara Almutarie2, Shuaa Alotaibi3, Munif Alotaibi4

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 4451-4467, 2025, DOI:10.32604/cmc.2025.066050 - 30 July 2025

    Abstract X (formerly known as Twitter) is one of the most prominent social media platforms, enabling users to share short messages (tweets) with the public or their followers. It serves various purposes, from real-time news dissemination and political discourse to trend spotting and consumer engagement. X has emerged as a key space for understanding shifting brand perceptions, consumer preferences, and product-related sentiment in the fashion industry. However, the platform’s informal, dynamic, and context-dependent language poses substantial challenges for sentiment analysis, mainly when attempting to detect sarcasm, slang, and nuanced emotional tones. This study introduces a hybrid… More >

  • Open Access

    ARTICLE

    Enhancing Arabic Sentiment Analysis with Pre-Trained CAMeLBERT: A Case Study on Noisy Texts

    Fay Aljomah, Lama Aldhafeeri, Maha Alfadel, Sultanh Alshahrani, Qaisar Abbas*, Sarah Alhumoud*

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 5317-5335, 2025, DOI:10.32604/cmc.2025.062478 - 30 July 2025

    Abstract Dialectal Arabic text classification (DA-TC) enables sentiment analysis of recent Arabic social media content, but poses many challenges owing to the rich morphology of the Arabic language and its wide range of dialect variations. The availability of annotated datasets is limited, and preprocessing of the noisy content is even more challenging, sometimes resulting in the removal of important cues of sentiment from the input. To overcome such problems, this study investigates the applicability of using transfer learning based on pre-trained transformer models to classify sentiment in Arabic texts with high accuracy. Specifically,… More >

  • Open Access

    ARTICLE

    Optimizing Sentiment Integration in Image Captioning Using Transformer-Based Fusion Strategies

    Komal Rani Narejo1, Hongying Zan1,*, Kheem Parkash Dharmani2, Orken Mamyrbayev3,*, Ainur Akhmediyarova4, Zhibek Alibiyeva4, Janna Alimkulova5

    CMC-Computers, Materials & Continua, Vol.84, No.2, pp. 3407-3429, 2025, DOI:10.32604/cmc.2025.065872 - 03 July 2025

    Abstract While automatic image captioning systems have made notable progress in the past few years, generating captions that fully convey sentiment remains a considerable challenge. Although existing models achieve strong performance in visual recognition and factual description, they often fail to account for the emotional context that is naturally present in human-generated captions. To address this gap, we propose the Sentiment-Driven Caption Generator (SDCG), which combines transformer-based visual and textual processing with multi-level fusion. RoBERTa is used for extracting sentiment from textual input, while visual features are handled by the Vision Transformer (ViT). These features are… More >

Displaying results 1–10 of 145.