Search Results (8)
  • Open Access

    ARTICLE

    A Multimodal Sentiment Analysis Method Based on Multi-Granularity Guided Fusion

    Zilin Zhang1, Yan Liu1,*, Jia Liu2, Senbao Hou3, Yuping Zhang1, Chenyuan Wang1

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-14, 2026, DOI:10.32604/cmc.2025.072286 - 09 December 2025

    Abstract With the growing demand for more comprehensive and nuanced sentiment understanding, Multimodal Sentiment Analysis (MSA) has gained significant traction in recent years and continues to attract widespread attention in the academic community. Despite notable advances, existing approaches still face critical challenges in both information modeling and modality fusion. On one hand, many current methods rely heavily on encoders to extract global features from each modality, which limits their ability to capture latent fine-grained emotional cues within modalities. On the other hand, prevailing fusion strategies often lack mechanisms to model semantic discrepancies across modalities and to…

  • Open Access

    ARTICLE

    Enhanced Multimodal Sentiment Analysis via Integrated Spatial Position Encoding and Fusion Embedding

    Chenquan Gan1,2,*, Xu Liu1, Yu Tang2, Xianrong Yu3, Qingyi Zhu1, Deepak Kumar Jain4

    CMC-Computers, Materials & Continua, Vol.85, No.3, pp. 5399-5421, 2025, DOI:10.32604/cmc.2025.068126 - 23 October 2025

    Abstract Multimodal sentiment analysis aims to understand emotions from text, speech, and video data. However, current methods often overlook the dominant role of text and suffer from feature loss during integration. Given the varying importance of each modality across different contexts, a central and pressing challenge in multimodal sentiment analysis lies in maximizing the use of rich intra-modal features while minimizing information loss during the fusion process. In response to these limitations, we propose a novel framework that integrates spatial position encoding and fusion embedding modules. In our model, text is…
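
    The spatial position encoding and fusion embedding modules named above are specific to this framework and are not reproduced here. For orientation only, a minimal sketch of the standard sinusoidal positional encoding that such modules typically build on (the PyTorch usage, shapes, and the example dimensions are illustrative assumptions):

```python
import math
import torch

def sinusoidal_position_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Standard sinusoidal position encoding, shape (seq_len, d_model)."""
    position = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)      # (seq_len, 1)
    div_term = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32)
                         * (-math.log(10000.0) / d_model))                  # (d_model/2,)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe

# Example: inject position information into token embeddings before fusion.
tokens = torch.randn(32, 50, 256)                      # (batch, seq_len, d_model)
tokens = tokens + sinusoidal_position_encoding(50, 256)
```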

  • Open Access

    ARTICLE

    TGICP: A Text-Gated Interaction Network with Inter-Sample Commonality Perception for Multimodal Sentiment Analysis

    Erlin Tian1, Shuai Zhao2,*, Min Huang2, Yushan Pan3,4, Yihong Wang3,4, Zuhe Li1

    CMC-Computers, Materials & Continua, Vol.85, No.1, pp. 1427-1456, 2025, DOI:10.32604/cmc.2025.066476 - 29 August 2025

    Abstract With the increasing importance of multimodal data in emotional expression on social media, mainstream methods for sentiment analysis have shifted from unimodal to multimodal approaches. However, the challenges of extracting high-quality emotional features and achieving effective interaction between different modalities remain two major obstacles in multimodal sentiment analysis. To address these challenges, this paper proposes a Text-Gated Interaction Network with Inter-Sample Commonality Perception (TGICP). Specifically, we utilize an Inter-sample Commonality Perception (ICP) module to extract common features from similar samples within the same modality, and use these common features to enhance the original features of…
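
    The gating and ICP modules of TGICP are not detailed in this excerpt. As a rough illustration of the general idea of text-gated interaction, here is a minimal sketch in which a gate computed from text features decides how much of another modality's features passes through (module name, feature sizes, and the sigmoid gate are assumptions, not the authors' design):

```python
import torch
import torch.nn as nn

class TextGate(nn.Module):
    """Gate audio/visual features with a signal derived from text features."""
    def __init__(self, d_model: int = 256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Sigmoid())

    def forward(self, text: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
        # text, other: (batch, d_model) pooled utterance-level features
        g = self.gate(torch.cat([text, other], dim=-1))   # gate values in (0, 1)
        return g * other                                  # text decides what passes through

# Example with random features standing in for encoder outputs.
gated_audio = TextGate()(torch.randn(4, 256), torch.randn(4, 256))
```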

  • Open Access

    ARTICLE

    Text-Image Feature Fine-Grained Learning for Joint Multimodal Aspect-Based Sentiment Analysis

    Tianzhi Zhang1, Gang Zhou1,*, Shuang Zhang2, Shunhang Li1, Yepeng Sun1, Qiankun Pi1, Shuo Liu3

    CMC-Computers, Materials & Continua, Vol.82, No.1, pp. 279-305, 2025, DOI:10.32604/cmc.2024.055943 - 03 January 2025

    Abstract Joint Multimodal Aspect-based Sentiment Analysis (JMASA) is a significant task in multimodal fine-grained sentiment analysis that combines two subtasks: Multimodal Aspect Term Extraction (MATE) and Multimodal Aspect-oriented Sentiment Classification (MASC). Currently, most existing JMASA models encode text and image features only at a basic level and neglect in-depth analysis of unimodal intrinsic features, which can lead to low accuracy in aspect term extraction and poor sentiment prediction due to insufficient learning of intra-modal features. Given this problem, we propose a Text-Image Feature Fine-grained…

  • Open Access

    ARTICLE

    A Robust Framework for Multimodal Sentiment Analysis with Noisy Labels Generated from Distributed Data Annotation

    Kai Jiang, Bin Cao*, Jing Fan

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.3, pp. 2965-2984, 2024, DOI:10.32604/cmes.2023.046348 - 11 March 2024

    Abstract Multimodal sentiment analysis utilizes multimodal data such as text, facial expressions and voice to detect people’s attitudes. With the advent of distributed data collection and annotation, we can easily obtain and share such multimodal data. However, due to professional discrepancies among annotators and lax quality control, noisy labels might be introduced. Recent research suggests that deep neural networks (DNNs) overfit noisy labels, leading to poor performance. To address this challenging problem, we present a Multimodal Robust Meta Learning framework (MRML) for multimodal sentiment analysis to resist noisy labels and correlate…

  • Open Access

    ARTICLE

    Multimodal Sentiment Analysis Based on a Cross-Modal Multihead Attention Mechanism

    Lujuan Deng, Boyi Liu*, Zuhe Li

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 1157-1170, 2024, DOI:10.32604/cmc.2023.042150 - 30 January 2024

    Abstract Multimodal sentiment analysis aims to understand people’s emotions and opinions from diverse data. Traditional multimodal fusion methods concatenate or multiply the modalities, which does not exploit the correlation information between them. To solve this problem, this paper proposes a model based on a multi-head attention mechanism. First, the original data is preprocessed. Then, the feature representation is converted into a sequence of word vectors, and positional encoding is introduced to better capture the semantic and sequential information in the input sequence. Next, the encoded input sequence is fed into…
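
    The abstract is truncated before the attention step, but cross-modal multi-head attention itself is a standard operation in which one modality supplies the queries and another supplies the keys and values. A minimal PyTorch sketch of that generic mechanism (class name, dimensions, and the text/audio roles are illustrative, not the authors' implementation):

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Queries from one modality attend over keys/values from another."""
    def __init__(self, d_model: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, text: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # text:  (batch, text_len,  d_model) -> queries
        # audio: (batch, audio_len, d_model) -> keys and values
        fused, _ = self.attn(query=text, key=audio, value=audio)
        return fused  # text features enriched with audio context

# Example usage with random features standing in for encoder outputs.
text = torch.randn(4, 50, 256)
audio = torch.randn(4, 120, 256)
fused = CrossModalAttention()(text, audio)   # (4, 50, 256)
```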

  • Open Access

    ARTICLE

    Leveraging Vision-Language Pre-Trained Model and Contrastive Learning for Enhanced Multimodal Sentiment Analysis

    Jieyu An1,*, Wan Mohd Nazmee Wan Zainon1, Binfen Ding2

    Intelligent Automation & Soft Computing, Vol.37, No.2, pp. 1673-1689, 2023, DOI:10.32604/iasc.2023.039763 - 21 June 2023

    Abstract Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modalities, such as text and image, to accurately assess sentiment. However, conventional approaches that rely on unimodal pre-trained models for feature extraction from each modality often overlook the intrinsic connections of semantic information between modalities. This limitation is attributed to their training on unimodal data and necessitates the use of complex fusion mechanisms for sentiment analysis. In this study, we present a novel approach that combines a vision-language pre-trained model with a proposed multimodal contrastive learning method. Our approach harnesses…
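
    Multimodal contrastive learning of the kind referenced here typically pulls matched image-text pairs together in a shared embedding space and pushes mismatched pairs apart. A minimal symmetric InfoNCE-style sketch of that idea (a generic formulation; the temperature value and embedding sizes are assumptions, and the authors' specific loss is not reproduced):

```python
import torch
import torch.nn.functional as F

def image_text_contrastive_loss(img_emb: torch.Tensor,
                                txt_emb: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE: matched (image, text) pairs are the positives."""
    img_emb = F.normalize(img_emb, dim=-1)             # (batch, d)
    txt_emb = F.normalize(txt_emb, dim=-1)              # (batch, d)
    logits = img_emb @ txt_emb.t() / temperature        # (batch, batch) similarities
    targets = torch.arange(img_emb.size(0))              # i-th image matches i-th text
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Example: embeddings as they might come from a vision-language encoder.
loss = image_text_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```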

  • Open Access

    ARTICLE

    Multimodal Sentiment Analysis Using BiGRU and Attention-Based Hybrid Fusion Strategy

    Zhizhong Liu*, Bin Zhou, Lingqiang Meng, Guangyu Huang

    Intelligent Automation & Soft Computing, Vol.37, No.2, pp. 1963-1981, 2023, DOI:10.32604/iasc.2023.038835 - 21 June 2023

    Abstract Recently, multimodal sentiment analysis has attracted increasing attention with the growing availability of complementary data streams, and it has great potential to surpass unimodal sentiment analysis. One challenge of multimodal sentiment analysis is how to design an efficient multimodal feature fusion strategy. Unfortunately, existing work typically considers feature-level fusion or decision-level fusion alone, and few studies focus on hybrid strategies that combine the two. To improve performance, we present a novel multimodal sentiment analysis model using a BiGRU and attention-based hybrid fusion strategy (BAHFS). Firstly, we apply BiGRU to…
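
    BiGRU encoding followed by attention-weighted fusion is a common pattern; a minimal sketch of one way it can look (layer sizes, the modality set, and the mean pooling are illustrative choices, not the BAHFS architecture itself):

```python
import torch
import torch.nn as nn

class BiGRUAttentionFusion(nn.Module):
    """Encode each modality with a BiGRU, then fuse pooled features by attention."""
    def __init__(self, input_dims, hidden: int = 128):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.GRU(d, hidden, batch_first=True, bidirectional=True) for d in input_dims
        )
        self.attn = nn.Linear(2 * hidden, 1)   # scores each modality's pooled feature

    def forward(self, modalities):
        pooled = []
        for x, gru in zip(modalities, self.encoders):
            out, _ = gru(x)                     # (batch, seq, 2*hidden)
            pooled.append(out.mean(dim=1))      # simple mean pooling over time
        stacked = torch.stack(pooled, dim=1)    # (batch, n_modalities, 2*hidden)
        weights = torch.softmax(self.attn(stacked), dim=1)
        return (weights * stacked).sum(dim=1)   # (batch, 2*hidden) fused feature

# Example: text, audio, and visual sequences with different feature sizes.
fusion = BiGRUAttentionFusion([300, 74, 35])
fused = fusion([torch.randn(4, 50, 300), torch.randn(4, 50, 74), torch.randn(4, 50, 35)])
```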
