Search Results (4)
  • Open Access

    ARTICLE

    A Robust Framework for Multimodal Sentiment Analysis with Noisy Labels Generated from Distributed Data Annotation

    Kai Jiang, Bin Cao*, Jing Fan

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.3, pp. 2965-2984, 2024, DOI:10.32604/cmes.2023.046348

    Abstract Multimodal sentiment analysis utilizes multimodal data such as text, facial expressions and voice to detect people’s attitudes. With the advent of distributed data collection and annotation, we can easily obtain and share such multimodal data. However, due to professional discrepancies among annotators and lax quality control, noisy labels might be introduced. Recent research suggests that deep neural networks (DNNs) overfit noisy labels, leading to poor performance. To address this challenging problem, we present a Multimodal Robust Meta Learning framework (MRML) for multimodal sentiment analysis to resist noisy labels and correlate distinct modalities simultaneously. Specifically, we…
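
    The abstract is truncated before the method details. As a rough illustration of the general idea named in the title, resisting noisy labels via meta-learning, here is a minimal PyTorch sketch of learning-to-reweight style sample reweighting: per-sample weights on a noisy batch are chosen so that a one-step virtual update helps a small trusted batch. The model, dimensions, and reweighting scheme are illustrative assumptions, not the authors' MRML.

        # Hypothetical sketch: meta-learned sample reweighting against
        # noisy labels (learning-to-reweight style). Not the authors'
        # MRML; the fusion model and sizes are toy stand-ins.
        import torch
        import torch.nn.functional as F
        from torch import nn
        from torch.func import functional_call

        class FusionNet(nn.Module):
            """Toy multimodal classifier: concatenate per-modality features."""
            def __init__(self, d_text=32, d_audio=16, n_classes=3):
                super().__init__()
                self.head = nn.Linear(d_text + d_audio, n_classes)

            def forward(self, text, audio):
                return self.head(torch.cat([text, audio], dim=-1))

        def reweight_noisy_batch(model, noisy, clean, inner_lr=0.1):
            """Per-sample weights for a noisy batch, chosen so that a
            one-step virtual SGD update reduces the trusted-batch loss."""
            params = dict(model.named_parameters())
            xt, xa, y = noisy
            eps = torch.zeros(y.shape[0], requires_grad=True)

            logits = functional_call(model, params, (xt, xa))
            per_sample = F.cross_entropy(logits, y, reduction="none")
            grads = torch.autograd.grad((eps * per_sample).sum(),
                                        list(params.values()),
                                        create_graph=True)
            # Virtual update on a parameter copy (differentiable in eps).
            virtual = {k: v - inner_lr * g
                       for (k, v), g in zip(params.items(), grads)}

            cxt, cxa, cy = clean
            clean_loss = F.cross_entropy(
                functional_call(model, virtual, (cxt, cxa)), cy)
            eps_grad, = torch.autograd.grad(clean_loss, eps)
            # Upweight samples whose gradients help the trusted batch.
            w = torch.clamp(-eps_grad, min=0.0)
            return (w / w.sum()).detach() if w.sum() > 0 else w.detach()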

  • Open Access

    ARTICLE

    Multimodal Sentiment Analysis Based on a Cross-Modal Multihead Attention Mechanism

    Lujuan Deng, Boyi Liu*, Zuhe Li

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 1157-1170, 2024, DOI:10.32604/cmc.2023.042150

    Abstract Multimodal sentiment analysis aims to understand people’s emotions and opinions from diverse data. Traditional fusion methods concatenate or multiply the various modalities, but this does not exploit the correlation information between modalities. To solve this problem, this paper proposes a model based on a multi-head attention mechanism. First, the original data is preprocessed. Then, the feature representation is converted into a sequence of word vectors, and positional encoding is introduced to better capture the semantic and sequential information in the input sequence. Next, the input coding sequence is fed into the transformer model for further…
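
    Because the abstract cuts off mid-pipeline, the sketch below shows only the generic building block the title names: cross-modal multi-head attention, in which one modality's token sequence queries another's. The dimensions and the residual-plus-norm arrangement are assumptions, not the paper's exact architecture.

        # Hypothetical sketch of cross-modal multi-head attention:
        # tokens of one modality attend over another. Sizes are
        # illustrative only, not taken from the paper.
        import torch
        from torch import nn

        class CrossModalAttention(nn.Module):
            def __init__(self, d_model=128, n_heads=8):
                super().__init__()
                self.attn = nn.MultiheadAttention(d_model, n_heads,
                                                  batch_first=True)
                self.norm = nn.LayerNorm(d_model)

            def forward(self, query_mod, context_mod):
                # query_mod: (B, Lq, d), e.g. text;
                # context_mod: (B, Lk, d), e.g. audio frames.
                fused, _ = self.attn(query_mod, context_mod, context_mod)
                return self.norm(query_mod + fused)  # residual + norm

        # Usage: 20 text tokens attend over 50 audio frames.
        text = torch.randn(4, 20, 128)
        audio = torch.randn(4, 50, 128)
        out = CrossModalAttention()(text, audio)  # shape (4, 20, 128)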

  • Open Access

    ARTICLE

    Leveraging Vision-Language Pre-Trained Model and Contrastive Learning for Enhanced Multimodal Sentiment Analysis

    Jieyu An*, Wan Mohd Nazmee Wan Zainon, Binfen Ding

    Intelligent Automation & Soft Computing, Vol.37, No.2, pp. 1673-1689, 2023, DOI:10.32604/iasc.2023.039763

    Abstract Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modalities, such as text and image, to accurately assess sentiment. However, conventional approaches that rely on unimodal pre-trained models for feature extraction from each modality often overlook the intrinsic connections of semantic information between modalities. This limitation stems from their training on unimodal data and necessitates the use of complex fusion mechanisms for sentiment analysis. In this study, we present a novel approach that combines a vision-language pre-trained model with a proposed multimodal contrastive learning method. Our approach harnesses the power of transfer learning…
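
    The truncation hides the proposed contrastive method, so the sketch below shows only the standard ingredient it builds on: a symmetric CLIP-style InfoNCE loss that pulls matched image-text pairs together in a shared embedding space. The temperature value and in-batch-negatives construction are generic assumptions, not the authors' formulation.

        # Hypothetical sketch: symmetric CLIP-style contrastive
        # (InfoNCE) loss over paired image/text embeddings. A generic
        # objective, not the paper's exact method.
        import torch
        import torch.nn.functional as F

        def contrastive_loss(img_emb, txt_emb, temperature=0.07):
            img = F.normalize(img_emb, dim=-1)
            txt = F.normalize(txt_emb, dim=-1)
            logits = img @ txt.t() / temperature  # (B, B) similarities
            targets = torch.arange(img.shape[0], device=img.device)
            # Matched pairs lie on the diagonal; score each against all
            # in-batch negatives, in both directions symmetrically.
            return 0.5 * (F.cross_entropy(logits, targets)
                          + F.cross_entropy(logits.t(), targets))

        # Usage with stand-in embeddings from any vision-language encoder.
        loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))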

  • Open Access

    ARTICLE

    Multimodal Sentiment Analysis Using BiGRU and Attention-Based Hybrid Fusion Strategy

    Zhizhong Liu*, Bin Zhou, Lingqiang Meng, Guangyu Huang

    Intelligent Automation & Soft Computing, Vol.37, No.2, pp. 1963-1981, 2023, DOI:10.32604/iasc.2023.038835

    Abstract Recently, multimodal sentiment analysis has attracted increasing attention with the popularity of complementary data streams, and it has great potential to surpass unimodal sentiment analysis. One challenge of multimodal sentiment analysis is how to design an efficient multimodal feature fusion strategy. Unfortunately, existing work considers only feature-level fusion or decision-level fusion, and few research works focus on hybrid strategies that combine the two. To improve the performance of multimodal sentiment analysis, we present a novel multimodal sentiment analysis model using BiGRU and an attention-based hybrid fusion strategy (BAHFS). Firstly, we apply BiGRU to learn the unimodal features of…
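
    The abstract stops after the BiGRU step. The sketch below is one plausible reading of a hybrid strategy: a BiGRU encoder with attention pooling per modality, a feature-level fused head, and decision-level averaging across per-modality heads. All sizes and the averaging rule are assumptions, not the published BAHFS design.

        # Hypothetical sketch of hybrid (feature-level + decision-level)
        # fusion with BiGRU encoders. Not the published BAHFS; all
        # dimensions and the averaging rule are illustrative.
        import torch
        from torch import nn

        class BiGRUEncoder(nn.Module):
            """Encode one modality with a BiGRU plus attention pooling."""
            def __init__(self, d_in, d_hid=64):
                super().__init__()
                self.gru = nn.GRU(d_in, d_hid, batch_first=True,
                                  bidirectional=True)
                self.score = nn.Linear(2 * d_hid, 1)

            def forward(self, x):                        # x: (B, L, d_in)
                h, _ = self.gru(x)                       # (B, L, 2*d_hid)
                a = torch.softmax(self.score(h), dim=1)  # weights over L
                return (a * h).sum(dim=1)                # (B, 2*d_hid)

        class HybridFusion(nn.Module):
            def __init__(self, d_text=300, d_audio=74, n_classes=3,
                         d_hid=64):
                super().__init__()
                self.enc_t = BiGRUEncoder(d_text, d_hid)
                self.enc_a = BiGRUEncoder(d_audio, d_hid)
                self.fused_head = nn.Linear(4 * d_hid, n_classes)  # feature-level
                self.head_t = nn.Linear(2 * d_hid, n_classes)      # decision-level
                self.head_a = nn.Linear(2 * d_hid, n_classes)

            def forward(self, text, audio):
                zt, za = self.enc_t(text), self.enc_a(audio)
                fused = self.fused_head(torch.cat([zt, za], dim=-1))
                # Decision-level fusion: average the prediction streams.
                return (fused + self.head_t(zt) + self.head_a(za)) / 3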
