Open Access

ARTICLE

Multi-Model Fusion Framework Using Deep Learning for Visual-Textual Sentiment Classification

Israa K. Salman Al-Tameemi1,3, Mohammad-Reza Feizi-Derakhshi1,*, Saeed Pashazadeh2, Mohammad Asadpour2

1 Computerized Intelligence Systems Laboratory, Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, 51368, Iran
2 Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, 51368, Iran
3 State Company for Engineering Rehabilitation and Testing, Iraqi Ministry of Industry and Minerals, Baghdad, 10011, Iraq

* Corresponding Author: Mohammad-Reza Feizi-Derakhshi. Email: email

Computers, Materials & Continua 2023, 76(2), 2145-2177. https://doi.org/10.32604/cmc.2023.040997

Abstract

Multimodal Sentiment Analysis (SA) is gaining popularity because of its broad application potential. Existing studies have largely focused on the SA of single modalities, such as text or images, and therefore struggle to handle social media data that combines multiple modalities. Moreover, most multimodal research has concentrated on merely combining the two modalities rather than exploring their complex correlations, leading to unsatisfactory sentiment classification results. Motivated by this, we propose a new visual-textual sentiment classification model named Multi-Model Fusion (MMF), which uses a mixed fusion framework for SA to effectively capture both the essential information in, and the intrinsic relationship between, the visual and textual content. The proposed model comprises three deep neural networks. Two different neural networks extract the most emotionally relevant aspects of the image and text data, so that more discriminative features are gathered for accurate sentiment classification. A multichannel joint fusion model with a self-attention technique is then proposed to exploit the intrinsic correlation between visual and textual features and obtain emotionally rich information for joint sentiment classification. Finally, the outputs of the three classifiers are integrated using a decision fusion scheme to improve the robustness and generalizability of the proposed model. An interpretable visual-textual sentiment classification model is further developed using Local Interpretable Model-agnostic Explanations (LIME) to ensure the model's explainability and resilience. The proposed MMF model has been tested on four real-world sentiment datasets, achieving 99.78% accuracy on Binary_Getty (BG), 99.12% on Binary_iStock (BIS), 95.70% on Twitter, and 79.06% on the Multi-View Sentiment Analysis (MVSA) dataset. These results demonstrate the superior performance of our MMF model compared to single-model approaches and current state-of-the-art techniques based on standard model evaluation criteria.
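The abstract describes a decision-fusion stage that integrates the outputs of the three classifiers (text, image, and joint). The exact fusion rule is not specified in the abstract; as a minimal sketch, assuming each classifier emits class probabilities and that fusion is a weighted average of those probabilities (one common decision-fusion choice; the function name and weights below are hypothetical illustrations, not the authors' method):

```python
def fuse_decisions(prob_text, prob_image, prob_joint,
                   weights=(0.3, 0.3, 0.4)):
    """Hypothetical decision fusion: weighted average of per-class
    probabilities from the three classifiers, then argmax.
    Returns (predicted class index, fused probability list)."""
    wt, wi, wj = weights
    fused = [wt * t + wi * i + wj * j
             for t, i, j in zip(prob_text, prob_image, prob_joint)]
    return fused.index(max(fused)), fused

# Example with probabilities over (negative, positive):
label, fused = fuse_decisions([0.2, 0.8], [0.4, 0.6], [0.1, 0.9])
# label == 1 (positive), fused == [0.22, 0.78]
```

Majority voting over the three hard labels would be an equally plausible scheme; the weighted-average variant shown here preserves each classifier's confidence instead of discarding it.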

Keywords


Cite This Article

APA Style
Al-Tameemi, I.K.S., Feizi-Derakhshi, M., Pashazadeh, S., & Asadpour, M. (2023). Multi-model fusion framework using deep learning for visual-textual sentiment classification. Computers, Materials & Continua, 76(2), 2145-2177. https://doi.org/10.32604/cmc.2023.040997
Vancouver Style
Al-Tameemi IKS, Feizi-Derakhshi M, Pashazadeh S, Asadpour M. Multi-model fusion framework using deep learning for visual-textual sentiment classification. Comput Mater Contin. 2023;76(2):2145-2177. https://doi.org/10.32604/cmc.2023.040997
IEEE Style
I.K.S. Al-Tameemi, M. Feizi-Derakhshi, S. Pashazadeh, and M. Asadpour, "Multi-Model Fusion Framework Using Deep Learning for Visual-Textual Sentiment Classification," Comput. Mater. Contin., vol. 76, no. 2, pp. 2145-2177, 2023. https://doi.org/10.32604/cmc.2023.040997



This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.