Open Access

ARTICLE


Multimodal Social Media Fake News Detection Based on Similarity Inference and Adversarial Networks

Fangfang Shan1,2,*, Huifang Sun1,2, Mengyi Wang1,2

1 College of Computer, Zhongyuan University of Technology, Zhengzhou, 450007, China
2 Henan Key Laboratory of Cyberspace Situation Awareness, Zhengzhou, 450001, China

* Corresponding Author: Fangfang Shan.

Computers, Materials & Continua 2024, 79(1), 581-605. https://doi.org/10.32604/cmc.2024.046202

Abstract

As social networks grow increasingly complex, contemporary fake news often pairs textual descriptions of events with corresponding images or videos, and such multimodal fake news is more likely to mislead users. Early research focused primarily on text-based features for fake news detection, with relatively little exploration of learning shared representations across multimodal (text and visual) content. To address these limitations, this paper introduces a multimodal fake news detection model based on similarity reasoning and adversarial networks. The model employs Bidirectional Encoder Representations from Transformers (BERT) and a Text Convolutional Neural Network (Text-CNN) to extract textual features, and the pre-trained Visual Geometry Group 19-layer network (VGG-19) to extract visual features. It then establishes similarity representations between the Text-CNN textual features and the visual features through similarity learning and reasoning. Finally, these features are fused to improve detection accuracy, and an adversarial network is employed to model the relationship between fake news and events. The proposed model is validated on publicly available multimodal datasets from Weibo and Twitter. Experimental results show that it achieves superior performance on Twitter, with an accuracy of 86%, surpassing both traditional unimodal models and existing multimodal models; on the Weibo dataset, it likewise outperforms the benchmark models across multiple metrics. The application of similarity reasoning and adversarial networks thus substantially improves multimodal fake news detection. However, the current work is limited to fusing text and image modalities; future research should integrate features from additional modalities to represent the multifaceted information of fake news more comprehensively.
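The fusion step described in the abstract (measuring cross-modal similarity and combining it with the modality features) can be illustrated with a minimal, dependency-free sketch. This is not the authors' implementation: the function names, the use of cosine similarity, and the concatenation scheme are illustrative assumptions only; in the paper the similarity representation is learned, not a fixed metric.

```python
import math

def cosine_similarity(u, v):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def fuse_features(text_feat, img_feat):
    # Toy fusion: concatenate both modality vectors and append
    # their similarity score as an extra feature for the classifier.
    sim = cosine_similarity(text_feat, img_feat)
    return text_feat + img_feat + [sim]

# Example: projected text and image features of the same dimension.
fused = fuse_features([1.0, 0.0], [0.0, 1.0])
```

In practice the text features (from BERT/Text-CNN) and image features (from VGG-19) would first be projected into a shared space by learned layers before any similarity is computed.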

Cite This Article

APA Style
Shan, F., Sun, H., & Wang, M. (2024). Multimodal social media fake news detection based on similarity inference and adversarial networks. Computers, Materials & Continua, 79(1), 581-605. https://doi.org/10.32604/cmc.2024.046202
Vancouver Style
Shan F, Sun H, Wang M. Multimodal social media fake news detection based on similarity inference and adversarial networks. Comput Mater Contin. 2024;79(1):581-605. https://doi.org/10.32604/cmc.2024.046202
IEEE Style
F. Shan, H. Sun, and M. Wang, "Multimodal Social Media Fake News Detection Based on Similarity Inference and Adversarial Networks," Comput. Mater. Contin., vol. 79, no. 1, pp. 581-605, 2024. https://doi.org/10.32604/cmc.2024.046202



This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.