
Open Access

ARTICLE

LLM-Powered Multimodal Reasoning for Fake News Detection

Md. Ahsan Habib1, Md. Anwar Hussen Wadud2, M. F. Mridha3,*, Md. Jakir Hossen4,*
1 Department of Software Engineering, University of Frontier Technology, Gazipur, 1750, Bangladesh
2 Department of Computer Science and Engineering, Sunamganj Science and Technology University, Sunamganj, 3000, Bangladesh
3 Department of Computer Science and Engineering, American International University-Bangladesh, Dhaka, 1229, Bangladesh
4 Center for Advanced Analytics (CAA), COE for Artificial Intelligence, Faculty of Engineering & Technology (FET), Multimedia University, Melaka, 75450, Malaysia
* Corresponding Author: M. F. Mridha. Email: email; Md. Jakir Hossen. Email: email
(This article belongs to the Special Issue: Visual and Large Language Models for Generalized Applications)

Computers, Materials & Continua https://doi.org/10.32604/cmc.2025.070235

Received 11 July 2025; Accepted 09 September 2025; Published online 05 January 2026

Abstract

The problem of fake news detection (FND) is becoming increasingly important in natural language processing (NLP) because of the rapid dissemination of misleading information on the web. Large language models (LLMs) such as GPT-4 excel in natural language understanding tasks but can still struggle to distinguish fact from fiction, particularly when applied in the wild. A key limitation of existing FND methods is that they consider only unimodal data (e.g., text or images), while richer multimodal signals (e.g., user behaviour, temporal dynamics) that are crucial for full-context understanding are neglected. To overcome these limitations, we introduce M3-FND (Multimodal Misinformation Mitigation for False News Detection), a novel methodological framework that integrates LLMs with multimodal data sources to perform context-aware veracity assessments. Our method is a hybrid system that combines image-text alignment, user credibility profiling, and temporal pattern recognition, strengthened by a feedback loop that provides real-time correction of downstream errors. We use contextual reinforcement learning to schedule prompt updates and adjust the classifier threshold based on the latest multimodal input, enabling the model to adapt to evolving misinformation strategies. M3-FND is evaluated on three diverse datasets, FakeNewsNet, Twitter15, and Weibo, which contain both textual and visual social media content. Experiments show that M3-FND significantly outperforms conventional and LLM-based baselines in accuracy, F1-score, and AUC on all benchmarks. Our results underscore the importance of multimodal cues and adaptive learning for effective and timely detection of fake news.

Keywords

Fake news detection; multimodal learning; large language models; prompt engineering; instruction tuning; reinforcement learning; misinformation mitigation