Open Access
ARTICLE
LLM-Powered Multimodal Reasoning for Fake News Detection
1 Department of Software Engineering, University of Frontier Technology, Gazipur, 1750, Bangladesh
2 Department of Computer Science and Engineering, Sunamgonj Science and Technology University, Sunamganj, 3000, Bangladesh
3 Department of Computer Science and Engineering, American International University-Bangladesh, Dhaka, 1229, Bangladesh
4 Center for Advanced Analytics (CAA), COE for Artificial Intelligence, Faculty of Engineering & Technology (FET), Multimedia University, Melaka, 75450, Malaysia
* Corresponding Authors: M. F. Mridha. Email: ; Md. Jakir Hossen. Email:
(This article belongs to the Special Issue: Visual and Large Language Models for Generalized Applications)
Computers, Materials & Continua 2026, 87(1), 77 https://doi.org/10.32604/cmc.2025.070235
Received 11 July 2025; Accepted 09 September 2025; Issue published 10 February 2026
Abstract
The problem of fake news detection (FND) is becoming increasingly important in the field of natural language processing (NLP) because of the rapid dissemination of misleading information on the web. Large language models (LLMs) such as GPT-4 excel in natural language understanding tasks but can still struggle to distinguish between fact and fiction, particularly when applied in the wild. A key challenge of existing FND methods is that they consider only unimodal data (e.g., images), while richer multimodal data (e.g., user behaviour, temporal dynamics), which is crucial for full-context understanding, is neglected. To overcome these limitations, we introduce M3-FND (Multimodal Misinformation Mitigation for False News Detection), a novel methodological framework that integrates LLMs with multimodal data sources to perform context-aware veracity assessments. Our method proposes a hybrid system that combines image-text alignment, user credibility profiling, and temporal pattern recognition, strengthened by a natural feedback loop that provides real-time feedback for correcting downstream errors. We use contextual reinforcement learning to schedule prompt updates and adjust the classifier threshold based on the latest multimodal input, enabling the model to adapt to changing misinformation attack strategies. M3-FND is tested on three diverse datasets, FakeNewsNet, Twitter15, and Weibo, which contain both text and visual social media content. Experiments show that M3-FND significantly outperforms conventional and LLM-based baselines in terms of accuracy, F1-score, and AUC on all benchmarks. Our results indicate the importance of employing multimodal cues and adaptive learning for effective and timely detection of fake news.
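The abstract describes weighted fusion of three modality signals (image-text alignment, user credibility, temporal patterns) with a classifier threshold that adapts from feedback. The paper's actual architecture is not given here; the following is a minimal, purely illustrative sketch of that idea, in which all names (`Signals`, `AdaptiveFusionClassifier`), the weights, and the threshold-update rule are assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """Per-post scores in [0, 1]; higher means more likely fake (assumed convention)."""
    image_text_mismatch: float   # image-text alignment signal
    user_suspicion: float        # user credibility profiling signal
    temporal_anomaly: float      # temporal pattern recognition signal

class AdaptiveFusionClassifier:
    """Weighted late fusion with a threshold nudged by a feedback loop."""

    def __init__(self, weights=(0.4, 0.3, 0.3), threshold=0.5, lr=0.05):
        self.weights = weights      # illustrative fusion weights
        self.threshold = threshold  # decision threshold, adapted online
        self.lr = lr                # step size for threshold updates

    def score(self, s: Signals) -> float:
        w1, w2, w3 = self.weights
        return (w1 * s.image_text_mismatch
                + w2 * s.user_suspicion
                + w3 * s.temporal_anomaly)

    def predict(self, s: Signals) -> bool:
        return self.score(s) >= self.threshold

    def feedback(self, s: Signals, is_fake: bool) -> None:
        """Correct downstream errors: shift the threshold away from the mistake."""
        pred = self.predict(s)
        if pred and not is_fake:        # false positive -> be stricter
            self.threshold = min(1.0, self.threshold + self.lr)
        elif not pred and is_fake:      # false negative -> be more sensitive
            self.threshold = max(0.0, self.threshold - self.lr)
```

For example, a post scoring (0.9, 0.8, 0.7) fuses to 0.81 and is flagged; if a human labels it genuine, the threshold rises from 0.5 to 0.55, making the next flag harder to trigger.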
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

