Open Access
ARTICLE
When Large Language Models and Machine Learning Meet Multi-Criteria Decision Making: Fully Integrated Approach for Social Media Moderation
1 College of Computer, Information and Communications Technology, Cebu Technological University, Corner M.J. Cuenco Avenue & R. Palma St., Cebu City, 6000, Philippines
2 Center for Applied Mathematics and Operations Research, Cebu Technological University, Corner M.J. Cuenco Avenue & R. Palma St., Cebu City, 6000, Philippines
3 Centre for Operational Research and Logistics, University of Portsmouth, Portland Building, Portland Street, Portsmouth, PO1 3AH, UK
* Corresponding Author: Lanndon Ocampo. Email:
Computers, Materials & Continua 2026, 86(1), 1-26. https://doi.org/10.32604/cmc.2025.068104
Received 21 May 2025; Accepted 26 September 2025; Issue published 10 November 2025
Abstract
This study demonstrates a novel integration of large language models, machine learning, and multi-criteria decision-making to investigate self-moderation in small online communities, a topic under-explored compared to user behavior and platform-driven moderation on social media. The proposed methodological framework (1) utilizes large language models for social media post analysis and categorization, (2) employs k-means clustering for content characterization, and (3) incorporates the TODIM (Tomada de Decisão Interativa Multicritério) method to determine moderation strategies based on expert judgments. In general, the fully integrated framework leverages the strengths of these intelligent systems for a more systematic evaluation of large-scale decision problems. When applied to social media moderation, this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location. The application of this framework is demonstrated within Facebook groups. Eight distinct content clusters encompassing safety, harassment, diversity, and misinformation are identified. The analysis reveals a preference for content removal across all clusters, suggesting a cautious approach towards potentially harmful content. However, the framework also highlights the use of other moderation actions, such as account suspension, depending on the content category. These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities.
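The abstract describes the pipeline only at a high level. As a rough illustration of stages (2) and (3), the sketch below clusters stand-in post embeddings with k-means and then ranks candidate moderation actions using the standard TODIM dominance formulation. The embeddings, the decision matrix, the criteria weights, the four example actions, and the attenuation factor θ = 1 are all illustrative assumptions for demonstration, not values or code taken from the paper.

```python
# Minimal sketch of stages (2) and (3) of the framework, under assumed inputs:
# k-means clustering of (synthetic) post embeddings, then a standard TODIM
# ranking of moderation actions against expert-judged criteria.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stage (2): characterize content by clustering post embeddings into 8 groups,
# mirroring the paper's eight content clusters. Real embeddings would come
# from the large language model; random vectors stand in here.
embeddings = rng.normal(size=(500, 32))
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(embeddings)
print(np.bincount(clusters))                     # posts per cluster

# Stage (3): TODIM over an assumed decision matrix X (actions x criteria),
# with expert scores normalized to [0, 1]. Rows are hypothetical actions:
# remove content, suspend account, warn user, flag for review.
X = np.array([[0.9, 0.6, 0.7],
              [0.7, 0.8, 0.5],
              [0.5, 0.9, 0.6],
              [0.3, 0.7, 0.9]])
w = np.array([0.5, 0.3, 0.2])                    # assumed criteria weights
theta = 1.0                                      # assumed loss-attenuation factor

wr = w / w.max()                                 # weights relative to the reference criterion
m, n = X.shape
delta = np.zeros((m, m))                         # overall dominance of action i over j
for i in range(m):
    for j in range(m):
        for c in range(n):
            d = X[i, c] - X[j, c]
            if d > 0:                            # gain
                delta[i, j] += np.sqrt(wr[c] * d / wr.sum())
            elif d < 0:                          # loss, attenuated by theta
                delta[i, j] -= np.sqrt(wr.sum() * (-d) / wr[c]) / theta

score = delta.sum(axis=1)
xi = (score - score.min()) / (score.max() - score.min())  # global values in [0, 1]
print(np.argsort(-xi))                           # action indices, best to worst
```

In TODIM, gains and losses relative to each pairwise comparison are weighted asymmetrically, so a larger theta dampens the penalty for losses; ranking by the normalized global value xi then identifies the preferred moderation action per cluster.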
Keywords

Supplementary Material
Supplementary Material File
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

