Open Access

ARTICLE


When Large Language Models and Machine Learning Meet Multi-Criteria Decision Making: Fully Integrated Approach for Social Media Moderation

Noreen Fuentes1, Janeth Ugang1, Narcisan Galamiton1, Suzette Bacus1, Samantha Shane Evangelista2, Fatima Maturan2, Lanndon Ocampo2,3,*

1 College of Computer, Information and Communications Technology, Cebu Technological University, Corner M.J. Cuenco Avenue & R. Palma St., Cebu City, 6000, Philippines
2 Center for Applied Mathematics and Operations Research, Cebu Technological University, Corner M.J. Cuenco Avenue & R. Palma St., Cebu City, 6000, Philippines
3 Centre for Operational Research and Logistics, University of Portsmouth, Portland Building, Portland Street, Portsmouth, PO1 3AH, UK

* Corresponding Author: Lanndon Ocampo. Email: email

Computers, Materials & Continua 2026, 86(1), 1-26. https://doi.org/10.32604/cmc.2025.068104

Abstract

This study demonstrates a novel integration of large language models, machine learning, and multi-criteria decision-making to investigate self-moderation in small online communities, a topic under-explored compared to user behavior and platform-driven moderation on social media. The proposed methodological framework (1) utilizes large language models for social media post analysis and categorization, (2) employs k-means clustering for content characterization, and (3) incorporates the TODIM (Tomada de Decisão Interativa Multicritério) method to determine moderation strategies based on expert judgments. The fully integrated framework leverages the strengths of these intelligent systems for a more systematic evaluation of large-scale decision problems. Applied to social media moderation, this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location. The application of the framework is demonstrated within Facebook groups. Eight distinct content clusters encompassing safety, harassment, diversity, and misinformation are identified. The analysis reveals a preference for content removal across all clusters, suggesting a cautious approach toward potentially harmful content. However, the framework also highlights the use of other moderation actions, such as account suspension, depending on the content category. These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities.
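The TODIM ranking step named in the abstract can be illustrated with a minimal sketch. The scores, weights, and moderation actions below are hypothetical placeholders, not values from the paper; the function implements the standard TODIM dominance and global-value formulas (relative criterion weights, square-root gain/loss dominance with an attenuation factor θ for losses, and min–max normalization of the overall dominance).

```python
import math

def todim_rank(scores, weights, theta=1.0):
    """Rank alternatives with the TODIM method.

    scores  : matrix [alternative][criterion] of normalized benefit scores
    weights : one weight per criterion (need not sum to 1)
    theta   : attenuation factor amplifying perceived losses
    Returns the global value of each alternative in [0, 1].
    """
    ref = max(weights)
    wr = [w / ref for w in weights]   # weights relative to the reference criterion
    total = sum(wr)
    n_alt, n_crit = len(scores), len(weights)

    def phi(i, j, c):
        """Dominance of alternative i over j on criterion c (gain/loss form)."""
        diff = scores[i][c] - scores[j][c]
        if diff > 0:                                   # gain
            return math.sqrt(wr[c] * diff / total)
        if diff < 0:                                   # loss, attenuated by theta
            return -math.sqrt(total * (-diff) / wr[c]) / theta
        return 0.0

    # Overall dominance of each alternative over every other alternative
    dom = [sum(phi(i, j, c) for j in range(n_alt) for c in range(n_crit))
           for i in range(n_alt)]
    lo, hi = min(dom), max(dom)
    return [(d - lo) / (hi - lo) for d in dom]         # global values, min-max scaled

# Hypothetical example: three moderation actions scored on three criteria
actions = ["remove content", "suspend account", "issue warning"]
scores = [[0.9, 0.8, 0.9],   # remove content
          [0.5, 0.6, 0.4],   # suspend account
          [0.2, 0.3, 0.1]]   # issue warning
weights = [0.5, 0.3, 0.2]
xi = todim_rank(scores, weights)
best = actions[max(range(len(xi)), key=lambda i: xi[i])]
```

In this toy setup the first action dominates on every criterion, so it receives the highest global value; in the paper's pipeline, the score matrix would instead come from expert judgments aggregated per content cluster.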

Keywords

Self-moderation; user-generated content; k-means clustering; TODIM; large language models

Supplementary Material

Supplementary Material File

Cite This Article

APA Style
Fuentes, N., Ugang, J., Galamiton, N., Bacus, S., Evangelista, S.S. et al. (2026). When Large Language Models and Machine Learning Meet Multi-Criteria Decision Making: Fully Integrated Approach for Social Media Moderation. Computers, Materials & Continua, 86(1), 1–26. https://doi.org/10.32604/cmc.2025.068104
Vancouver Style
Fuentes N, Ugang J, Galamiton N, Bacus S, Evangelista SS, Maturan F, et al. When Large Language Models and Machine Learning Meet Multi-Criteria Decision Making: Fully Integrated Approach for Social Media Moderation. Comput Mater Contin. 2026;86(1):1–26. https://doi.org/10.32604/cmc.2025.068104
IEEE Style
N. Fuentes et al., “When Large Language Models and Machine Learning Meet Multi-Criteria Decision Making: Fully Integrated Approach for Social Media Moderation,” Comput. Mater. Contin., vol. 86, no. 1, pp. 1–26, 2026. https://doi.org/10.32604/cmc.2025.068104



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.