
Open Access

ARTICLE

IPKE-MoE: Mixture-of-Experts with Iterative Prompts and Knowledge-Enhanced LLM for Chinese Sensitive Words Detection

Longcang Wang, Yongbing Gao*, Xinguang Wang, Xin Liu
School of Digital and Intelligence Industry, Inner Mongolia University of Science and Technology, Baotou, 014010, China
* Corresponding Author: Yongbing Gao. Email: email
(This article belongs to the Special Issue: Sentiment Analysis for Social Media Data: Lexicon-Based and Large Language Model Approaches)

Computers, Materials & Continua https://doi.org/10.32604/cmc.2025.072889

Received 05 September 2025; Accepted 12 November 2025; Published online 12 December 2025

Abstract

Existing Chinese sensitive-text detection methods struggle to recognize implicit sensitive word variants. To address this, the paper proposes the IPKE-MoE framework, which consists of three parts: a sensitive word variant extraction framework, a sensitive word variant knowledge enhancement layer, and a mixture-of-experts (MoE) classification layer. First, sensitive word variants are precisely extracted through dynamic iterative prompt templates and the context-aware capabilities of large language models (LLMs). Next, the extracted variants are used to build a knowledge enhancement layer on top of the RoCBert model: variants are located with an n-gram algorithm, and their variant types are mapped to embedding vectors that are fused with the original word vectors. Finally, an MoE classification layer with sensitive word, sentiment, and semantic experts decouples the presence of sensitive words from text toxicity. The framework thereby combines the comprehension ability of LLMs with the discriminative ability of smaller models. Two experiments demonstrate that the variant extraction framework based on dynamically iterated prompt templates outperforms other baseline prompt templates, and that the RoCBert model equipped with the variant knowledge enhancement layer and the MoE classification layer achieves superior classification performance compared to other baselines.
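The abstract describes two model-side components: a knowledge enhancement layer that fuses variant-type embeddings with the token embeddings at positions located by n-gram matching, and a classification head that gates over sensitive word, sentiment, and semantic experts. The following is a minimal PyTorch sketch of those two ideas only; the layer sizes, number of variant types, gating design, and all class names are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class VariantKnowledgeEnhancement(nn.Module):
    """Add a variant-type embedding to token embeddings at located variant positions."""

    def __init__(self, hidden_size: int, num_variant_types: int):
        super().__init__()
        # Index 0 is reserved for "no variant at this position" and stays a zero vector.
        self.type_embed = nn.Embedding(num_variant_types + 1, hidden_size, padding_idx=0)

    def forward(self, token_embeds: torch.Tensor, variant_type_ids: torch.Tensor) -> torch.Tensor:
        # token_embeds:     (batch, seq_len, hidden)  e.g. RoCBert input embeddings
        # variant_type_ids: (batch, seq_len)          0 where n-gram matching found no variant
        return token_embeds + self.type_embed(variant_type_ids)


class MoEClassifier(nn.Module):
    """Soft gating over three experts (sensitive word, sentiment, semantic)."""

    def __init__(self, hidden_size: int, num_labels: int, num_experts: int = 3):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_size, hidden_size),
                nn.GELU(),
                nn.Linear(hidden_size, num_labels),
            )
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(hidden_size, num_experts)

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        # pooled: (batch, hidden) sentence representation, e.g. the [CLS] vector
        weights = torch.softmax(self.gate(pooled), dim=-1)            # (batch, num_experts)
        logits = torch.stack([e(pooled) for e in self.experts], 1)    # (batch, num_experts, num_labels)
        return (weights.unsqueeze(-1) * logits).sum(dim=1)            # (batch, num_labels)


if __name__ == "__main__":
    batch, seq_len, hidden = 2, 16, 768
    enhance = VariantKnowledgeEnhancement(hidden_size=hidden, num_variant_types=5)
    head = MoEClassifier(hidden_size=hidden, num_labels=2)
    tokens = torch.randn(batch, seq_len, hidden)
    types = torch.zeros(batch, seq_len, dtype=torch.long)
    types[0, 3] = 2  # pretend a variant of type 2 was located at position 3
    fused = enhance(tokens, types)
    print(head(fused[:, 0]).shape)  # torch.Size([2, 2])

In this sketch the gate mixes expert logits with a softmax over the pooled representation; whether the paper uses soft mixing, top-k routing, or per-expert auxiliary losses is not stated in the abstract.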

Keywords

Sensitive word variant detection; variant knowledge enhancement; LLM; MoE