Special Issues

Security and Robustness of Large Language Models (LLMs)

Submission Deadline: 01 December 2025 (closed)

Guest Editors

Dr. Tinghui Ouyang

Email: ouyang.tinghui.gb@u.tsukuba.ac.jp

Affiliation: Center for Computational Sciences, University of Tsukuba, Tsukuba, Ibaraki 305-8577, Japan


Research Interests: Data science, machine learning, LLMs, anomaly detection


Summary

Large Language Models (LLMs) have achieved remarkable performance across a wide range of natural language processing tasks. However, their widespread deployment also introduces significant security and robustness concerns, including hallucination, failures in textual out-of-distribution (OOD) detection, adversarial vulnerabilities, data poisoning, and privacy leakage. This special issue aims to bring together cutting-edge research that addresses these challenges to ensure the safe and secure use of LLMs in real-world applications.


We invite high-quality, original research papers and review articles on topics including, but not limited to, the following:
· Hallucination in LLMs: Understanding and mitigating fabricated or misleading content generation in LLMs.
· Textual OOD Detection and Defense: Identifying and mitigating issues when LLMs encounter out-of-distribution inputs.
· Adversarial Attacks and Robustness: Techniques for detecting, defending against, and mitigating adversarial manipulation of LLM-generated content.
· Poisoning Attacks on Training Data: Understanding how malicious data injections impact model behavior and exploring countermeasures.
· Privacy and Confidentiality Risks: Analyzing risks related to unintentional leakage of sensitive or proprietary information.
· Trustworthy AI for LLMs: Developing explainable, interpretable, and auditable frameworks to enhance LLM security.
· Secure Model Fine-Tuning and Deployment: Ensuring secure transfer learning, reinforcement learning, and continual learning in LLMs.
· Evaluation Metrics and Benchmarks: Designing robust security evaluation frameworks for LLM safety and performance.


Keywords

LLM, security, robustness analysis, trustworthy AI, quality management

Published Papers


  • Open Access Article

    Automating the Initial Development of Intent-Based Task-Oriented Dialog Systems Using Large Language Models: Experiences and Challenges

    Ksenia Kharitonova, David Pérez-Fernández, Zoraida Callejas, David Griol
    CMC-Computers, Materials & Continua, DOI: 10.32604/cmc.2026.075777
    (This article belongs to the Special Issue: Security and Robustness of Large Language Models (LLMs))
    Abstract: Building reliable intent-based, task-oriented dialog systems typically requires substantial manual effort: designers must derive intents, entities, responses, and control logic from raw conversational data, then iterate until the assistant behaves consistently. This paper investigates how far large language models (LLMs) can automate this development, using two reference corpora, Let’s Go (English, public transport) and MEDIA (French, hotel booking), to prompt four LLM families (GPT-4o, Claude, Gemini, Mistral Small) and generate the core specifications required by the Rasa platform. These include intent sets with example utterances, entity definitions with slot mappings, response templates, …
