Open Access

ARTICLE

Automating the Initial Development of Intent-Based Task-Oriented Dialog Systems Using Large Language Models: Experiences and Challenges

Ksenia Kharitonova1, David Pérez-Fernández2, Zoraida Callejas1,3, David Griol1,3,*

1 Department of Software Engineering, University of Granada, Granada, Spain
2 Department of Mathematics, Universidad Autónoma de Madrid, Madrid, Spain
3 Research Centre for Information and Communication Technologies (CITIC-UGR), University of Granada, Granada, Spain

* Corresponding Author: David Griol

(This article belongs to the Special Issue: Security and Robustness of Large Language Models (LLMs))

Computers, Materials & Continua 2026, 87(2), 43 https://doi.org/10.32604/cmc.2026.075777

Abstract

Building reliable intent-based, task-oriented dialog systems typically requires substantial manual effort: designers must derive intents, entities, responses, and control logic from raw conversational data, then iterate until the assistant behaves consistently. This paper investigates how far large language models (LLMs) can automate this development. We use two reference corpora, Let’s Go (English, public transport) and MEDIA (French, hotel booking), to prompt four LLM families (GPT-4o, Claude, Gemini, Mistral Small) and generate the core specifications required by the Rasa platform. These include intent sets with example utterances, entity definitions with slot mappings, response templates, and basic dialog flows. To structure this process, we introduce a model- and platform-agnostic pipeline with two phases. The first normalizes and validates LLM-generated artifacts, enforcing cross-file consistency and making slot usage explicit. The second uses a lightweight dialog harness that runs scripted tests and incrementally patches failure points until conversations complete reliably. Across eight projects, all models required some targeted repairs before training. After applying our pipeline, all projects reached at least 70% task completion (many above 84%), while NLU performance ranged from the mid-0.60s to 1.0 macro-F1 depending on domain breadth. These results show that, with modest guidance, current LLMs can produce workable end-to-end dialog prototypes directly from raw transcripts. Our main contributions are: (i) a reusable bootstrap method aligned with industry domain-specific languages (DSLs), (ii) a small set of high-impact corrective patterns, and (iii) a simple but effective harness for closed-loop refinement across conversational platforms.
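The closed-loop refinement described above can be sketched in a few lines. The following is a minimal, hypothetical illustration of the idea (run scripted conversations, find the first failing turn, apply a targeted patch, repeat until all scripts complete); all names here (`DialogStub`, `Turn`, `run_script`, `refine`) are illustrative inventions for this sketch, not the authors' code or the Rasa API.

```python
# Hypothetical sketch of a closed-loop dialog test harness.
# A trained assistant is reduced to a toy intent->response map so the
# patch-until-green loop itself is visible.

from dataclasses import dataclass, field

@dataclass
class DialogStub:
    """Toy stand-in for a trained assistant: maps user intents to responses."""
    rules: dict = field(default_factory=dict)

    def respond(self, user_intent: str):
        return self.rules.get(user_intent)

@dataclass
class Turn:
    user_intent: str
    expected_response: str

def run_script(bot: DialogStub, script: list):
    """Return the index of the first failing turn, or None if all turns pass."""
    for i, turn in enumerate(script):
        if bot.respond(turn.user_intent) != turn.expected_response:
            return i
    return None

def refine(bot: DialogStub, scripts: list, max_rounds: int = 10) -> int:
    """Patch the first observed failure each round until every script
    completes (or the round budget is spent). Returns patches applied."""
    patches = 0
    for _ in range(max_rounds):
        failing = [(s, run_script(bot, s)) for s in scripts]
        failing = [(s, i) for s, i in failing if i is not None]
        if not failing:
            break
        script, i = failing[0]
        bad = script[i]
        # "Patch": register the expected mapping. In the paper's setting,
        # this is where a targeted fix to the generated specification
        # files (intents, responses, rules) would be applied instead.
        bot.rules[bad.user_intent] = bad.expected_response
        patches += 1
    return patches
```

For example, a bot that only knows how to greet would fail a two-turn schedule-lookup script; one round of `refine` adds the missing mapping, after which `run_script` reports no failures.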

Keywords

Task-oriented dialog systems; large language models (LLMs); RASA; dialog automation; natural language understanding (NLU); slot filling; conversational AI; human-in-the-loop NLP

Cite This Article

APA Style
Kharitonova, K., Pérez-Fernández, D., Callejas, Z., & Griol, D. (2026). Automating the Initial Development of Intent-Based Task-Oriented Dialog Systems Using Large Language Models: Experiences and Challenges. Computers, Materials & Continua, 87(2), 43. https://doi.org/10.32604/cmc.2026.075777
Vancouver Style
Kharitonova K, Pérez-Fernández D, Callejas Z, Griol D. Automating the Initial Development of Intent-Based Task-Oriented Dialog Systems Using Large Language Models: Experiences and Challenges. Comput Mater Contin. 2026;87(2):43. https://doi.org/10.32604/cmc.2026.075777
IEEE Style
K. Kharitonova, D. Pérez-Fernández, Z. Callejas, and D. Griol, “Automating the Initial Development of Intent-Based Task-Oriented Dialog Systems Using Large Language Models: Experiences and Challenges,” Comput. Mater. Contin., vol. 87, no. 2, Art. no. 43, 2026. https://doi.org/10.32604/cmc.2026.075777



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.