Special Issues

Applied NLP with Large Language Models: AI Applications Across Domains

Submission Deadline: 31 May 2026

Guest Editors

Prof. Dr. Hsien-Tsung Chang

Email: smallpig@cgu.edu.tw

Affiliation: Bachelor Program in Artificial Intelligence, Chang Gung University, Tao-Yuan, 333, Taiwan


Research Interests: artificial intelligence, natural language processing, affective computing, LLM applications, information retrieval, search engine



Dr. Ammar Amjad

Email: ammar@nycu.edu.tw

Affiliation: Department of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu, 112304, Taiwan


Research Interests: speech processing, natural language processing, deep learning, affective computing, and signal processing for human–computer interaction



Prof. Dr. Lal Khan

Email: lalkhan@gachon.ac.kr

Affiliation: Department of AI and SW, Gachon University, Seongnam, 13120, Republic of Korea

Homepage:  https://sites.google.com/view/hcilab/members

Research Interests: AI, NLP, sentiment analysis, LLM



Summary

Applied NLP with Large Language Models explores how advanced natural language processing techniques, powered by large-scale pretrained models, can be applied to diverse domains. It covers computational solutions for understanding, generating, and reasoning with natural language, enabling applications such as intelligent information retrieval, text summarization, machine translation, knowledge extraction, decision support, multimodal integration, and human-AI collaboration.


Research areas include prompt engineering, retrieval-augmented generation, domain-specific fine-tuning, interpretability, ethical and trustworthy AI, multilingual processing, and cross-disciplinary applications of LLMs. This Special Issue emphasizes both theoretical advancements in applied NLP with LLMs and practical case studies that demonstrate their impact across industries and sciences.


Potential application areas span engineering, healthcare, education, business, law, and the social sciences, including medical decision support, automated report generation, educational technology, legal document analysis, business intelligence, smart governance, and more.


This Special Issue aims to gather the latest research results and solutions in applied NLP with LLMs. Both methodological and application-driven studies are welcome, particularly those with technical depth or emerging cross-domain applications. Topics of interest include, but are not limited to:


· Prompt Engineering and Optimization
· Retrieval-Augmented Generation (RAG)
· Domain-Specific Fine-Tuning of LLMs
· Multilingual NLP with LLMs
· Interpretability and Explainability of LLMs
· Ethical, Responsible, and Trustworthy NLP Systems
· Human-AI Collaboration in Text Understanding and Generation
· Knowledge Graph Integration with LLMs
· LLM Applications
· Multimodal Applications with LLMs
· Cross-Domain Case Studies
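To make one of the listed topics concrete, retrieval-augmented generation (RAG) retrieves documents relevant to a query and prepends them as context to an LLM prompt. The sketch below uses a toy bag-of-words retriever and an illustrative prompt template; the corpus, scoring function, and template are assumptions for demonstration, not a system described in this Special Issue.

```python
# Minimal RAG sketch: retrieve top-k documents for a query, then assemble
# an augmented prompt for an LLM. All names and data here are illustrative.
from collections import Counter
import math

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k corpus documents ranked by similarity to the query."""
    return sorted(corpus, key=lambda d: bow_cosine(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context to the question (the 'augmentation' step)."""
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Urdu is a low-resource language for NLP research.",
    "Text clustering groups similar documents together.",
    "Large language models can improve clustering quality.",
]
prompt = build_prompt("How can clustering help low-resource NLP?", corpus)
```

In a production system the bag-of-words retriever would typically be replaced by dense embeddings and a vector index, and the assembled prompt would be sent to an LLM for generation; this sketch only shows the retrieve-then-augment pattern itself.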


Keywords

large language models, natural language processing, prompt engineering, retrieval-augmented generation, domain adaptation, human-AI collaboration, multimodal AI.

Published Papers


  • Open Access

    ARTICLE

    LLM-Based Enhanced Clustering for Low-Resource Language: An Empirical Study

    Talha Farooq Khan, Majid Hussain, Muhammad Arslan, Muhammad Saeed, Lal Khan, Hsien-Tsung Chang
    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 3883-3911, 2025, DOI:10.32604/cmes.2025.073021
    (This article belongs to the Special Issue: Applied NLP with Large Language Models: AI Applications Across Domains)
    Abstract Text clustering is an important task because of its vital role in NLP-related tasks. However, existing research on clustering is mainly based on the English language, with limited work on low-resource languages, such as Urdu. Low-resource language text clustering faces many obstacles, including limited annotated collections and strong linguistic diversity. The primary aim of this paper is twofold: (1) to introduce a clustering dataset named UNC-2025, comprising 100k Urdu news documents, and (2) to provide a detailed empirical benchmark of Large Language Model (LLM)-enhanced clustering methods for Urdu text. We explicitly evaluate the…
