Submission Deadline: 20 October 2026
Prof. Dr. Jehyeok Rew
Email: jhrew@duksung.ac.kr
Affiliation: Department of Data Science, Duksung Women's University, Seoul, Republic of Korea
Research Interests: large language models (LLMs), natural language processing (NLP), information retrieval (IR), retrieval-augmented generation (RAG), text mining & unstructured data analytics, trustworthy AI (robustness, reliability), evaluation & metrics for NLP/IR

Prof. Dr. Jihoon Moon
Email: jmoon25@duksung.ac.kr
Affiliation: Department of Data Science, Duksung Women's University, Seoul, Republic of Korea
Research Interests: trustworthy & explainable LLMs, explainable AI (XAI) for NLP/LLM systems, LLM-based decision support and analytics, RAG & knowledge grounding, model evaluation (calibration, PR/F1 variants) on unstructured data, responsible AI (auditing, transparency)

Prof. Dr. Hyeonwoo Kim
Email: hwkim24@sch.ac.kr
Affiliation: Department of Computer Science and Engineering, Soonchunhyang University, Asan, Republic of Korea
Research Interests: LLMs and generative AI, multimodal foundation models (vision–language, VLMs), robust and secure LLM systems, interpretability for generative/multimodal models, data-centric evaluation for unstructured data, real-world applications of NLP/generative models

Large language models (LLMs) are quickly emerging as a key computational backbone for working with natural language, powering modern search, large-scale text analytics, and decision support over vast collections of unstructured data. At the same time, when LLMs are introduced into data-intensive and engineering workflows, practical concerns such as explainability, reliability, and evaluation rigor often become the main bottlenecks to real deployment.
This Special Issue will collect recent progress on trustworthy and explainable LLMs, spanning both methodological advances and application-driven studies that align with CMC's interests in artificial intelligence and big data analytics. We invite submissions on (i) domain adaptation and efficient tuning strategies, (ii) grounding and verification approaches such as retrieval-augmented generation (RAG), (iii) robustness, safety, and security in LLM-based systems, and (iv) explainability methods that strengthen interpretability, transparency, and model auditing. In addition, we strongly encourage work on rigorous evaluation for unstructured-data tasks, covering improved metrics (including F1*-style variants), calibration, and systematic error analysis for transformer/LLM pipelines. We also welcome papers demonstrating credible LLM applications in areas such as information retrieval, GIS and text mining, industrial analytics, IoT/cyber logs, and scientific or engineering text workflows.
The suggested topics for this Special Issue include, but are not limited to:
· Efficient adaptation of LLMs (PEFT/LoRA, compression, edge/HPC deployment)
· Retrieval-augmented generation and knowledge grounding for reliable outputs
· Explainable LLMs (faithfulness, attribution, traceability, auditing)
· Rigorous evaluation for unstructured data (F1* variants, calibration, PR analysis)
· LLMs for information retrieval, search, and large-scale recommendation
· Security and privacy in LLM systems (prompt injection, data leakage, safe RAG)
· Domain-focused LLM applications (GIS, industrial analytics, IoT logs, scientific text mining)

