
Large Language Models: Foundations, Advances, and Emerging Applications

Submission Deadline: 31 January 2027

Guest Editor(s)

Dr. Shuai Zhao

Email: shuai.zhao@ntu.edu.sg

Affiliation: College of Computing and Data Science, Nanyang Technological University, Singapore

Homepage:

Research Interests: large language models (LLMs), model applications, model safety, and intelligent healthcare



Dr. Luwei Xiao

Email: luwei.xiao@nus.edu.sg

Affiliation: College of Design and Engineering, National University of Singapore, Singapore

Homepage:

Research Interests: large language models (LLMs), affective computing, multimodal learning, and AI for healthcare



Dr. Tiesunlong Shen

Email: tiesunlong@nus.edu.sg

Affiliation: National University of Singapore, Singapore

Homepage:

Research Interests: natural language processing, large language model (LLM) reasoning, reinforcement learning, graph mining, and AI agents



Dr. Jianzhu Bao

Email: jianzhu.bao@ntu.edu.sg

Affiliation: Nanyang Technological University, Singapore

Homepage:

Research Interests: large language models, computational argumentation, and multimodal learning



Summary

Large Language Models (LLMs) have rapidly become a transformative foundation across computational science, engineering, and intelligent systems. This Special Issue aims to bring together recent advances in model architectures, training paradigms, and scalable algorithms, alongside emerging directions in AI for Science, including but not limited to applications in vehicle scheduling, healthcare, and the physical sciences.

In parallel, the issue will highlight crucial topics in model robustness, security, interpretability, and responsible AI. By integrating methodological breakthroughs with domain-driven applications, this Special Issue offers a comprehensive platform for researchers to explore the potential, limitations, and future directions of LLM-powered computational technologies.

Suggested Subtopics:
· LLM architectures, training, and scalable algorithms
· Domain-specific applications of LLMs
· AI for Science in biomedicine and physical sciences
· Robustness, security, and reliability of LLMs
· Interpretability and responsible AI


Keywords

large language models, AI for science, model security & robustness, multimodal modeling, simulation and prediction
