Generative Artificial Intelligence and Large Language Models: Methods, Architectures, and Applications

Submission Deadline: 31 July 2026

Guest Editors

Dr. Junaid Baber

Email: junaid.baber@univ-grenoble-alpes.fr

Affiliation: IMAG, University of Grenoble Alpes, Saint-Martin-d'Hères, France

Research Interests: AI, LLMs, machine learning


Assoc. Prof. Farhan Aadil

Email: farhan.aadil@sivas.edu.tr

Affiliation: Computer Engineering Department, Sivas University of Science and Technology, Sivas, Turkey

Research Interests: optimization, machine learning, AI-based applications


Summary

Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) are rapidly transforming the field of artificial intelligence. Advances in transformer-based architectures, foundation models, multimodal learning, and large-scale self-supervised training have enabled AI systems to generate high-quality text, images, audio, video, and code. These developments are reshaping research, industry, and society by enabling new forms of automation, creativity, decision-making, and human–AI collaboration.


This Special Issue aims to bring together cutting-edge research that advances the theoretical foundations, methodologies, architectures, evaluation strategies, and real-world deployment of generative AI and LLMs. We invite high-quality original research articles, reviews, and application-driven studies that explore emerging challenges and opportunities in this rapidly evolving domain.


Particular emphasis will be placed on robust, efficient, ethical, and scalable AI systems capable of addressing real-world problems across diverse domains.


Topics of Interest (including, but not limited to):
· Novel architectures for generative models and large language models
· Foundation models and large-scale pretraining strategies
· Efficient training, fine-tuning, and parameter-efficient adaptation methods
· Multimodal generative models (text–image–audio–video integration)
· Retrieval-augmented generation (RAG) and knowledge-enhanced LLMs
· Alignment, safety, and responsible AI in generative systems
· Explainability and interpretability of large-scale models
· Robustness, fairness, bias mitigation, and trustworthy AI
· Federated and distributed learning for generative models
· Edge deployment and resource-efficient LLMs
· AI agents and autonomous decision-making systems


Keywords

generative artificial intelligence, large language models (LLMs), foundation models, transformer architectures, multimodal learning, self-supervised learning, AI alignment and safety, retrieval-augmented generation (RAG), efficient model fine-tuning, intelligent systems applications, agentic AI, small language models (SLMs)
