Open Access

REVIEW

An Analytical Review of Large Language Models Leveraging KDGI Fine-Tuning, Quantum Embeddings, and Multimodal Architectures

Uddagiri Sirisha1,*, Chanumolu Kiran Kumar2, Revathi Durgam3, Poluru Eswaraiah4, G Muni Nagamani5

1 Department of Computer Science and Engineering, Prasad V Potluri Siddhartha Institute of Technology, Vijayawada, 520007, India
2 Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, 522302, India
3 Department of Computer Science and Engineering (Data Science), AVN Institute of Engineering and Technology, Hyderabad, 501510, India
4 Department of Computer Science and Engineering (Data Science), Vignan’s Institute of Management and Technology for Women, Hyderabad, 501301, India
5 Department of Computer Science and Engineering, Andhra Loyola Institute of Engineering and Technology, Vijayawada, 520008, India

* Corresponding Author: Uddagiri Sirisha. Email: email

(This article belongs to the Special Issue: Enhancing AI Applications through NLP and LLM Integration)

Computers, Materials & Continua 2025, 83(3), 4031-4059. https://doi.org/10.32604/cmc.2025.063721

Abstract

A comprehensive examination of the strengths, limitations, and applications of Large Language Models (LLMs) is needed given their rising use across disciplines. Current studies frequently focus on single-use cases and lack a holistic understanding of LLM architectural performance, strengths, and weaknesses. This gap hinders the selection of appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies. In this research, 50 studies covering more than 25 LLMs, including GPT-3, GPT-4, Claude 3.5, DeepKet, and hybrid multimodal frameworks such as ContextDET and GeoRSCLIP, are thoroughly reviewed. We propose a taxonomy of LLM applications by grouping techniques according to task focus: healthcare, chemistry, sentiment analysis, agent-based simulations, and multimodal integration. Advanced methods such as parameter-efficient tuning (LoRA), quantum-enhanced embeddings (DeepKet), retrieval-augmented generation (RAG), and safety-focused models (GalaxyGPT) are evaluated for dataset requirements, computational efficiency, and performance. Frameworks for ethical issues, data-limited hallucinations, and KDGI-enhanced fine-tuning, such as Woodpecker's post-remedy corrections, are highlighted. The scope, aims, and methods of the investigation are described first, followed by its principal findings. The work reveals that domain-specialized, fine-tuned LLMs employing RAG and quantum-enhanced embeddings perform better in context-heavy applications. In medical text normalization, ChatGPT-4 outperforms earlier models, while multimodal frameworks such as GeoRSCLIP improve remote sensing tasks. Parameter-efficient tuning techniques such as LoRA achieve comparable performance at minimal computational cost, demonstrating the need for adaptive models across domains. The review aims to identify the optimal domain-specific models, explain domain-specific fine-tuning, and present quantum and multimodal LLMs that address scalability and cross-domain challenges.
The resulting framework helps academics and practitioners select, adapt, and extend LLMs for diverse purposes. This work advances research on efficient, interpretable, and ethical LLM applications.
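To make the abstract's claim about parameter-efficient tuning concrete, the following is a minimal, illustrative sketch of the core LoRA idea (not code from the reviewed paper): a frozen weight matrix W is adapted as W + (alpha/r)·BA, where A and B are small trainable low-rank factors, so the trainable parameter count scales with the rank r rather than with the full matrix size. All dimensions and names here are hypothetical choices for illustration.

```python
import numpy as np

# Sketch of LoRA (Low-Rank Adaptation): adapt a frozen weight matrix W
# with a low-rank update (alpha/r) * B @ A, training only A and B.
rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 512, 512, 8, 16      # hypothetical sizes

W = rng.standard_normal((d_out, d_in))       # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01    # trainable low-rank factor
B = np.zeros((d_out, r))                     # zero-init: adapted model
                                             # starts identical to the base

def forward(x):
    """Adapted forward pass: frozen path plus scaled low-rank update."""
    return W @ x + (alpha / r) * (B @ (A @ x))

# Trainable parameters: r*(d_in + d_out) for LoRA vs d_in*d_out for
# full fine-tuning -- the source of the "minimal computing cost" claim.
full_params = d_in * d_out
lora_params = r * (d_in + d_out)
print(f"trainable: {lora_params} vs {full_params} "
      f"({100 * lora_params / full_params:.1f}%)")
```

With these toy dimensions LoRA trains about 3% of the parameters that full fine-tuning would, which is why the review reports near-parity performance at a fraction of the compute.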

Keywords

Large language models; quantum embeddings; fine-tuning techniques; multimodal architectures; ethical AI; scenarios

Cite This Article

APA Style
Sirisha, U., Kumar, C.K., Durgam, R., Eswaraiah, P., Nagamani, G.M. (2025). An Analytical Review of Large Language Models Leveraging KDGI Fine-Tuning, Quantum Embeddings, and Multimodal Architectures. Computers, Materials & Continua, 83(3), 4031–4059. https://doi.org/10.32604/cmc.2025.063721
Vancouver Style
Sirisha U, Kumar CK, Durgam R, Eswaraiah P, Nagamani GM. An Analytical Review of Large Language Models Leveraging KDGI Fine-Tuning, Quantum Embeddings, and Multimodal Architectures. Comput Mater Contin. 2025;83(3):4031–4059. https://doi.org/10.32604/cmc.2025.063721
IEEE Style
U. Sirisha, C. K. Kumar, R. Durgam, P. Eswaraiah, and G. M. Nagamani, “An Analytical Review of Large Language Models Leveraging KDGI Fine-Tuning, Quantum Embeddings, and Multimodal Architectures,” Comput. Mater. Contin., vol. 83, no. 3, pp. 4031–4059, 2025. https://doi.org/10.32604/cmc.2025.063721



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.