Uddagiri Sirisha1,*, Chanumolu Kiran Kumar2, Revathi Durgam3, Poluru Eswaraiah4, G Muni Nagamani5
CMC-Computers, Materials & Continua, Vol.83, No.3, pp. 4031-4059, 2025, DOI:10.32604/cmc.2025.063721
- 19 May 2025
Abstract: A complete examination of Large Language Models' strengths, problems, and applications is needed due to their rising use across disciplines. Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance, strengths, and weaknesses. This gap precludes identifying appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies. In this research, 50 studies on 25+ LLMs, including GPT-3, GPT-4, Claude 3.5, DeepKet, and hybrid multimodal frameworks such as ContextDET and GeoRSCLIP, are thoroughly reviewed. We propose an LLM application taxonomy, grouping techniques by task focus—healthcare,…