Special Issues

Enhancing AI Applications through NLP and LLM Integration

Submission Deadline: 30 September 2025

Guest Editors

Assoc. Prof. Jia-Wei Chang 

Email: jwchang@nutc.edu.tw

Affiliation: National Taichung University of Science and Technology, Taichung, 400, Taiwan

Homepage:

Research Interests: natural language processing, internet of things, artificial intelligence, data mining, and e-learning technologies



Assoc. Prof. Chih-Chieh Hung

Email: smalloshin@nchu.edu.tw

Affiliation: National Chung Hsing University, Taichung, 400, Taiwan

Homepage:

Research Interests: intelligent traffic systems, AI-empowered design, fintech, spatiotemporal databases, big data analytics, deep reinforcement learning, data mining, and artificial intelligence



Summary

The rapid evolution of artificial intelligence (AI) has been significantly propelled by advances in Natural Language Processing (NLP) and the development of Large Language Models (LLMs). With its focus on algorithmic modeling and language-specific tasks, NLP has been instrumental in refining AI’s ability to understand and process human language, while LLMs, pre-trained on vast datasets, have brought unprecedented contextual understanding and generative capabilities. As these two fields mature, their integration presents an unparalleled opportunity: by combining NLP’s precision in handling specific language tasks with LLMs’ expansive contextual understanding, AI systems can tackle more complex tasks with greater accuracy, efficiency, and adaptability.


This special issue seeks to explore the synergistic potential of combining NLP and LLM technologies to push the boundaries of what AI can achieve across various industries. We invite researchers, practitioners, and industry experts to submit original research papers on topics including, but not limited to, the following:

• Enhanced Accuracy and Contextual Understanding: Explore how integrating NLP and LLMs improves accuracy in language tasks like understanding, generation, and translation.

• Resource Optimization: Examine strategies that leverage NLP and LLM integration to reduce computational resource requirements and deliver more efficient solutions.

• Flexibility and Adaptability in AI Applications: Investigate how combining NLP and LLMs enhances AI systems' flexibility and responsiveness to evolving needs.

• Real-World Integration and Case Studies: Case studies of NLP and LLM integrations deployed in sectors such as healthcare, highlighting improvements in performance and user satisfaction.

• Ethical Considerations and Bias Mitigation: Research on addressing ethical challenges and mitigating biases in AI systems that integrate NLP and LLMs, ensuring fairness and transparency in AI-driven decisions.

• Cross-Language and Multilingual Applications: Studies on the application of NLP and LLM integration in multilingual settings, improving cross-language understanding and translation.

• User-Centric AI Interactions: Research on how NLP and LLM integration can be leveraged to create more intuitive and user-friendly AI interfaces, improving user experience and engagement.

• Security and Privacy in NLP and LLM Integration: Studies on ensuring the security and privacy of data in AI systems that integrate NLP and LLMs, particularly in sensitive applications like healthcare and finance.

• Innovations in Language Understanding for IoT and Edge Computing: Research on applying NLP and LLM integration to IoT and edge computing environments, enhancing language understanding in decentralized systems.

• Future Directions and Predictive Studies: Explore the future potential of NLP and LLMs, focusing on advancements in AI assistants, content creation, and robotics.


Keywords

NLP, LLMs, Resource Optimization, AI, IoT and Edge Computing

Published Papers


  • Open Access

    ARTICLE

    Upholding Academic Integrity amidst Advanced Language Models: Evaluating BiLSTM Networks with GloVe Embeddings for Detecting AI-Generated Scientific Abstracts

    Lilia-Eliana Popescu-Apreutesei, Mihai-Sorin Iosupescu, Sabina Cristiana Necula, Vasile-Daniel Păvăloaia
    CMC-Computers, Materials & Continua, DOI:10.32604/cmc.2025.064747
    (This article belongs to the Special Issue: Enhancing AI Applications through NLP and LLM Integration)
    Abstract The increasing fluency of advanced language models, such as GPT-3.5, GPT-4, and the recently introduced DeepSeek, challenges the ability to distinguish between human-authored and AI-generated academic writing. This situation is raising significant concerns regarding the integrity and authenticity of academic work. In light of the above, the current research evaluates the effectiveness of Bidirectional Long Short-Term Memory (BiLSTM) networks enhanced with pre-trained GloVe (Global Vectors for Word Representation) embeddings to detect AI-generated scientific abstracts drawn from the AI-GA (Artificial Intelligence Generated Abstracts) dataset. Two core BiLSTM variants were assessed: a single-layer approach and a dual-layer…
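
    As a rough illustration of the technique summarized in this abstract, the sketch below wires a single-layer BiLSTM classifier on top of frozen GloVe embeddings in Keras. It is a minimal sketch, not the authors' implementation: the vocabulary size, sequence length, layer widths, and the placeholder embedding matrix are all assumptions, and a real run would load actual GloVe vectors and the AI-GA token sequences.

        # Minimal BiLSTM + GloVe sketch (illustrative; not the paper's code or hyperparameters).
        import numpy as np
        from tensorflow.keras import layers, models, initializers

        vocab_size, embed_dim, max_len = 20000, 100, 300         # illustrative sizes
        embedding_matrix = np.zeros((vocab_size, embed_dim))     # stand-in; load real GloVe vectors here

        model = models.Sequential([
            layers.Input(shape=(max_len,)),                      # padded token-id sequences
            layers.Embedding(vocab_size, embed_dim,
                             embeddings_initializer=initializers.Constant(embedding_matrix),
                             trainable=False),                   # frozen pre-trained embeddings
            layers.Bidirectional(layers.LSTM(64)),               # the single-layer BiLSTM variant
            layers.Dropout(0.3),
            layers.Dense(1, activation="sigmoid"),               # 0 = human-written, 1 = AI-generated
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        # model.fit(X_train, y_train, validation_split=0.1, epochs=5, batch_size=64)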

  • Open Access

    ARTICLE

    A Convolutional Neural Network Based Optical Character Recognition for Purely Handwritten Characters and Digits

    Syed Atir Raza, Muhammad Shoaib Farooq, Uzma Farooq, Hanen Karamti, Tahir Khurshaid, Imran Ashraf
    CMC-Computers, Materials & Continua, DOI:10.32604/cmc.2025.063255
    (This article belongs to the Special Issue: Enhancing AI Applications through NLP and LLM Integration)
    Abstract Urdu, a prominent subcontinental language, serves as a versatile means of communication. However, its handwritten expressions present challenges for optical character recognition (OCR). While various OCR techniques have been proposed, most of them focus on recognizing printed Urdu characters and digits. To the best of our knowledge, very little research has focused solely on Urdu pure handwriting recognition, and the results of such proposed methods are often inadequate. In this study, we introduce a novel approach to recognizing Urdu pure handwritten digits and characters using Convolutional Neural Networks (CNN). Our proposed method utilizes convolutional layers…
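
    For readers unfamiliar with this family of models, the sketch below defines a small convolutional classifier for grayscale character images. It is a generic illustration only: the 64x64 input size, layer depths, and placeholder class count are assumptions and do not reflect the authors' architecture or the Urdu dataset used in the paper.

        # Generic CNN classifier sketch for handwritten character/digit images
        # (illustrative; not the authors' architecture).
        from tensorflow.keras import layers, models

        num_classes = 50                              # placeholder for the character/digit set size

        model = models.Sequential([
            layers.Input(shape=(64, 64, 1)),          # assumed grayscale 64x64 crops
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.4),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])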

  • Open Access

    ARTICLE

    Large Language Model in Healthcare for the Prediction of Genetic Variants from Unstructured Text Medicine Data Using Natural Language Processing

    Noor Ayesha, Muhammad Mujahid, Abeer Rashad Mirdad, Faten S. Alamri, Amjad R. Khan
    CMC-Computers, Materials & Continua, Vol.84, No.1, pp. 1883-1899, 2025, DOI:10.32604/cmc.2025.063560
    (This article belongs to the Special Issue: Enhancing AI Applications through NLP and LLM Integration)
    Abstract Large language models (LLMs) and natural language processing (NLP) have significant promise to improve efficiency and refine healthcare decision-making and clinical results. Numerous domains, including healthcare, are rapidly adopting LLMs for the classification of biomedical textual data in medical research. The LLM can derive insights from intricate, extensive, unstructured training data. Variants need to be accurately identified and classified to advance genetic research, provide individualized treatment, and assist physicians in making better choices. However, the sophisticated and perplexing language of medical reports is often beyond the capabilities of the devices we now utilize. Such an…
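
    The general recipe implied by this abstract can be pictured as fine-tuning a pre-trained biomedical encoder to classify variant-related categories from free-text reports. The sketch below uses the Hugging Face transformers API; the checkpoint name, label count, and toy training rows are assumptions for illustration, not the paper's dataset or configuration.

        # Illustrative fine-tuning sketch (not the authors' setup): classify
        # genetic-variant categories from unstructured medical report text.
        from datasets import Dataset
        from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                                  Trainer, TrainingArguments)

        model_name = "dmis-lab/biobert-base-cased-v1.1"          # assumed biomedical encoder
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

        # Toy placeholder rows; a real run would use a labelled clinical-report corpus.
        train_ds = Dataset.from_dict({
            "text": ["Report describes a frameshift deletion in BRCA1.",
                     "No clinically significant variant was identified."],
            "label": [1, 0],
        }).map(lambda batch: tokenizer(batch["text"], truncation=True,
                                       padding="max_length", max_length=256),
               batched=True)

        trainer = Trainer(
            model=model,
            args=TrainingArguments(output_dir="variant-clf", num_train_epochs=3,
                                   per_device_train_batch_size=8),
            train_dataset=train_ds,
        )
        trainer.train()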

  • Open Access

    REVIEW

    An Analytical Review of Large Language Models Leveraging KDGI Fine-Tuning, Quantum Embedding’s, and Multimodal Architectures

    Uddagiri Sirisha, Chanumolu Kiran Kumar, Revathi Durgam, Poluru Eswaraiah, G Muni Nagamani
    CMC-Computers, Materials & Continua, Vol.83, No.3, pp. 4031-4059, 2025, DOI:10.32604/cmc.2025.063721
    (This article belongs to the Special Issue: Enhancing AI Applications through NLP and LLM Integration)
    Abstract A complete examination of Large Language Models’ strengths, problems, and applications is needed due to their rising use across disciplines. Current studies frequently focus on single-use situations and lack a comprehensive understanding of LLM architectural performance, strengths, and weaknesses. This gap precludes finding the appropriate models for task-specific applications and limits awareness of emerging LLM optimization and deployment strategies. In this research, 50 studies on 25+ LLMs, including GPT-3, GPT-4, Claude 3.5, DeepKet, and hybrid multimodal frameworks like ContextDET and GeoRSCLIP, are thoroughly reviewed. We propose LLM application taxonomy by grouping techniques by task focus—healthcare,…

  • Open Access

    ARTICLE

    Optimizing Airline Review Sentiment Analysis: A Comparative Analysis of LLaMA and BERT Models through Fine-Tuning and Few-Shot Learning

    Konstantinos I. Roumeliotis, Nikolaos D. Tselikas, Dimitrios K. Nasiopoulos
    CMC-Computers, Materials & Continua, Vol.82, No.2, pp. 2769-2792, 2025, DOI:10.32604/cmc.2025.059567
    (This article belongs to the Special Issue: Enhancing AI Applications through NLP and LLM Integration)
    Abstract In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the Large Language Model (LLM) LLaMA model and NLP BERT model, in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expressions in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of Large Language Model Meta AI (LLaMA) and Bidirectional Encoder…
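
    The few-shot side of this comparison can be pictured with the sketch below, which prepends two labelled airline reviews to the prompt and asks a generative model for the label of a new review. The checkpoint name, prompt wording, and decoding settings are assumptions for illustration, not the configuration evaluated in the paper.

        # Few-shot sentiment classification sketch (illustrative; not the paper's setup).
        from transformers import pipeline

        generator = pipeline("text-generation",
                             model="meta-llama/Llama-2-7b-chat-hf")    # assumed checkpoint

        FEW_SHOT = (
            "Classify the sentiment of each airline review as positive or negative.\n"
            'Review: "The crew was friendly and boarding was quick." Sentiment: positive\n'
            'Review: "My luggage was lost and nobody helped." Sentiment: negative\n'
            'Review: "{review}" Sentiment:'
        )

        def classify(review: str) -> str:
            prompt = FEW_SHOT.format(review=review)
            out = generator(prompt, max_new_tokens=3, do_sample=False,
                            return_full_text=False)[0]["generated_text"]
            words = out.strip().split()
            return words[0] if words else "unknown"                    # expected: "positive" / "negative"

        print(classify("Seats were cramped and the flight was delayed for hours."))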
