Search Results (131)
  • Open Access

    REVIEW

    Transforming Healthcare with State-of-the-Art Medical-LLMs: A Comprehensive Evaluation of Current Advances Using Benchmarking Framework

    Himadri Nath Saha1, Dipanwita Chakraborty Bhattacharya2,*, Sancharita Dutta3, Arnab Bera3, Srutorshi Basuray4, Satyasaran Changdar5, Saptarshi Banerjee6, Jon Turdiev7

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-56, 2026, DOI:10.32604/cmc.2025.070507 - 09 December 2025

    Abstract The emergence of Medical Large Language Models (Med-LLMs) has significantly transformed healthcare. These models serve as transformative tools that enhance clinical practice through applications in decision support, documentation, and diagnostics. This evaluation examines the performance of leading Med-LLMs, including GPT-4Med, Med-PaLM, MEDITRON, PubMedGPT, and MedAlpaca, across diverse medical datasets. It provides graphical comparisons of their effectiveness in distinct healthcare domains. The study introduces a domain-specific categorization system that aligns these models with optimal applications in clinical decision-making, documentation, drug discovery, research, patient interaction, and public health. The paper addresses deployment challenges of Medical-LLMs, More >

  • Open Access

    ARTICLE

    Individual Software Expertise Formalization and Assessment from Project Management Tool Databases

    Traian-Radu Ploscă1,*, Alexandru-Mihai Pescaru2, Bianca-Valeria Rus1, Daniel-Ioan Curiac1,*

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-23, 2026, DOI:10.32604/cmc.2025.069707 - 10 November 2025

    Abstract Objective expertise evaluation of individuals, as a prerequisite stage for team formation, has been a long-term desideratum in large software development companies. With the rapid advancements in machine learning methods, based on reliable existing data stored in project management tools’ datasets, automating this evaluation process becomes a natural step forward. In this context, our approach focuses on quantifying software developer expertise by using metadata from the task-tracking systems. For this, we mathematically formalize two categories of expertise: technology-specific expertise, which denotes the skills required for a particular technology, and general expertise, which encapsulates overall knowledge More >

  • Open Access

    ARTICLE

    LLM-KE: An Ontology-Aware LLM Methodology for Military Domain Knowledge Extraction

    Yu Tao1, Ruopeng Yang1,2, Yongqi Wen1,*, Yihao Zhong1, Kaige Jiao1, Xiaolei Gu1,2

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-17, 2026, DOI:10.32604/cmc.2025.068670 - 10 November 2025

    Abstract Since Google introduced the concept of Knowledge Graphs (KGs) in 2012, their construction technologies have evolved into a comprehensive methodological framework encompassing knowledge acquisition, extraction, representation, modeling, fusion, computation, and storage. Within this framework, knowledge extraction, as the core component, directly determines KG quality. In military domains, traditional manual curation models face efficiency constraints due to data fragmentation, complex knowledge architectures, and confidentiality protocols. Meanwhile, crowdsourced ontology construction approaches from general domains prove non-transferable, while human-crafted ontologies struggle with generalization deficiencies. To address these challenges, this study proposes an Ontology-Aware LLM Methodology for Military Domain More >

  • Open Access

    ARTICLE

    LinguTimeX: A Framework for Multilingual CTC Detection Using Explainable AI and Natural Language Processing

    Omar Darwish1, Shorouq Al-Eidi2, Abdallah Al-Shorman1, Majdi Maabreh3, Anas Alsobeh4, Plamen Zahariev5, Yahya Tashtoush6,*

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-21, 2026, DOI:10.32604/cmc.2025.068266 - 10 November 2025

    Abstract Covert timing channels (CTC) exploit network resources to establish hidden communication pathways, posing significant risks to data security and policy compliance. Detecting such hidden and dangerous threats therefore remains a key security challenge. This paper proposes LinguTimeX, a new framework that combines natural language processing with Artificial Intelligence (AI) and explainable AI, not only to detect CTC but also to provide insights into the decision process. LinguTimeX performs multidimensional feature extraction by fusing linguistic attributes with temporal network patterns to identify covert channels precisely. LinguTimeX demonstrates strong effectiveness in detecting CTC across… More >

  • Open Access

    ARTICLE

    A Keyword-Guided Training Approach to Large Language Models for Judicial Document Generation

    Yi-Ting Peng1,*, Chin-Laung Lei2

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 3969-3992, 2025, DOI:10.32604/cmes.2025.073258 - 23 December 2025

    Abstract The rapid advancement of Large Language Models (LLMs) has enabled their application in diverse professional domains, including law. However, research on automatic judicial document generation remains limited, particularly for Taiwanese courts. This study proposes a keyword-guided training framework that enhances LLMs’ ability to generate structured and semantically coherent judicial decisions in Chinese. The proposed method first employs LLMs to extract representative legal keywords from absolute court judgments. Then it integrates these keywords into Supervised Fine-Tuning (SFT) and Reinforcement Learning with Human Feedback using Proximal Policy Optimization (RLHF-PPO). Experimental evaluations using models such as Chinese Alpaca More >

  • Open Access

    ARTICLE

    LLM-Based Enhanced Clustering for Low-Resource Language: An Empirical Study

    Talha Farooq Khan1, Majid Hussain1, Muhammad Arslan2, Muhammad Saeed1, Lal Khan3,*, Hsien-Tsung Chang4,5,6,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 3883-3911, 2025, DOI:10.32604/cmes.2025.073021 - 23 December 2025

    Abstract Text clustering is an important task because of its vital role in NLP-related applications. However, existing research on clustering is mainly based on the English language, with limited work on low-resource languages such as Urdu. Text clustering for low-resource languages faces major challenges in the form of limited annotated collections and strong linguistic diversity. The primary aim of this paper is twofold: (1) introducing a clustering dataset named UNC-2025, comprising 100k Urdu news documents, and (2) providing a detailed empirical benchmark of Large Language Model (LLM)-enhanced clustering methods for Urdu text. We explicitly evaluate the… More >

  • Open Access

    REVIEW

    Deep Learning in Medical Image Analysis: A Comprehensive Review of Algorithms, Trends, Applications, and Challenges

    Dawa Chyophel Lepcha1,*, Bhawna Goyal2,3, Ayush Dogra4, Ahmed Alkhayyat5, Prabhat Kumar Sahu6, Aaliya Ali7, Vinay Kukreja4

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.2, pp. 1487-1573, 2025, DOI:10.32604/cmes.2025.070964 - 26 November 2025

    Abstract Medical image analysis has become a cornerstone of modern healthcare, driven by the exponential growth of data from imaging modalities such as MRI, CT, PET, ultrasound, and X-ray. Traditional machine learning methods have made early contributions; however, recent advancements in deep learning (DL) have revolutionized the field, offering state-of-the-art performance in image classification, segmentation, detection, fusion, registration, and enhancement. This comprehensive review presents an in-depth analysis of deep learning methodologies applied across medical image analysis tasks, highlighting both foundational models and recent innovations. The article begins by introducing conventional techniques and their limitations, setting the… More >

  • Open Access

    ARTICLE

    Why Transformers Outperform LSTMs: A Comparative Study on Sarcasm Detection

    Palak Bari, Gurnur Bedi, Khushi Joshi, Anupama Jawale*

    Journal on Artificial Intelligence, Vol.7, pp. 499-508, 2025, DOI:10.32604/jai.2025.072531 - 17 November 2025

    Abstract This study investigates sarcasm detection in text using a dataset of 8095 sentences compiled from MUStARD and HuggingFace repositories, balanced across sarcastic and non-sarcastic classes. A sequential baseline model (LSTM) is compared with transformer-based models (RoBERTa and XLNet), integrated with attention mechanisms. Transformers were chosen for their proven ability to capture long-range contextual dependencies, whereas LSTM serves as a traditional benchmark for sequential modeling. Experimental results show that RoBERTa achieves 0.87 accuracy, XLNet 0.83, and LSTM 0.52. These findings confirm that transformer architectures significantly outperform recurrent models in sarcasm detection. Future work will incorporate multimodal More >

  • Open Access

    REVIEW

    Enhancing Security in Large Language Models: A Comprehensive Review of Prompt Injection Attacks and Defenses

    Eleena Sarah Mathew*

    Journal on Artificial Intelligence, Vol.7, pp. 347-363, 2025, DOI:10.32604/jai.2025.069841 - 06 October 2025

    Abstract This review paper explores advanced methods to prompt Large Language Models (LLMs) into generating objectionable or unintended behaviors through adversarial prompt injection attacks. We examine a series of novel projects like HOUYI, Robustly Aligned LLM (RA-LLM), StruQ, and Virtual Prompt Injection that compel LLMs to produce affirmative responses to harmful queries. Several new benchmarks, such as PromptBench, AdvBench, AttackEval, INJECAGENT, and RobustnessSuite, have been created to evaluate the performance and resilience of LLMs against these adversarial attacks. Results show significant success rates in misleading models like Vicuna-7B, LLaMA-2-7B-Chat, GPT-3.5, and GPT-4. The review highlights limitations… More >

  • Open Access

    REVIEW

    Natural Language Processing with Transformer-Based Models: A Meta-Analysis

    Charles Munyao*, John Ndia

    Journal on Artificial Intelligence, Vol.7, pp. 329-346, 2025, DOI:10.32604/jai.2025.069226 - 22 September 2025

    Abstract The natural language processing (NLP) domain has witnessed significant advancements with the emergence of transformer-based models, which have reshaped the text understanding and generation landscape. While their capabilities are well recognized, there remains a limited systematic synthesis of how these models perform across tasks, scale efficiently, adapt to domains, and address ethical challenges. Therefore, the aim of this paper was to analyze the performance of transformer-based models across various NLP tasks, their scalability, domain adaptation, and the ethical implications of such models. This meta-analysis paper synthesizes findings from 25 peer-reviewed studies on NLP transformer-based models,… More >

Displaying results 1-10 of 131 (page 1).