Search Results (16)
  • Open Access

    ARTICLE

    Integration of Large Language Models (LLMs) and Static Analysis for Improving the Efficacy of Security Vulnerability Detection in Source Code

    José Armando Santas Ciavatta, Juan Ramón Bermejo Higuera*, Javier Bermejo Higuera, Juan Antonio Sicilia Montalvo, Tomás Sureda Riera, Jesús Pérez Melero

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.074566 - 12 January 2026

    Abstract Artificial intelligence (AI) continues to expand rapidly, particularly with the emergence of generative pre-trained transformers (GPT) based on the transformer architecture, which have revolutionized data processing and enabled significant improvements in various applications. This document investigates the detection of security vulnerabilities in source code using a range of large language models (LLMs). Our primary objective is to evaluate the effectiveness of Static Application Security Testing (SAST) by applying various techniques such as prompt personas, structured outputs, and zero-shot prompting. For the selection of the LLMs (CodeLlama 7B, DeepSeek Coder 7B, Gemini 1.5 Flash,… More >

  • Open Access

    ARTICLE

    Beyond Accuracy: Evaluating and Explaining the Capability Boundaries of Large Language Models in Syntax-Preserving Code Translation

    Yaxin Zhao1, Qi Han2, Hui Shu2, Yan Guang2,*

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-24, 2026, DOI:10.32604/cmc.2025.070511 - 09 December 2025

    Abstract Large Language Models (LLMs) are increasingly applied in the field of code translation. However, existing evaluation methodologies suffer from two major limitations: (1) the high overlap between test data and pretraining corpora, which introduces significant bias in performance evaluation; and (2) mainstream metrics focus primarily on surface-level accuracy, failing to uncover the underlying factors that constrain model capabilities. To address these issues, this paper presents TCode (Translation-Oriented Code Evaluation benchmark)—a complexity-controllable, contamination-free benchmark dataset for code translation—alongside a dedicated static feature sensitivity evaluation framework. The dataset is carefully designed to control complexity along multiple dimensions—including syntactic… More >

  • Open Access

    REVIEW

    Transforming Healthcare with State-of-the-Art Medical-LLMs: A Comprehensive Evaluation of Current Advances Using Benchmarking Framework

    Himadri Nath Saha1, Dipanwita Chakraborty Bhattacharya2,*, Sancharita Dutta3, Arnab Bera3, Srutorshi Basuray4, Satyasaran Changdar5, Saptarshi Banerjee6, Jon Turdiev7

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-56, 2026, DOI:10.32604/cmc.2025.070507 - 09 December 2025

    Abstract The emergence of Medical Large Language Models (Med-LLMs) has significantly transformed healthcare. These models serve as transformative tools that enhance clinical practice through applications in decision support, documentation, and diagnostics. This evaluation examines the performance of leading Med-LLMs, including GPT-4Med, Med-PaLM, MEDITRON, PubMedGPT, and MedAlpaca, across diverse medical datasets, and provides graphical comparisons of their effectiveness in distinct healthcare domains. The study introduces a domain-specific categorization system that aligns these models with optimal applications in clinical decision-making, documentation, drug discovery, research, patient interaction, and public health. The paper addresses deployment challenges of Med-LLMs, More >

  • Open Access

    ARTICLE

    PhishNet: A Real-Time, Scalable Ensemble Framework for Smishing Attack Detection Using Transformers and LLMs

    Abeer Alhuzali1,*, Qamar Al-Qahtani1, Asmaa Niyazi1, Lama Alshehri1, Fatemah Alharbi2

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-19, 2026, DOI:10.32604/cmc.2025.069491 - 10 November 2025

    Abstract The surge in smishing attacks underscores the urgent need for robust, real-time detection systems powered by advanced deep learning models. This paper introduces PhishNet, a novel ensemble learning framework that integrates transformer-based models (RoBERTa) and large language models (LLMs) (GPT-OSS 120B, LLaMA3.3 70B, and Qwen3 32B) to enhance smishing detection performance significantly. To mitigate class imbalance, we apply synthetic data augmentation using T5 and leverage various text preprocessing techniques. Our system employs a dual-layer voting mechanism: weighted majority voting among LLMs and a final ensemble vote to classify messages as ham, spam, or smishing. Experimental More >

  • Open Access

    ARTICLE

    LLM-Based Enhanced Clustering for Low-Resource Language: An Empirical Study

    Talha Farooq Khan1, Majid Hussain1, Muhammad Arslan2, Muhammad Saeed1, Lal Khan3,*, Hsien-Tsung Chang4,5,6,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 3883-3911, 2025, DOI:10.32604/cmes.2025.073021 - 23 December 2025

    Abstract Text clustering is an important task because of its vital role in NLP-related applications. However, existing research on clustering is based mainly on English, with limited work on low-resource languages such as Urdu. Text clustering for low-resource languages faces challenges such as limited annotated collections and strong linguistic diversity. The primary aim of this paper is twofold: (1) to introduce a clustering dataset named UNC-2025 comprising 100k Urdu news documents, and (2) to provide a detailed empirical benchmark of Large Language Model (LLM)-enhanced clustering methods for Urdu text. We explicitly evaluate the… More >

  • Open Access

    ARTICLE

    Image Enhancement Combined with LLM Collaboration for Low-Contrast Image Character Recognition

    Qin Qin1, Xuan Jiang1,*, Jinhua Jiang1, Dongfang Zhao1, Zimei Tu1, Zhiwei Shen2

    CMC-Computers, Materials & Continua, Vol.85, No.3, pp. 4849-4867, 2025, DOI:10.32604/cmc.2025.067919 - 23 October 2025

    Abstract The effectiveness of industrial character recognition on cast steel is often compromised by factors such as corrosion, surface defects, and low contrast, which hinder the extraction of reliable visual information. The problem is further compounded by the scarcity of large-scale annotated datasets and complex noise patterns in real-world factory environments. This makes conventional OCR techniques and standard deep learning models unreliable. To address these limitations, this study proposes a unified framework that integrates adaptive image preprocessing with collaborative reasoning among LLMs. A Biorthogonal 4.4 (bior4.4) wavelet transform is adaptively tuned using DE to enhance character… More >

  • Open Access

    REVIEW

    Enhancing Security in Large Language Models: A Comprehensive Review of Prompt Injection Attacks and Defenses

    Eleena Sarah Mathew*

    Journal on Artificial Intelligence, Vol.7, pp. 347-363, 2025, DOI:10.32604/jai.2025.069841 - 06 October 2025

    Abstract This review paper explores advanced methods to prompt Large Language Models (LLMs) into generating objectionable or unintended behaviors through adversarial prompt injection attacks. We examine a series of novel projects like HOUYI, Robustly Aligned LLM (RA-LLM), StruQ, and Virtual Prompt Injection that compel LLMs to produce affirmative responses to harmful queries. Several new benchmarks, such as PromptBench, AdvBench, AttackEval, INJECAGENT, and RobustnessSuite, have been created to evaluate the performance and resilience of LLMs against these adversarial attacks. Results show significant success rates in misleading models like Vicuna-7B, LLaMA-2-7B-Chat, GPT-3.5, and GPT-4. The review highlights limitations… More >

  • Open Access

    ARTICLE

    Redefining the Programmer: Human-AI Collaboration, LLMs, and Security in Modern Software Engineering

    Elyson De La Cruz*, Hanh Le, Karthik Meduri, Geeta Sandeep Nadella*, Hari Gonaygunta

    CMC-Computers, Materials & Continua, Vol.85, No.2, pp. 3569-3582, 2025, DOI:10.32604/cmc.2025.068137 - 23 September 2025

    Abstract The rapid integration of artificial intelligence (AI) into software development, driven by large language models (LLMs), is reshaping the role of programmers from traditional coders into strategic collaborators within Industry 4.0 ecosystems. This qualitative study employs a hermeneutic phenomenological approach to explore the lived experiences of Information Technology (IT) professionals as they navigate a dynamic technological landscape marked by intelligent automation, shifting professional identities, and emerging ethical concerns. Findings indicate that developers are actively adapting to AI-augmented environments by engaging in continuous upskilling, prompt engineering, interdisciplinary collaboration, and heightened ethical awareness. However, participants also voiced… More >

  • Open Access

    ARTICLE

    Interpretable Vulnerability Detection in LLMs: A BERT-Based Approach with SHAP Explanations

    Nouman Ahmad*, Changsheng Zhang

    CMC-Computers, Materials & Continua, Vol.85, No.2, pp. 3321-3334, 2025, DOI:10.32604/cmc.2025.067044 - 23 September 2025

    Abstract Source code vulnerabilities present significant security threats, necessitating effective detection techniques. Rigid rule-sets and pattern matching are the foundation of traditional static analysis tools, which drown developers in false positives and miss context-sensitive vulnerabilities. Large Language Models (LLMs) like BERT, in particular, are examples of artificial intelligence (AI) that exhibit promise but frequently lack transparency. In order to overcome the issues with model interpretability, this work suggests a BERT-based LLM strategy for vulnerability detection that incorporates Explainable AI (XAI) methods like SHAP and attention heatmaps. Furthermore, to ensure auditable and comprehensible choices, we present a… More >

  • Open Access

    REVIEW

    Beyond Intentions: A Critical Survey of Misalignment in LLMs

    Yubin Qu1,2, Song Huang2,*, Long Li3, Peng Nie2, Yongming Yao2

    CMC-Computers, Materials & Continua, Vol.85, No.1, pp. 249-300, 2025, DOI:10.32604/cmc.2025.067750 - 29 August 2025

    Abstract Large language models (LLMs) represent significant advancements in artificial intelligence. However, their increasing capabilities come with a serious challenge: misalignment, which refers to the deviation of model behavior from the designers’ intentions and human values. This review aims to synthesize the current understanding of the LLM misalignment issue and provide researchers and practitioners with a comprehensive overview. We define the concept of misalignment and elaborate on its various manifestations, including generating harmful content, factual errors (hallucinations), propagating biases, failing to follow instructions, emerging deceptive behaviors, and emergent misalignment. We explore the multifaceted causes of misalignment,… More >

Displaying 1-10 on page 1 of 16.