Home / Advanced Search

  • Title/Keywords

  • Author/Affliations

  • Journal

  • Article Type

  • Start Year

  • End Year

Update SearchingClear
  • Articles
  • Online
Search Results (36)
  • Open Access

    ARTICLE

    Beyond Accuracy: Evaluating and Explaining the Capability Boundaries of Large Language Models in Syntax-Preserving Code Translation

    Yaxin Zhao1, Qi Han2, Hui Shu2, Yan Guang2,*

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-24, 2026, DOI:10.32604/cmc.2025.070511 - 09 December 2025

    Abstract Large Language Models (LLMs) are increasingly applied in the field of code translation. However, existing evaluation methodologies suffer from two major limitations: (1) the high overlap between test data and pretraining corpora, which introduces significant bias in performance evaluation; and (2) mainstream metrics focus primarily on surface-level accuracy, failing to uncover the underlying factors that constrain model capabilities. To address these issues, this paper presents TCode (Translation-Oriented Code Evaluation benchmark)—a complexity-controllable, contamination-free benchmark dataset for code translation—alongside a dedicated static feature sensitivity evaluation framework. The dataset is carefully designed to control complexity along multiple dimensions—including syntactic… More >

  • Open Access

    REVIEW

    Transforming Healthcare with State-of-the-Art Medical-LLMs: A Comprehensive Evaluation of Current Advances Using Benchmarking Framework

    Himadri Nath Saha1, Dipanwita Chakraborty Bhattacharya2,*, Sancharita Dutta3, Arnab Bera3, Srutorshi Basuray4, Satyasaran Changdar5, Saptarshi Banerjee6, Jon Turdiev7

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-56, 2026, DOI:10.32604/cmc.2025.070507 - 09 December 2025

    Abstract The emergence of Medical Large Language Models has significantly transformed healthcare. Medical Large Language Models (Med-LLMs) serve as transformative tools that enhance clinical practice through applications in decision support, documentation, and diagnostics. This evaluation examines the performance of leading Med-LLMs, including GPT-4Med, Med-PaLM, MEDITRON, PubMedGPT, and MedAlpaca, across diverse medical datasets. It provides graphical comparisons of their effectiveness in distinct healthcare domains. The study introduces a domain-specific categorization system that aligns these models with optimal applications in clinical decision-making, documentation, drug discovery, research, patient interaction, and public health. The paper addresses deployment challenges of Medical-LLMs, More >

  • Open Access

    ARTICLE

    PhishNet: A Real-Time, Scalable Ensemble Framework for Smishing Attack Detection Using Transformers and LLMs

    Abeer Alhuzali1,*, Qamar Al-Qahtani1, Asmaa Niyazi1, Lama Alshehri1, Fatemah Alharbi2

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-19, 2026, DOI:10.32604/cmc.2025.069491 - 10 November 2025

    Abstract The surge in smishing attacks underscores the urgent need for robust, real-time detection systems powered by advanced deep learning models. This paper introduces PhishNet, a novel ensemble learning framework that integrates transformer-based models (RoBERTa) and large language models (LLMs) (GPT-OSS 120B, LLaMA3.3 70B, and Qwen3 32B) to enhance smishing detection performance significantly. To mitigate class imbalance, we apply synthetic data augmentation using T5 and leverage various text preprocessing techniques. Our system employs a dual-layer voting mechanism: weighted majority voting among LLMs and a final ensemble vote to classify messages as ham, spam, or smishing. Experimental More >

  • Open Access

    ARTICLE

    CAPGen: An MLLM-Based Framework Integrated with Iterative Optimization Mechanism for Cultural Artifacts Poster Generation

    Qianqian Hu, Chuhan Li, Mohan Zhang, Fang Liu*

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-17, 2026, DOI:10.32604/cmc.2025.068225 - 10 November 2025

    Abstract Due to the digital transformation tendency among cultural institutions and the substantial influence of the social media platform, the demands of visual communication keep increasing for promoting traditional cultural artifacts online. As an effective medium, posters serve to attract public attention and facilitate broader engagement with cultural artifacts. However, existing poster generation methods mainly rely on fixed templates and manual design, which limits their scalability and adaptability to the diverse visual and semantic features of the artifacts. Therefore, we propose CAPGen, an automated aesthetic Cultural Artifacts Poster Generation framework built on a Multimodal Large Language More >

  • Open Access

    ARTICLE

    When Large Language Models and Machine Learning Meet Multi-Criteria Decision Making: Fully Integrated Approach for Social Media Moderation

    Noreen Fuentes1, Janeth Ugang1, Narcisan Galamiton1, Suzette Bacus1, Samantha Shane Evangelista2, Fatima Maturan2, Lanndon Ocampo2,3,*

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-26, 2026, DOI:10.32604/cmc.2025.068104 - 10 November 2025

    Abstract This study demonstrates a novel integration of large language models, machine learning, and multi-criteria decision-making to investigate self-moderation in small online communities, a topic under-explored compared to user behavior and platform-driven moderation on social media. The proposed methodological framework (1) utilizes large language models for social media post analysis and categorization, (2) employs k-means clustering for content characterization, and (3) incorporates the TODIM (Tomada de Decisão Interativa Multicritério) method to determine moderation strategies based on expert judgments. In general, the fully integrated framework leverages the strengths of these intelligent systems in a more systematic evaluation… More >

  • Open Access

    ARTICLE

    A Keyword-Guided Training Approach to Large Language Models for Judicial Document Generation

    Yi-Ting Peng1,*, Chin-Laung Lei2

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 3969-3992, 2025, DOI:10.32604/cmes.2025.073258 - 23 December 2025

    Abstract The rapid advancement of Large Language Models (LLMs) has enabled their application in diverse professional domains, including law. However, research on automatic judicial document generation remains limited, particularly for Taiwanese courts. This study proposes a keyword-guided training framework that enhances LLMs’ ability to generate structured and semantically coherent judicial decisions in Chinese. The proposed method first employs LLMs to extract representative legal keywords from absolute court judgments. Then it integrates these keywords into Supervised Fine-Tuning (SFT) and Reinforcement Learning with Human Feedback using Proximal Policy Optimization (RLHF-PPO). Experimental evaluations using models such as Chinese Alpaca More >

  • Open Access

    ARTICLE

    LLM-Based Enhanced Clustering for Low-Resource Language: An Empirical Study

    Talha Farooq Khan1, Majid Hussain1, Muhammad Arslan2, Muhammad Saeed1, Lal Khan3,*, Hsien-Tsung Chang4,5,6,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 3883-3911, 2025, DOI:10.32604/cmes.2025.073021 - 23 December 2025

    Abstract Text clustering is an important task because of its vital role in NLP-related tasks. However, existing research on clustering is mainly based on the English language, with limited work on low-resource languages, such as Urdu. Low-resource language text clustering has many drawbacks in the form of limited annotated collections and strong linguistic diversity. The primary aim of this paper is twofold: (1) By introducing a clustering dataset named UNC-2025 comprises 100k Urdu news documents, and (2) a detailed empirical standard of Large Language Model (LLM) improved clustering methods for Urdu text. We explicitly evaluate the… More >

  • Open Access

    REVIEW

    Binary Code Similarity Detection: Retrospective Review and Future Directions

    Shengjia Chang, Baojiang Cui*, Shaocong Feng

    CMC-Computers, Materials & Continua, Vol.85, No.3, pp. 4345-4374, 2025, DOI:10.32604/cmc.2025.070195 - 23 October 2025

    Abstract Binary Code Similarity Detection (BCSD) is vital for vulnerability discovery, malware detection, and software security, especially when source code is unavailable. Yet, it faces challenges from semantic loss, recompilation variations, and obfuscation. Recent advances in artificial intelligence—particularly natural language processing (NLP), graph representation learning (GRL), and large language models (LLMs)—have markedly improved accuracy, enabling better recognition of code variants and deeper semantic understanding. This paper presents a comprehensive review of 82 studies published between 1975 and 2025, systematically tracing the historical evolution of BCSD and analyzing the progressive incorporation of artificial intelligence (AI) techniques. Particular… More >

  • Open Access

    ARTICLE

    Image Enhancement Combined with LLM Collaboration for Low-Contrast Image Character Recognition

    Qin Qin1, Xuan Jiang1,*, Jinhua Jiang1, Dongfang Zhao1, Zimei Tu1, Zhiwei Shen2

    CMC-Computers, Materials & Continua, Vol.85, No.3, pp. 4849-4867, 2025, DOI:10.32604/cmc.2025.067919 - 23 October 2025

    Abstract The effectiveness of industrial character recognition on cast steel is often compromised by factors such as corrosion, surface defects, and low contrast, which hinder the extraction of reliable visual information. The problem is further compounded by the scarcity of large-scale annotated datasets and complex noise patterns in real-world factory environments. This makes conventional OCR techniques and standard deep learning models unreliable. To address these limitations, this study proposes a unified framework that integrates adaptive image preprocessing with collaborative reasoning among LLMs. A Biorthogonal 4.4 (bior4.4) wavelet transform is adaptively tuned using DE to enhance character… More >

  • Open Access

    REVIEW

    Enhancing Security in Large Language Models: A Comprehensive Review of Prompt Injection Attacks and Defenses

    Eleena Sarah Mathew*

    Journal on Artificial Intelligence, Vol.7, pp. 347-363, 2025, DOI:10.32604/jai.2025.069841 - 06 October 2025

    Abstract This review paper explores advanced methods to prompt Large Language Models (LLMs) into generating objectionable or unintended behaviors through adversarial prompt injection attacks. We examine a series of novel projects like HOUYI, Robustly Aligned LLM (RA-LLM), StruQ, and Virtual Prompt Injection that compel LLMs to produce affirmative responses to harmful queries. Several new benchmarks, such as PromptBench, AdvBench, AttackEval, INJECAGENT, and RobustnessSuite, have been created to evaluate the performance and resilience of LLMs against these adversarial attacks. Results show significant success rates in misleading models like Vicuna-7B, LLaMA-2-7B-Chat, GPT-3.5, and GPT-4. The review highlights limitations… More >

Displaying 1-10 on page 1 of 36. Per Page