
Search Results (22)
  • Open Access

    ARTICLE

    H-LoRA: Rethinking Rank Selection for Controllable Knowledge Retention in Edge AI

    Darren Chai Xin Lun, Lim Tong Ming*

    CMC-Computers, Materials & Continua, Vol.88, No.1, 2026, DOI:10.32604/cmc.2026.080068 - 08 May 2026

    Abstract The deployment of specialized language models in resource-constrained edge environments (≤1B parameters, ≤2 GB memory, ≤100 ms latency) faces a critical challenge: Supervised Fine-Tuning (SFT) achieves domain expertise but suffers from irreversible catastrophic forgetting, while traditional Low-Rank Adaptation (LoRA) with conservative ranks (r ≤ 64) often underperforms due to insufficient adaptation capacity. This work introduces H-LoRA (High-Rank LoRA) for edge-deployable models and establishes a fundamental distinction between destructive forgetting and controllable knowledge retention. Through comprehensive experiments on compact models (0.12B Minimind and Qwen-0.5B) across three domains (Human Resources, Medical, Mathematics) using 29,647 samples, we…
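The rank-selection trade-off this abstract describes can be made concrete with a minimal sketch of the low-rank update that LoRA adds to a frozen weight matrix (illustrative notation W' = W + (alpha/r)·B·A, not the paper's implementation; all names here are assumptions):

```python
import numpy as np

def lora_delta(d_out, d_in, r, alpha, rng):
    """Low-rank weight update: B (d_out x r) @ A (r x d_in), scaled by alpha/r.

    The rank r caps the update's expressiveness: small r (e.g. r <= 64) limits
    adaptation capacity, while a high rank trades more trainable parameters
    for more capacity -- the dial H-LoRA-style methods tune.
    """
    A = rng.standard_normal((r, d_in)) * 0.01  # down-projection, small random init
    B = np.zeros((d_out, r))                   # up-projection, zero init
    return (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
delta = lora_delta(512, 512, r=64, alpha=16, rng=rng)
# With B zero-initialized, the adapted weight starts identical to the base
# weight, so fine-tuning begins from the pretrained model's behavior.
```

The trainable-parameter count, r·(d_in + d_out) per adapted matrix, is why even a "high" rank stays cheap next to full fine-tuning of d_in·d_out weights.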

  • Open Access

    ARTICLE

    Effective Data Balancing and Fine-Tuning Techniques for Medical sLLMs in Resource-Constrained Domains

    Seohyun Yoo, Joonseo Hyeon, Jaehyuk Cho*

    CMC-Computers, Materials & Continua, Vol.87, No.3, 2026, DOI:10.32604/cmc.2026.077579 - 09 April 2026

    Abstract Despite remarkable advances in medical large language models (LLMs), their deployment in real clinical settings remains impractical due to prohibitive computational requirements and privacy regulations that restrict cloud-based solutions. Small LLMs (sLLMs) offer a promising alternative for on-premise deployment, yet they require domain-specific fine-tuning that still exceeds the hardware capacity of most healthcare institutions. Furthermore, the impact of multilingual data composition on medical sLLM performance remains poorly understood. We present a resource-efficient fine-tuning pipeline that integrates Quantized Low-Rank Adaptation (QLoRA), Fully Sharded Data Parallelism (FSDP), and Sequence Packing, validated across two model scales: MedGemma 4B…

  • Open Access

    ARTICLE

    Evaluating Spanish Medical Entity Recognition: Large Language Models with Prompting versus Fine-Tuning

    Ronghao Pan1, Tomás Bernal-Beltrán1, Alejandro Rodríguez-González2,3, Ernestina Menasalvas-Ruíz2,3, Rafael Valencia-García1,*

    CMC-Computers, Materials & Continua, Vol.87, No.3, 2026, DOI:10.32604/cmc.2026.077501 - 09 April 2026

    Abstract The digitization of healthcare has resulted in the production of large amounts of structured and unstructured clinical data, creating the need for accurate and efficient named entity recognition (NER) to support medical procedures. This study evaluates and compares three approaches to NER in the medical domain in Spanish: using Large Language Models (LLMs) with In-Context Learning techniques (Zero-Shot, Few-Shot, and Chain-of-Thought); fine-tuning of LLMs; and fine-tuning of encoder-only models. Experiments were conducted on the Meddocan, Meddoprof, Meddoplace and Symptemist benchmark datasets. Fine-tuned encoder-only models achieve the best performance across all datasets, reaching macro-F1 scores of…
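The In-Context Learning side of this comparison boils down to prompt construction. A minimal sketch of what a Zero-Shot NER prompt for Spanish clinical text might look like (the wording, entity labels, and output format are assumptions for illustration, not the authors' prompts):

```python
def build_zero_shot_prompt(text, entity_types=("DISEASE", "SYMPTOM", "MEDICATION")):
    """Build an illustrative Zero-Shot NER prompt: task instruction plus the
    input text, with no labeled examples (a Few-Shot variant would append
    annotated demonstrations before the target text)."""
    types = ", ".join(entity_types)
    return (
        f"Extract all entities of types [{types}] from the Spanish clinical text below.\n"
        "Return one 'type: span' pair per line.\n\n"
        f"Text: {text}\n"
        "Entities:"
    )

prompt = build_zero_shot_prompt("El paciente presenta fiebre y tos persistente.")
```

Fine-tuned encoder-only models skip this prompting machinery entirely: they classify each token directly, which is one intuition for why the abstract reports them winning on every benchmark.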

  • Open Access

    ARTICLE

    Tests and Refinement of a Mini-Power Plant with a Piston Engine Powered by Propane-Butane Blend and Syngas

    Leonid Plotnikov1,*, Leonid Osipov1, Danil Davydov1, Dmitry Krasilnikov1, Alexander Ryzhkov2

    Energy Engineering, Vol.123, No.4, 2026, DOI:10.32604/ee.2026.076278 - 27 March 2026

    Abstract The use of alternative fuels to generate mechanical and thermal energy in engines is a promising and sought-after technological area with its own unique advantages and characteristics. Consequently, enhancing the technical, economic, and environmental efficiency of gas engines fueled by propane-butane mixture and syngas through optimized operating cycle parameters (including valve timing, ignition timing angle, fuel mixture composition, and compression ratio) is a pressing imperative for scientific and energy sectors. The aim of the study was to investigate and compare the performance of an engine with different compression ratios running on a propane-butane mixture and…

  • Open Access

    ARTICLE

    TinySecGPT: Small-Parameter LLMs Can Outperform Large-Parameter LLMs in Cybersecurity

    Anfeng Yang, Fei Kang, Wenjuan Bu*

    CMC-Computers, Materials & Continua, Vol.87, No.2, 2026, DOI:10.32604/cmc.2025.073979 - 12 March 2026

    Abstract Large language models (LLMs) have demonstrated significant capabilities in semantic understanding and code generation. However, cybersecurity tasks often require adapting open-source models to this domain. Despite their effectiveness, large-parameter LLMs incur substantial memory usage and runtime costs during task inference and downstream fine-tuning for cybersecurity applications. In this study, we fine-tuned six LLMs with fewer than 4 billion parameters using LoRA (Low-Rank Adaptation) on cybersecurity-specific instruction datasets, employing evaluation metrics similar to Hackmentor. Results indicate that, post-fine-tuning, smaller models achieved victory or parity rates of up to 85% against larger models like Qwen-1.5-14B…

  • Open Access

    ARTICLE

    Unlocking Edge Fine-Tuning: A Sample-Efficient Language-Empowered Split Fine-Tuning Framework

    Zuyi Huang1, Yue Wang1, Jia Liu2, Haodong Yi1, Lejun Ai1, Min Chen1,3,*, Salman A. AlQahtani4

    CMC-Computers, Materials & Continua, Vol.87, No.1, 2026, DOI:10.32604/cmc.2025.074034 - 10 February 2026

    Abstract The personalized fine-tuning of large language models (LLMs) on edge devices is severely constrained by limited computation resources. Although split federated learning alleviates on-device burdens, its effectiveness diminishes in few-shot reasoning scenarios due to the low data efficiency of conventional supervised fine-tuning, which leads to excessive communication overhead. To address this, we propose Language-Empowered Split Fine-Tuning (LESFT), a framework that integrates split architectures with a contrastive-inspired fine-tuning paradigm. LESFT simultaneously learns from multiple logically equivalent but linguistically diverse reasoning chains, providing richer supervisory signals and improving data efficiency. This process-oriented training allows more effective reasoning…

  • Open Access

    ARTICLE

    Detection of Maliciously Disseminated Hate Speech in Spanish Using Fine-Tuning and In-Context Learning Techniques with Large Language Models

    Tomás Bernal-Beltrán1, Ronghao Pan1, José Antonio García-Díaz1, María del Pilar Salas-Zárate2, Mario Andrés Paredes-Valverde2, Rafael Valencia-García1,*

    CMC-Computers, Materials & Continua, Vol.87, No.1, 2026, DOI:10.32604/cmc.2025.073629 - 10 February 2026

    Abstract The malicious dissemination of hate speech via compromised accounts, automated bot networks and malware-driven social media campaigns has become a growing cybersecurity concern. Automatically detecting such content in Spanish is challenging due to linguistic complexity and the scarcity of annotated resources. In this paper, we compare two predominant AI-based approaches for the forensic detection of malicious hate speech: (1) fine-tuning encoder-only models that have been trained in Spanish and (2) In-Context Learning techniques (Zero- and Few-Shot Learning) with large-scale language models. Our approach goes beyond binary classification, proposing a comprehensive, multidimensional evaluation that labels each…

  • Open Access

    ARTICLE

    DyLoRA-TAD: Dynamic Low-Rank Adapter for End-to-End Temporal Action Detection

    Jixin Wu1,2, Mingtao Zhou2,3, Di Wu2,3, Wenqi Ren4, Jiatian Mei2,3, Shu Zhang1,*

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.072964 - 12 January 2026

    Abstract End-to-end Temporal Action Detection (TAD) has achieved remarkable progress in recent years, driven by innovations in model architectures and the emergence of Video Foundation Models (VFMs). However, existing TAD methods that perform full fine-tuning of pretrained video models often incur substantial computational costs, which become particularly pronounced when processing long video sequences. Moreover, the need for precise temporal boundary annotations makes data labeling extremely expensive. In low-resource settings where annotated samples are scarce, direct fine-tuning tends to cause overfitting. To address these challenges, we introduce Dynamic Low-Rank Adapter (DyLoRA), a lightweight fine-tuning framework tailored specifically…

  • Open Access

    REVIEW

    Transforming Healthcare with State-of-the-Art Medical-LLMs: A Comprehensive Evaluation of Current Advances Using Benchmarking Framework

    Himadri Nath Saha1, Dipanwita Chakraborty Bhattacharya2,*, Sancharita Dutta3, Arnab Bera3, Srutorshi Basuray4, Satyasaran Changdar5, Saptarshi Banerjee6, Jon Turdiev7

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-56, 2026, DOI:10.32604/cmc.2025.070507 - 09 December 2025

    Abstract The emergence of Medical Large Language Models (Med-LLMs) has significantly transformed healthcare. Med-LLMs serve as transformative tools that enhance clinical practice through applications in decision support, documentation, and diagnostics. This evaluation examines the performance of leading Med-LLMs, including GPT-4Med, Med-PaLM, MEDITRON, PubMedGPT, and MedAlpaca, across diverse medical datasets. It provides graphical comparisons of their effectiveness in distinct healthcare domains. The study introduces a domain-specific categorization system that aligns these models with optimal applications in clinical decision-making, documentation, drug discovery, research, patient interaction, and public health. The paper addresses deployment challenges of Med-LLMs…

  • Open Access

    ARTICLE

    A Real-Time Deep Learning Approach for Electrocardiogram-Based Cardiovascular Disease Prediction with Adaptive Drift Detection and Generative Feature Replay

    Soumia Zertal1,2,*, Asma Saighi1,2, Sofia Kouah1,2, Souham Meshoul3,*, Zakaria Laboudi2,4

    CMES-Computer Modeling in Engineering & Sciences, Vol.144, No.3, pp. 3737-3782, 2025, DOI:10.32604/cmes.2025.068558 - 30 September 2025

    Abstract Cardiovascular diseases (CVDs) remain a leading cause of mortality worldwide, emphasizing the importance of early and accurate prediction. Electrocardiogram (ECG) signals, central to cardiac monitoring, have increasingly been integrated with Deep Learning (DL) for real-time prediction of CVDs. However, DL models are prone to performance degradation due to concept drift and catastrophic forgetting. To address this issue, we propose a real-time CVD prediction approach, referred to as ADWIN-GFR, that combines Convolutional Neural Network (CNN) layers for spatial feature extraction with Gated Recurrent Units (GRU) for temporal modeling, alongside adaptive drift detection and… (Graphic Abstract available in the article)
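The adaptive drift detection this entry mentions can be illustrated with a toy sliding-window check that compares recent versus older model error (a deliberately simplified stand-in for the ADWIN algorithm the paper builds on; the function, window size, and threshold are assumptions for illustration):

```python
from collections import deque

def detect_drift(errors, window=50, threshold=0.15):
    """Toy concept-drift check: within a sliding window of per-sample errors,
    compare the mean of the older half against the newer half and flag drift
    when the gap exceeds a threshold. ADWIN proper uses statistically
    justified adaptive windows instead of this fixed split."""
    buf = deque(maxlen=window)
    drift_points = []
    for i, e in enumerate(errors):
        buf.append(e)
        if len(buf) == window:
            half = window // 2
            items = list(buf)
            old_mean = sum(items[:half]) / half
            new_mean = sum(items[half:]) / half
            if abs(new_mean - old_mean) > threshold:
                drift_points.append(i)   # distribution shift detected here
                buf.clear()              # restart monitoring after the change
    return drift_points

# Stable stream (error ~0.1) followed by a drifted stream (error ~0.6):
stream = [0.1] * 100 + [0.6] * 100
points = detect_drift(stream)
```

On a drift signal like this, a continual-learning pipeline would typically trigger model adaptation, with a replay mechanism (generative feature replay in the paper's case) guarding against catastrophic forgetting of the pre-drift distribution.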

Displaying results 1-10 of 22 (page 1).