
Open Access

ARTICLE

TinySecGPT: Small-Parameter LLMs Can Outperform Large-Parameter LLMs in Cybersecurity

Anfeng Yang, Fei Kang, Wenjuan Bu*
Information Engineering University, Zhengzhou, 450001, China
* Corresponding Author: Wenjuan Bu

Computers, Materials & Continua https://doi.org/10.32604/cmc.2025.073979

Received 29 September 2025; Accepted 02 December 2025; Published online 19 January 2026

Abstract

Large language models (LLMs) have demonstrated significant capabilities in semantic understanding and code generation. However, cybersecurity tasks often require adapting open-source models to the domain. Despite their effectiveness, large-parameter LLMs incur substantial memory usage and runtime costs during task inference and downstream fine-tuning for cybersecurity applications. In this study, we fine-tuned six LLMs with fewer than 4 billion parameters using LoRA (Low-Rank Adaptation) on cybersecurity instruction datasets, employing evaluation metrics similar to those of Hackmentor. Results indicate that, after fine-tuning, the smaller models achieved win-or-tie rates of up to 85% against larger models such as Qwen-1.5-14B on cybersecurity test datasets, with the best model reaching a 90% win-or-tie rate against SecGPT. Additionally, these smaller models required significantly fewer computational resources, reducing fine-tuning time by up to 53% and improving efficiency in downstream tasks. Further validation showed that, with minimal fine-tuning, our models achieved performance gains of 21.66% to 31.32% in tactic extraction and 30.69% to 40.42% in technique extraction, significantly outperforming ChatGPT. These findings highlight the potential of small-parameter LLMs for optimizing performance and resource utilization in cybersecurity applications such as technique and tactic extraction, and should facilitate future research on applying small-parameter LLMs in the cybersecurity domain.
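The abstract does not specify the training setup. As an illustrative sketch only, the snippet below shows how LoRA fine-tuning of a small causal LLM is commonly done with the Hugging Face transformers and peft libraries; the base model name, LoRA rank, dataset path, and hyperparameters are assumptions for illustration, not values taken from the paper.

```python
# Illustrative LoRA fine-tuning sketch; the paper's actual models,
# data, and hyperparameters are not given in this abstract.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "Qwen/Qwen1.5-1.8B"  # assumed sub-4B base model, not from the paper
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # padding needed for batching
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices instead of all weights,
# which is what keeps fine-tuning memory and time low for small LLMs.
lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% trainable

# Placeholder instruction data; the paper's dataset is not given here.
data = load_dataset("json", data_files="cyber_instructions.json")["train"]

def tokenize(example):
    # Concatenate instruction and response into one training sequence.
    text = example["instruction"] + "\n" + example["response"]
    return tokenizer(text, truncation=True, max_length=1024)

data = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="tinysecgpt-lora",  # hypothetical output directory
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-4,
    ),
    train_dataset=data,
    # mlm=False yields causal-LM labels (inputs shifted by one token).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("tinysecgpt-lora")  # saves only the adapter weights
```

Because only the low-rank adapter matrices receive gradients, the optimizer state covers a small fraction of the model's parameters, which is consistent with the reduced fine-tuning times reported in the abstract.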

Keywords

TinyLLM; fine-tuning time; Elo rating; SecGPT; time cost; cybersecurity downstream tasks