Open Access

ARTICLE

TinySecGPT: Small-Parameter LLMs Can Outperform Large-Parameter LLMs in Cybersecurity

Anfeng Yang, Fei Kang, Wenjuan Bu*

Information Engineering University, Zhengzhou, 450001, China

* Corresponding Author: Wenjuan Bu.

Computers, Materials & Continua 2026, 87(2), 39. https://doi.org/10.32604/cmc.2025.073979

Abstract

Large language models (LLMs) have demonstrated significant capabilities in semantic understanding and code generation. However, cybersecurity tasks often require adapting open-source models to this specialized domain. Despite their effectiveness, large-parameter LLMs incur substantial memory usage and runtime costs during task inference and downstream fine-tuning for cybersecurity applications. In this study, we fine-tuned six LLMs with fewer than 4 billion parameters using LoRA (Low-Rank Adaptation) on cybersecurity-specific instruction datasets, employing evaluation metrics similar to those of Hackmentor. The results indicate that, after fine-tuning, the smaller models achieved win-or-tie rates of up to 85% against larger models such as Qwen-1.5-14B on cybersecurity test datasets, with the best model reaching a 90% win-or-tie rate against SecGPT. Additionally, these smaller models required significantly fewer computational resources, reducing fine-tuning time by up to 53% and improving efficiency in downstream tasks. Further validation showed that, with minimal fine-tuning, our models achieved performance gains of 21.66% to 31.32% in tactic extraction and 30.69% to 40.42% in technique extraction, significantly outperforming ChatGPT. These findings highlight the potential of small-parameter LLMs for optimizing performance and resource utilization in cybersecurity applications such as technique and tactic extraction, and they should facilitate future research on applying small-parameter LLMs in the cybersecurity domain.
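The abstract's "Elo rating" keyword refers to the standard pairwise rating scheme for comparing model outputs: two models answer the same prompt, a judge declares a win, tie, or loss, and ratings are updated accordingly. The sketch below is a generic illustration of that update rule, not code from the paper; the K-factor of 32 and the starting rating of 1200 are illustrative assumptions.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))


def update_elo(rating_a: float, rating_b: float,
               outcome: float, k: float = 32.0) -> tuple[float, float]:
    """Update both ratings after one pairwise comparison.

    outcome: 1.0 if A wins, 0.5 for a tie, 0.0 if A loses.
    """
    ea = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (outcome - ea)
    new_b = rating_b + k * ((1.0 - outcome) - (1.0 - ea))
    return new_a, new_b


# Illustration: a small fine-tuned model repeatedly wins or ties
# against a larger baseline, starting from equal ratings of 1200.
small, large = 1200.0, 1200.0
for outcome in [1.0, 1.0, 0.5, 1.0, 0.5]:  # wins and ties for the small model
    small, large = update_elo(small, large, outcome)
# After this sequence, `small` exceeds `large`; the total rating
# mass is conserved because the updates are symmetric.
```

Because each comparison moves the two ratings by equal and opposite amounts, a high win-or-tie rate (such as the 85% to 90% figures reported above) translates directly into a rating gap over the larger baseline.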

Keywords

TinyLLM; fine-tuning time; Elo rating; SecGPT; time cost; cybersecurity downstream tasks

Cite This Article

APA Style
Yang, A., Kang, F., Bu, W. (2026). TinySecGPT: Small-Parameter LLMs Can Outperform Large-Parameter LLMs in Cybersecurity. Computers, Materials & Continua, 87(2), 39. https://doi.org/10.32604/cmc.2025.073979
Vancouver Style
Yang A, Kang F, Bu W. TinySecGPT: Small-Parameter LLMs Can Outperform Large-Parameter LLMs in Cybersecurity. Comput Mater Contin. 2026;87(2):39. https://doi.org/10.32604/cmc.2025.073979
IEEE Style
A. Yang, F. Kang, and W. Bu, “TinySecGPT: Small-Parameter LLMs Can Outperform Large-Parameter LLMs in Cybersecurity,” Comput. Mater. Contin., vol. 87, no. 2, Art. no. 39, 2026. https://doi.org/10.32604/cmc.2025.073979



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.