
Open Access

ARTICLE

A Study on the Interpretability of Network Attack Prediction Models Based on Light Gradient Boosting Machine (LGBM) and SHapley Additive exPlanations (SHAP)

Shuqin Zhang1, Zihao Wang1,*, Xinyu Su2
1 School of Computer Science, Zhongyuan University of Technology, Zhengzhou, 450000, China
2 School of Cyberspace Security, Information Engineering University, Zhengzhou, 450000, China
* Corresponding Author: Zihao Wang. Email: email

Computers, Materials & Continua https://doi.org/10.32604/cmc.2025.062080

Received 09 December 2024; Accepted 01 April 2025; Published online 21 April 2025

Abstract

The methods of network attacks have become increasingly sophisticated, rendering traditional cybersecurity defense mechanisms insufficient against novel and complex threats. In recent years, artificial intelligence has made significant progress in the field of network security; however, many challenges remain, particularly regarding the interpretability of deep learning and ensemble learning algorithms. To enhance the interpretability of network attack prediction models, this paper proposes a method that combines the Light Gradient Boosting Machine (LGBM) with SHapley Additive exPlanations (SHAP). LGBM is employed to model anomalous fluctuations in various network indicators, enabling rapid and accurate identification and prediction of potential network attack types and thereby facilitating timely defense measures. The model achieved an accuracy of 0.977, a precision of 0.985, a recall of 0.975, and an F1 score of 0.979, outperforming other models in the domain of network attack prediction. SHAP is utilized to analyze the model's black-box decision-making process, providing interpretability by quantifying the contribution of each feature to the prediction results and elucidating the relationships between features. The experimental results demonstrate that the LGBM-based network attack prediction model exhibits superior accuracy and strong predictive capability, while the SHAP-based interpretability analysis significantly improves the model's transparency and interpretability.

Keywords

Artificial intelligence; network attack prediction; light gradient boosting machine (LGBM); SHapley Additive exPlanations (SHAP); interpretability