Open Access

ARTICLE

Interpretable Vulnerability Detection in LLMs: A BERT-Based Approach with SHAP Explanations

Nouman Ahmad*, Changsheng Zhang

School of Software Engineering, Northeastern University, Shenyang, 110819, China

* Corresponding Author: Nouman Ahmad

(This article belongs to the Special Issue: Utilizing and Securing Large Language Models for Cybersecurity and Beyond)

Computers, Materials & Continua 2025, 85(2), 3321-3334. https://doi.org/10.32604/cmc.2025.067044

Abstract

Source code vulnerabilities present significant security threats, necessitating effective detection techniques. Traditional static analysis tools rely on rigid rule sets and pattern matching, which flood developers with false positives and miss context-sensitive vulnerabilities. Artificial intelligence (AI) approaches, particularly Large Language Models (LLMs) such as BERT, show promise but frequently lack transparency. To address these interpretability concerns, this work proposes a BERT-based LLM approach for vulnerability detection that incorporates Explainable AI (XAI) techniques, namely SHAP and attention heatmaps. Furthermore, to ensure auditable and comprehensible decisions, we present a transparency-obligation framework that spans the entire LLM lifecycle. Our experiments on the comprehensive DiverseVul source-code dataset show that the proposed method attains 92.3% detection accuracy, surpassing CodeT5 (89.4%), GPT-3.5 (85.1%), and GPT-4 (88.7%) under the same evaluation setup. Through integrated SHAP analysis, the model delivers improved detection capability while preserving explainability, a crucial advantage over black-box LLM alternatives in security contexts. The XAI analysis identifies crucial predictive tokens, such as "susceptible" and "function", through the SHAP framework, and attention heatmaps additionally visualize the local token interactions that underpin the model's decision-making. By effectively fusing high detection accuracy with model explainability, this method provides a workable solution for reliable vulnerability identification in software systems. Our findings indicate that transparent AI models can detect security flaws effectively while remaining interpretable to human analysts.
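The sketch below illustrates the general pattern the abstract describes: a BERT-style classifier whose predictions are attributed to individual code tokens with SHAP. It is not the authors' released code; the CodeBERT checkpoint name, the binary label names (LABEL_0 = benign, LABEL_1 = vulnerable), and the example snippet are assumptions, and the model is assumed to have been fine-tuned on DiverseVul beforehand.

```python
# Minimal sketch (not the authors' implementation): token-level SHAP
# explanations for a BERT-style vulnerability classifier.
import shap
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          pipeline)

# Assumption: "microsoft/codebert-base" stands in for a checkpoint already
# fine-tuned for binary vulnerability classification.
model_name = "microsoft/codebert-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A text-classification pipeline that returns scores for every class,
# so SHAP can attribute each class probability to input tokens.
clf = pipeline("text-classification", model=model, tokenizer=tokenizer,
               top_k=None, truncation=True, max_length=512)

# SHAP wraps the pipeline with a text masker and estimates per-token
# contributions to each class probability.
explainer = shap.Explainer(clf)

code_snippet = "void copy(char *user_input) { char buf[8]; strcpy(buf, user_input); }"
shap_values = explainer([code_snippet])

# Highlight which tokens push the prediction toward the assumed
# "vulnerable" label (LABEL_1 here; actual label names may differ).
shap.plots.text(shap_values[0, :, "LABEL_1"])
```

Attention heatmaps of the kind mentioned above can be produced along similar lines by running the underlying model with `output_attentions=True` and plotting the returned per-head attention matrices.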

Keywords

Attention mechanisms; CodeBERT; explainable AI (XAI) for security; large language model (LLM); trustworthy AI; vulnerability detection

Cite This Article

APA Style
Ahmad, N., & Zhang, C. (2025). Interpretable Vulnerability Detection in LLMs: A BERT-Based Approach with SHAP Explanations. Computers, Materials & Continua, 85(2), 3321–3334. https://doi.org/10.32604/cmc.2025.067044
Vancouver Style
Ahmad N, Zhang C. Interpretable Vulnerability Detection in LLMs: A BERT-Based Approach with SHAP Explanations. Comput Mater Contin. 2025;85(2):3321–3334. https://doi.org/10.32604/cmc.2025.067044
IEEE Style
N. Ahmad and C. Zhang, “Interpretable Vulnerability Detection in LLMs: A BERT-Based Approach with SHAP Explanations,” Comput. Mater. Contin., vol. 85, no. 2, pp. 3321–3334, 2025. https://doi.org/10.32604/cmc.2025.067044



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.