Open Access
ARTICLE
Interpretable Vulnerability Detection in LLMs: A BERT-Based Approach with SHAP Explanations
School of Software Engineering, Northeastern University, Shenyang, 110819, China
* Corresponding Author: Nouman Ahmad. Email:
(This article belongs to the Special Issue: Utilizing and Securing Large Language Models for Cybersecurity and Beyond)
Computers, Materials & Continua 2025, 85(2), 3321-3334. https://doi.org/10.32604/cmc.2025.067044
Received 23 April 2025; Accepted 17 July 2025; Issue published 23 September 2025
Abstract
Source code vulnerabilities pose significant security threats, necessitating effective detection techniques. Traditional static analysis tools rely on rigid rule sets and pattern matching, which flood developers with false positives and miss context-sensitive vulnerabilities. Large Language Models (LLMs) such as BERT show promise for this task but frequently lack transparency. To address model interpretability, this work proposes a BERT-based LLM approach to vulnerability detection that incorporates Explainable AI (XAI) techniques, namely SHAP and attention heatmaps. We further present a transparency obligation framework spanning the entire LLM lifecycle to ensure auditable and comprehensible decisions. Experiments on the comprehensive DiverseVul source code dataset show that the proposed method attains 92.3% detection accuracy, surpassing CodeT5 (89.4%), GPT-3.5 (85.1%), and GPT-4 (88.7%) under the same evaluation scenario. The integrated SHAP analysis preserves explainability alongside this improved detection capability, a crucial advantage over black-box LLM alternatives in security contexts. The SHAP analysis identifies key predictive tokens such as "susceptible" and "function", while attention heatmaps visually highlight the local token interactions that support the model's decisions. By combining high detection accuracy with model explainability, this method offers a practical solution for reliable vulnerability identification in software systems. Our findings suggest that transparent AI models can successfully detect security flaws while remaining interpretable to human analysts.
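
The abstract pairs a BERT-based classifier with SHAP token attributions. The following is a minimal illustrative sketch of that pairing, not the authors' released code: it uses the Hugging Face transformers pipeline and the shap library, and the checkpoint name, binary label indexing, and example snippet are assumptions for illustration (in the paper, the model would be fine-tuned on DiverseVul).

import shap
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Placeholder checkpoint; the paper fine-tunes BERT on DiverseVul for binary
# vulnerable/benign classification of source code snippets.
MODEL_NAME = "bert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Wrap the classifier so SHAP can query per-class probabilities for text inputs.
clf = pipeline("text-classification", model=model, tokenizer=tokenizer,
               return_all_scores=True)

# SHAP recognizes transformers pipelines and applies a text masker, attributing
# the predicted class score to individual input tokens.
explainer = shap.Explainer(clf)

snippet = 'strcpy(buffer, user_input);  /* unbounded copy */'
shap_values = explainer([snippet])

# Per-token contributions toward class index 1 (assumed here to be "vulnerable").
for token, contribution in zip(shap_values.data[0], shap_values.values[0, :, 1]):
    print(f"{token!r}: {contribution:+.4f}")

Tokens with large positive contributions in this kind of output are the ones the abstract refers to as "crucial predictive tokens"; attention heatmaps would complement this view with pairwise token interactions.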
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

