Open Access
ARTICLE
Syntax-Aware Hierarchical Attention Networks for Code Vulnerability Detection
School of Computer and Communication, Lanzhou University of Technology, Lanzhou, 730050, China
* Corresponding Author: Baofeng Duan. Email:
Computers, Materials & Continua 2026, 86(1), 1-22. https://doi.org/10.32604/cmc.2025.069423
Received 23 June 2025; Accepted 24 September 2025; Issue published 10 November 2025
Abstract
In modern software development, characterized by increasing complexity and compressed development cycles, traditional static vulnerability detection methods face prominent challenges, including high false positive rates and missed detections of complex logic, owing to their over-reliance on rule templates. This paper proposes a Syntax-Aware Hierarchical Attention Network (SAHAN) model, which achieves high-precision vulnerability detection through grammar-rule-driven multi-granularity code slicing and a hierarchical semantic fusion mechanism. SAHAN first generates Syntax Independent Units (SIUs) by slicing the code according to its Abstract Syntax Tree (AST) and predefined grammar rules, retaining vulnerability-sensitive contexts. A hierarchical attention mechanism then operates on these units: a local syntax-aware layer encodes fine-grained patterns within each SIU, while a global semantic correlation layer captures vulnerability chains across SIUs, achieving synergistic modeling of syntax and semantics. Experiments show that on benchmark datasets such as QEMU, SAHAN improves detection performance by 4.8% to 13.1% on average over baseline models such as Devign and VulDeePecker.
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

