TY - EJOUR
AU - Zhu, Gan
AU - Yu, Yongtao
AU - Deng, Xiaofan
AU - Dai, Yuanchen
AU - Li, Zhenyuan
TI - A Hybrid Split-Attention and Transformer Architecture for High-Performance Network Intrusion Detection
T2 - Computer Modeling in Engineering & Sciences
PY - 2025
VL - 145
IS - 3
SN - 1526-1506
AB - Existing deep learning Network Intrusion Detection Systems (NIDS) struggle to simultaneously capture fine-grained, multi-scale features and long-range temporal dependencies. To address this gap, this paper introduces TransNeSt, a hybrid architecture integrating a ResNeSt block (using split-attention for multi-scale feature representation) with a Transformer encoder (using self-attention for global temporal modeling). This integration of multi-scale and temporal attention was validated on four benchmarks: NSL-KDD, UNSW-NB15, CIC-IDS2017, and CICIoT2023. TransNeSt consistently outperformed its individual components and several state-of-the-art models, demonstrating significant quantitative gains. The model achieved high efficacy across all datasets, with F1-Scores of 99.04% (NSL-KDD), 91.92% (UNSW-NB15), 99.18% (CIC-IDS2017), and 97.85% (CICIoT2023), confirming its robustness.
KW - Intrusion detection
KW - transformer
KW - ResNeSt
KW - split attention
KW - deep learning
DO - 10.32604/cmes.2025.074349
ER -