Open Access
ARTICLE
A Hybrid Split-Attention and Transformer Architecture for High-Performance Network Intrusion Detection
1 School of Software, Yunnan University, Kunming, 650504, China
2 Yunnan Key Laboratory of Smart City in Cyberspace Security, Yuxi Normal University, Yuxi, 653100, China
3 School of Information Science and Technology, Yunnan Normal University, Kunming, 650500, China
* Corresponding Author: Yongtao Yu. Email:
Computer Modeling in Engineering & Sciences 2025, 145(3), 4317-4348. https://doi.org/10.32604/cmes.2025.074349
Received 09 October 2025; Accepted 24 November 2025; Issue published 23 December 2025
Abstract
Existing deep learning Network Intrusion Detection Systems (NIDS) struggle to simultaneously capture fine-grained, multi-scale features and long-range temporal dependencies. To address this gap, this paper introduces TransNeSt, a hybrid architecture integrating a ResNeSt block (using split-attention for multi-scale feature representation) with a Transformer encoder (using self-attention for global temporal modeling). This integration of multi-scale and temporal attention was validated on four benchmarks: NSL-KDD, UNSW-NB15, CIC-IDS2017, and CICIOT2023. TransNeSt consistently outperformed its individual components and several state-of-the-art models, demonstrating significant quantitative gains. The model achieved high efficacy across all datasets, with F1-Scores of 99.04% (NSL-KDD), 91.92% (UNSW-NB15), 99.18% (CIC-IDS2017), and 97.85% (CICIOT2023), confirming its robustness.
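To make the hybrid design concrete, the following is a minimal PyTorch sketch of the idea described in the abstract: a ResNeSt-style split-attention block for multi-scale feature extraction, followed by a Transformer encoder for global context, and a classification head. This is an illustrative assumption, not the authors' implementation; all names and hyperparameters here (SplitAttention1d, radix=2, d_model=64, the 41-feature NSL-KDD-style input) are hypothetical.

```python
# Hypothetical sketch of the TransNeSt idea (not the published code):
# split-attention conv block (ResNeSt-style) -> Transformer encoder -> classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SplitAttention1d(nn.Module):
    """ResNeSt-style split attention over `radix` parallel conv branches."""

    def __init__(self, channels, radix=2, reduction=4):
        super().__init__()
        self.radix = radix
        inter = max(channels // reduction, 8)
        # Branches with different dilations approximate multi-scale receptive fields.
        self.convs = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in range(1, radix + 1)
        )
        self.fc1 = nn.Conv1d(channels, inter, kernel_size=1)
        self.bn = nn.BatchNorm1d(inter)
        self.fc2 = nn.Conv1d(inter, channels * radix, kernel_size=1)

    def forward(self, x):                                   # x: (B, C, L)
        splits = [conv(x) for conv in self.convs]           # radix x (B, C, L)
        gap = sum(splits).mean(dim=2, keepdim=True)         # global pooling: (B, C, 1)
        att = self.fc2(F.relu(self.bn(self.fc1(gap))))      # (B, C*radix, 1)
        att = att.view(x.size(0), self.radix, -1, 1)        # (B, radix, C, 1)
        att = F.softmax(att, dim=1)                         # soft weights across splits
        out = sum(att[:, i] * splits[i] for i in range(self.radix))
        return F.relu(out + x)                              # residual connection


class TransNeStSketch(nn.Module):
    def __init__(self, n_features, n_classes, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Conv1d(1, d_model, kernel_size=1)   # lift each scalar feature
        self.resnest = SplitAttention1d(d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=128,
            dropout=0.1, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                                   # x: (B, n_features) flow record
        x = self.embed(x.unsqueeze(1))                      # (B, d_model, n_features)
        x = self.resnest(x)                                 # multi-scale split attention
        x = self.encoder(x.transpose(1, 2))                 # self-attention over features
        return self.head(x.mean(dim=1))                     # pooled tokens -> class logits


if __name__ == "__main__":
    model = TransNeStSketch(n_features=41, n_classes=5)     # e.g. NSL-KDD-like dims
    logits = model(torch.randn(8, 41))
    print(logits.shape)                                     # torch.Size([8, 5])
```

The two stages mirror the paper's stated division of labor: the split-attention block weighs parallel multi-scale branches of each record, while the Transformer encoder models global dependencies across the resulting feature tokens.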
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

