Open Access
ARTICLE
X-MalNet: A CNN-Based Malware Detection Model with Visual and Structural Interpretability
1 Department of Mathematics, Amrita School of Physical Sciences, Amrita Vishwa Vidyapeetham, Coimbatore, 641112, India
2 Department of Electrical Engineering, College of Engineering, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
3 School of Computing, Gachon University, Seongnam-si, 13120, Republic of Korea
4 Faculty of Engineering, University of Moncton, Moncton, NB E1A 3E9, Canada
5 School of Electrical Engineering, University of Johannesburg, Johannesburg, 2006, South Africa
6 Research Unit, School International Institute of Technology and Management (IITG), Av. Grandes Ecoles, Libreville, BP 1989, Gabon
7 College of Computer Science and Engineering (Invited Professor), University of Ha’il, Ha’il, 55476, Saudi Arabia
* Corresponding Author: Ateeq Ur Rehman. Email:
(This article belongs to the Special Issue: Advances in Machine Learning and Artificial Intelligence for Intrusion Detection Systems)
Computers, Materials & Continua 2026, 86(2), 1-18. https://doi.org/10.32604/cmc.2025.069951
Received 04 July 2025; Accepted 29 October 2025; Issue published 09 December 2025
Abstract
The escalating complexity of modern malware continues to undermine the effectiveness of traditional signature-based detection techniques, which are often unable to adapt to rapidly evolving attack patterns. To address these challenges, this study proposes X-MalNet, a lightweight Convolutional Neural Network (CNN) framework designed for static malware classification through image-based representations of binary executables. By converting malware binaries into grayscale images, the model extracts distinctive structural and texture-level features that signify malicious intent, thereby eliminating the dependence on manual feature engineering or dynamic behavioral analysis. Built upon a modified AlexNet architecture, X-MalNet employs transfer learning to enhance generalization and reduce computational cost, enabling efficient training and deployment on limited hardware resources. To promote interpretability and transparency, the framework integrates Gradient-weighted Class Activation Mapping (Grad-CAM) and Deep SHapley Additive exPlanations (DeepSHAP), offering spatial and pixel-level visualizations that reveal how specific image regions influence classification outcomes. These explainability components support security analysts in validating the model’s reasoning, strengthening confidence in AI-assisted malware detection. Comprehensive experiments on the Malimg and Malevis benchmark datasets confirm the superior performance of X-MalNet, achieving classification accuracies of 99.15% and 98.72%, respectively. Further robustness evaluations using Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) adversarial attacks demonstrate the model’s resilience against perturbed inputs. In conclusion, X-MalNet emerges as a scalable, interpretable, and robust malware detection framework that effectively balances accuracy, efficiency, and explainability. 
Its lightweight design and adversarial stability position it as a promising solution for real-world cybersecurity deployments, advancing the development of trustworthy, automated, and transparent malware classification systems.
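The byte-to-grayscale representation the abstract describes can be illustrated with a minimal sketch: each byte of the executable (0-255) is treated as one 8-bit pixel and the stream is reshaped into fixed-width rows. The row width of 256 and the zero-padding of the final row are illustrative assumptions; the paper's exact preprocessing parameters are not stated here.

```python
import numpy as np

def binary_to_grayscale(data: bytes, width: int = 256) -> np.ndarray:
    """Interpret a raw byte stream as an 8-bit grayscale image.

    Each byte becomes one pixel intensity (0-255); the stream is
    reshaped into rows of `width` pixels, zero-padding the last row.
    `width=256` is an illustrative choice, not the paper's setting.
    """
    arr = np.frombuffer(data, dtype=np.uint8)
    height = int(np.ceil(arr.size / width))          # rows needed
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[: arr.size] = arr                         # copy bytes, pad tail
    return padded.reshape(height, width)

# Example: 1024 bytes -> a 4 x 256 grayscale image
img = binary_to_grayscale(bytes(range(256)) * 4, width=256)
print(img.shape)
```

An image produced this way can then be resized to the CNN's input resolution and fed to the classifier like any other grayscale picture.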
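The FGSM robustness test mentioned above follows the standard single-step formulation x_adv = x + ε·sign(∇ₓL), with the result clipped back to the valid pixel range. The sketch below shows only this perturbation step, with a placeholder gradient; the ε value and clipping range are illustrative assumptions, not the paper's evaluation settings.

```python
import numpy as np

def fgsm_perturb(x: np.ndarray, grad: np.ndarray, eps: float = 0.01) -> np.ndarray:
    """One FGSM step: shift each pixel by eps in the direction that
    increases the loss (sign of the input gradient), then clip to [0, 1].
    In practice `grad` comes from backpropagating the classifier's loss
    to the input image; here it is supplied as a placeholder argument.
    """
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Example: a pixel at 0.5 with positive gradient moves up by eps
x = np.array([0.5, 0.0, 1.0])
grad = np.array([1.0, -1.0, 1.0])
print(fgsm_perturb(x, grad, eps=0.1))
```

PGD extends this by iterating the same step several times with a projection back into an ε-ball around the original image, which is why it is generally the stronger of the two attacks.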
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

