Open Access

REVIEW

Next-Generation Lightweight Explainable AI for Cybersecurity: A Review on Transparency and Real-Time Threat Mitigation

Khulud Salem Alshudukhi1,*, Sijjad Ali2, Mamoona Humayun3,*, Omar Alruwaili4

1 Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka, 72388, Al-Jouf, Saudi Arabia
2 College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, 518060, China
3 School of Computing, Engineering and the Built Environment, University of Roehampton, London, SW155PJ, UK
4 Department of Computer Engineering and Networks, College of Computer and Information Sciences, Jouf University, Sakaka, 72388, Al-Jouf, Saudi Arabia

* Corresponding Authors: Khulud Salem Alshudukhi. Email: email; Mamoona Humayun. Email: email

Computer Modeling in Engineering & Sciences 2025, 145(3), 3029-3085. https://doi.org/10.32604/cmes.2025.073705

Abstract

Problem: The integration of Artificial Intelligence (AI) into cybersecurity, while enhancing threat detection, is hampered by the “black box” nature of complex models, eroding trust, accountability, and regulatory compliance. Explainable AI (XAI) aims to resolve this opacity but introduces a critical new vulnerability: the adversarial exploitation of model explanations themselves.

Gap: Current research lacks a comprehensive synthesis of this dual role of XAI in cybersecurity—as both a tool for transparency and a potential attack vector. There is a pressing need to systematically analyze the trade-offs between interpretability and security, evaluate defense mechanisms, and outline a path for developing robust, next-generation XAI frameworks.

Solution: This review provides a systematic examination of XAI techniques (e.g., SHAP, LIME, Grad-CAM) and their applications in intrusion detection, malware analysis, and fraud prevention. It critically evaluates the security risks posed by XAI, including model inversion and explanation-guided evasion attacks, and assesses corresponding defense strategies such as adversarially robust training, differential privacy, and secure-XAI deployment patterns.

Contribution: The primary contributions of this work are: (1) a comparative analysis of XAI methods tailored for cybersecurity contexts; (2) an identification of the critical trade-off between model interpretability and security robustness; (3) a synthesis of defense mechanisms to mitigate XAI-specific vulnerabilities; and (4) a forward-looking perspective proposing future research directions, including quantum-safe XAI, hybrid neuro-symbolic models, and the integration of XAI into Zero Trust Architectures. This review serves as a foundational resource for developing transparent, trustworthy, and resilient AI-driven cybersecurity systems.
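To make the attribution idea behind SHAP concrete, the sketch below computes exact Shapley values by brute force for a toy linear scorer. The model, weights, baseline, and instance are illustrative assumptions for this sketch, not drawn from the review; real SHAP implementations approximate this computation for large feature sets.

```python
from itertools import combinations
from math import factorial

# Toy "model": a linear scorer over 3 features (weights are illustrative).
weights = [0.5, -1.2, 2.0]
baseline = [1.0, 0.0, 0.5]   # reference values used when a feature is "absent"
x = [2.0, 1.0, 1.5]          # instance being explained

def predict(coalition):
    """Score x with features outside `coalition` replaced by the baseline."""
    return sum(w * (x[i] if i in coalition else baseline[i])
               for i, w in enumerate(weights))

def shapley(i, n=3):
    """Exact Shapley value of feature i: weighted average of its marginal
    contribution over every coalition of the other features."""
    others = [j for j in range(n) if j != i]
    total = 0.0
    for size in range(n):
        for S in combinations(others, size):
            coef = factorial(size) * factorial(n - size - 1) / factorial(n)
            total += coef * (predict(set(S) | {i}) - predict(set(S)))
    return total

phi = [shapley(i) for i in range(3)]
# Efficiency property: attributions sum to f(x) - f(baseline).
```

For a linear model each attribution reduces to weights[i] * (x[i] - baseline[i]), which is a useful sanity check; it is precisely these per-feature attributions that explanation-guided evasion attacks exploit, since they reveal which features most influence a detector's decision.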

Keywords

Explainable AI (XAI); cybersecurity; adversarial robustness; privacy-preserving techniques; regulatory compliance; zero trust architecture

Cite This Article

APA Style
Alshudukhi, K. S., Ali, S., Humayun, M., & Alruwaili, O. (2025). Next-Generation Lightweight Explainable AI for Cybersecurity: A Review on Transparency and Real-Time Threat Mitigation. Computer Modeling in Engineering & Sciences, 145(3), 3029–3085. https://doi.org/10.32604/cmes.2025.073705
Vancouver Style
Alshudukhi KS, Ali S, Humayun M, Alruwaili O. Next-Generation Lightweight Explainable AI for Cybersecurity: A Review on Transparency and Real-Time Threat Mitigation. Comput Model Eng Sci. 2025;145(3):3029–3085. https://doi.org/10.32604/cmes.2025.073705
IEEE Style
K. S. Alshudukhi, S. Ali, M. Humayun, and O. Alruwaili, “Next-Generation Lightweight Explainable AI for Cybersecurity: A Review on Transparency and Real-Time Threat Mitigation,” Comput. Model. Eng. Sci., vol. 145, no. 3, pp. 3029–3085, 2025. https://doi.org/10.32604/cmes.2025.073705



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.