Advanced Search
Search Results (24)
  • Open Access

    ARTICLE

    Advances in Machine Learning for Explainable Intrusion Detection Using Imbalance Datasets in Cybersecurity with Harris Hawks Optimization

    Amjad Rehman, Tanzila Saba, Mona M. Jamjoom, Shaha Al-Otaibi, Muhammad I. Khan

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-15, 2026, DOI:10.32604/cmc.2025.068958 - 10 November 2025

    Abstract Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with sophisticated data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized and model-ready inputs.…
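
    The preprocessing step named in this abstract maps directly onto scikit-learn. Below is a minimal sketch, assuming a tabular feature matrix; the synthetic data and the choice of classifier are illustrative stand-ins, not the paper's actual pipeline.

```python
# Sketch of the preprocessing described above: RobustScaler to damp outliers,
# then QuantileTransformer to normalize skewed feature distributions.
# The classifier and synthetic data are illustrative, not the paper's setup.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler, QuantileTransformer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.lognormal(size=(1000, 20))   # skewed, outlier-prone flow features
y = rng.integers(0, 5, size=1000)    # multi-class attack labels

model = make_pipeline(
    RobustScaler(),                  # median/IQR scaling, robust to outliers
    QuantileTransformer(output_distribution="normal", random_state=0),
    RandomForestClassifier(class_weight="balanced", random_state=0),
)
model.fit(X, y)
```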

  • Open Access

    REVIEW

    Next-Generation Lightweight Explainable AI for Cybersecurity: A Review on Transparency and Real-Time Threat Mitigation

    Khulud Salem Alshudukhi, Sijjad Ali, Mamoona Humayun, Omar Alruwaili

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 3029-3085, 2025, DOI:10.32604/cmes.2025.073705 - 23 December 2025

    Abstract Problem: The integration of Artificial Intelligence (AI) into cybersecurity, while enhancing threat detection, is hampered by the “black box” nature of complex models, eroding trust, accountability, and regulatory compliance. Explainable AI (XAI) aims to resolve this opacity but introduces a critical new vulnerability: the adversarial exploitation of model explanations themselves. Gap: Current research lacks a comprehensive synthesis of this dual role of XAI in cybersecurity, as both a tool for transparency and a potential attack vector. There is a pressing need to systematically analyze the trade-offs between interpretability and security, evaluate defense mechanisms, and outline a…

  • Open Access

    ARTICLE

    An Explainable Deep Learning Framework for Kidney Cancer Classification Using VGG16 and Layer-Wise Relevance Propagation on CT Images

    Asma Batool, Fahad Ahmed, Naila Sammar Naz, Ayman Altameem, Ateeq Ur Rehman, Khan Muhammad Adnan, Ahmad Almogren

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 4129-4152, 2025, DOI:10.32604/cmes.2025.073149 - 23 December 2025

    Abstract Early and accurate cancer diagnosis through medical imaging is crucial for guiding treatment and enhancing patient survival. However, many state-of-the-art deep learning (DL) methods remain opaque and lack clinical interpretability. This paper presents an explainable artificial intelligence (XAI) framework that combines a fine-tuned Visual Geometry Group 16-layer network (VGG16) convolutional neural network with layer-wise relevance propagation (LRP) to deliver high-performance classification and transparent decision support. This approach is evaluated on the publicly available Kaggle kidney cancer imaging dataset, which comprises labeled cancerous and non-cancerous kidney scans. The proposed model achieved 98.75% overall accuracy, with precision, …
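
    The architecture described here can be sketched in PyTorch: a pretrained VGG16 backbone with a replaced binary head, attributed with Captum's LRP. Dataset wiring and the training loop are omitted, and whether Captum's built-in LRP rules cover every torchvision VGG16 layer is an assumption to verify, not a claim from the paper.

```python
# Sketch of the setup described above: fine-tunable VGG16 with a two-class
# head, plus an LRP relevance map via Captum. Layer support for LRP on the
# full torchvision VGG16 graph is an assumption to verify.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights
from captum.attr import LRP

model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                  # freeze convolutional backbone
model.classifier[6] = nn.Linear(4096, 2)     # cancerous vs. non-cancerous head
model.eval()

ct_batch = torch.randn(1, 3, 224, 224)       # placeholder preprocessed CT slice
relevance = LRP(model).attribute(ct_batch, target=1)  # pixel-wise relevance
print(relevance.shape)                       # torch.Size([1, 3, 224, 224])
```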

  • Open Access

    ARTICLE

    PPG Based Digital Biomarker for Diabetes Detection with Multiset Spatiotemporal Feature Fusion and XAI

    Mubashir Ali, Jingzhen Li, Zedong Nie

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 4153-4177, 2025, DOI:10.32604/cmes.2025.073048 - 23 December 2025

    Abstract Diabetes imposes a substantial burden on global healthcare systems. Worldwide, nearly half of individuals with diabetes remain undiagnosed, while conventional diagnostic techniques are often invasive, painful, and expensive. In this study, we propose a noninvasive approach for diabetes detection using photoplethysmography (PPG), which is widely integrated into modern wearable devices. First, we derived velocity plethysmography (VPG) and acceleration plethysmography (APG) signals from PPG to construct multi-channel waveform representations. Second, we introduced a novel multiset spatiotemporal feature fusion framework that integrates hand-crafted temporal, statistical, and nonlinear features with recursive feature elimination and deep feature extraction using…
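
    The first step this abstract describes is standard signal processing: VPG and APG are the first and second time derivatives of the PPG waveform. A minimal NumPy sketch follows; the sampling rate and the synthetic trace are illustrative assumptions.

```python
# Derive velocity (VPG) and acceleration (APG) waveforms from a PPG signal
# as its first and second time derivatives, then stack them as channels.
# The sampling rate and sinusoidal stand-in signal are assumptions.
import numpy as np

fs = 125.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)            # stand-in for a real PPG trace

vpg = np.gradient(ppg, 1 / fs)               # first derivative: velocity PPG
apg = np.gradient(vpg, 1 / fs)               # second derivative: acceleration PPG

channels = np.stack([ppg, vpg, apg])         # multi-channel waveform representation
print(channels.shape)                        # (3, 1250)
```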

  • Open Access

    REVIEW

    A Systematic Review of Multimodal Fusion and Explainable AI Applications in Breast Cancer Diagnosis

    Deema Alzamil, Bader Alkhamees, Mohammad Mehedi Hassan

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 2971-3027, 2025, DOI:10.32604/cmes.2025.070867 - 23 December 2025

    Abstract Breast cancer diagnosis relies heavily on information from diverse sources, such as mammogram images, ultrasound scans, patient records, and genetic tests, but most AI tools consider only one of these at a time, which limits their ability to produce accurate and comprehensive decisions. In recent years, multimodal learning has emerged, enabling the integration of heterogeneous data to improve performance and diagnostic accuracy. However, clinicians often cannot see how or why these AI tools reach their decisions, which is a significant bottleneck for their reliability and clinical adoption. Hence, researchers are adding…
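
    As a toy illustration of the feature-level fusion idea this review surveys: embeddings from an imaging model can be concatenated with tabular clinical features before a single classifier. All shapes, data, and the classifier below are invented for illustration, not drawn from the review.

```python
# Toy feature-level (early) fusion: concatenate imaging embeddings with
# clinical features, then train one classifier on the fused vector.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
img_emb = rng.normal(size=(200, 128))        # e.g., mammogram CNN embeddings
clinical = rng.normal(size=(200, 12))        # e.g., patient-record features
y = rng.integers(0, 2, size=200)             # benign vs. malignant labels

fused = np.concatenate([img_emb, clinical], axis=1)   # feature-level fusion
clf = LogisticRegression(max_iter=1000).fit(fused, y)
```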

  • Open Access

    REVIEW

    Deep Learning and Federated Learning in Human Activity Recognition with Sensor Data: A Comprehensive Review

    Farhad Mortezapour Shiri, Thinagaran Perumal, Norwati Mustapha, Raihani Mohamed

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.2, pp. 1389-1485, 2025, DOI:10.32604/cmes.2025.071858 - 26 November 2025

    Abstract Human Activity Recognition (HAR) represents a rapidly advancing research domain, propelled by continuous developments in sensor technologies and the Internet of Things (IoT). Deep learning has become the dominant paradigm in sensor-based HAR systems, offering significant advantages over traditional machine learning methods by eliminating manual feature extraction, enhancing recognition accuracy for complex activities, and enabling the exploitation of unlabeled data through generative models. This paper provides a comprehensive review of recent advancements and emerging trends in deep learning models developed for sensor-based HAR systems. We begin with an overview of fundamental HAR…
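
    Sensor-based HAR pipelines of the kind this review covers commonly segment raw inertial streams into fixed-length overlapping windows before a deep model sees them. The sketch below shows that preprocessing step; the window length and overlap are common but arbitrary choices, not values from the review.

```python
# Segment a (timesteps, channels) sensor stream into overlapping windows,
# the usual input format for deep HAR models.
import numpy as np

def sliding_windows(signal: np.ndarray, width: int, step: int) -> np.ndarray:
    """Split a (timesteps, channels) stream into overlapping windows."""
    starts = range(0, len(signal) - width + 1, step)
    return np.stack([signal[s:s + width] for s in starts])

stream = np.random.randn(10_000, 3)          # tri-axial accelerometer stand-in
windows = sliding_windows(stream, width=128, step=64)   # 50% overlap
print(windows.shape)                         # (155, 128, 3) -> model input
```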

  • Open Access

    ARTICLE

    An Impact-Aware and Taxonomy-Driven Explainable Machine Learning Framework with Edge Computing for Security in Industrial IoT–Cyber Physical Systems

    Tamara Zhukabayeva, Zulfiqar Ahmad, Nurbolat Tasbolatuly, Makpal Zhartybayeva, Yerik Mardenov, Nurdaulet Karabayev, Dilaram Baumuratova

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.2, pp. 2573-2599, 2025, DOI:10.32604/cmes.2025.070426 - 26 November 2025

    Abstract The Industrial Internet of Things (IIoT), combined with Cyber-Physical Systems (CPS), is transforming industrial automation but also poses serious cybersecurity threats because of the complexity and connectivity of these systems. Prior works lack explainability, struggle with imbalanced attack classes, and give limited consideration to practical edge–cloud deployment strategies. In this study, we propose an Impact-Aware, Taxonomy-Driven Machine Learning Framework with Edge Deployment and SHapley Additive exPlanations (SHAP)-based Explainable AI (XAI) for attack detection and classification in IIoT-CPS settings. It includes not only unsupervised clustering (K-Means and DBSCAN) to extract…
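
    The unsupervised step named here is directly expressible with scikit-learn: cluster flow features with K-Means and DBSCAN to group traffic into candidate attack families. The feature matrix and parameter values below are placeholders, not the paper's.

```python
# Cluster (synthetic) IIoT flow features with K-Means and DBSCAN, the two
# algorithms the abstract names for taxonomy extraction.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, DBSCAN

rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.normal(size=(500, 16)))  # flow features

kmeans_labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
dbscan_labels = DBSCAN(eps=1.5, min_samples=10).fit_predict(X)  # -1 marks noise

print(np.unique(kmeans_labels), np.unique(dbscan_labels))
```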

  • Open Access

    ARTICLE

    Interpretable Vulnerability Detection in LLMs: A BERT-Based Approach with SHAP Explanations

    Nouman Ahmad, Changsheng Zhang

    CMC-Computers, Materials & Continua, Vol.85, No.2, pp. 3321-3334, 2025, DOI:10.32604/cmc.2025.067044 - 23 September 2025

    Abstract Source code vulnerabilities present significant security threats, necessitating effective detection techniques. Traditional static analysis tools are built on rigid rule-sets and pattern matching, which drown developers in false positives and miss context-sensitive vulnerabilities. Artificial intelligence (AI) approaches, in particular Large Language Models (LLMs) such as BERT, show promise but frequently lack transparency. To address model interpretability, this work proposes a BERT-based LLM strategy for vulnerability detection that incorporates Explainable AI (XAI) methods such as SHAP and attention heatmaps. Furthermore, to ensure auditable and comprehensible choices, we present a…
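
    A minimal sketch of the BERT-plus-SHAP pattern described here, assuming SHAP's documented support for Hugging Face text-classification pipelines. The checkpoint name is a hypothetical placeholder for a model fine-tuned on vulnerability labels, not a real published model.

```python
# Explain a BERT-style vulnerability classifier token-by-token with SHAP.
# "your-org/bert-vuln-detector" is a hypothetical fine-tuned checkpoint.
import shap
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="your-org/bert-vuln-detector",     # hypothetical checkpoint
    top_k=None,                              # return scores for every class
)

snippet = 'strcpy(buf, user_input);  /* unchecked copy */'
explainer = shap.Explainer(clf)              # SHAP infers a text masker
shap_values = explainer([snippet])           # per-token contribution scores
shap.plots.text(shap_values[0])              # highlight tokens driving the verdict
```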

  • Open Access

    ARTICLE

    Robust False Data Injection Identification Framework for Power Systems Using Explainable Deep Learning

    Ghadah Aldehim, Shakila Basheer, Ala Saleh Alluhaidan, Sapiah Sakri

    CMC-Computers, Materials & Continua, Vol.85, No.2, pp. 3599-3619, 2025, DOI:10.32604/cmc.2025.065643 - 23 September 2025

    Abstract Although the digitalization of power systems has expanded monitoring and control capabilities, it has also introduced new cyber-attack risks, chiefly False Data Injection (FDI) attacks. A successful FDI attack compromises sensor data and operations, which can lead to major disruptions, failures, and blackouts. In response to this challenge, this paper presents a reliable detection framework that leverages Bidirectional Long Short-Term Memory (Bi-LSTM) networks and employs explainability methods from Artificial Intelligence (AI). Not only does the proposed architecture detect falsified data with high accuracy, but it also…
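
    The detector architecture named here is straightforward to sketch in Keras: a bidirectional LSTM over windows of power-system measurements with a binary normal-vs-injected output. Layer sizes, the window shape, and the Keras framework choice are assumptions, not the paper's configuration.

```python
# Minimal Bi-LSTM binary classifier over measurement windows.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(60, 32)),            # 60 time steps x 32 sensor readings
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(1, activation="sigmoid"),   # probability the window is injected
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```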

  • Open Access

    ARTICLE

    An Efficient Explainable AI Model for Accurate Brain Tumor Detection Using MRI Images

    Fatma M. Talaat, Mohamed Salem, Mohamed Shehata, Warda M. Shaban

    CMES-Computer Modeling in Engineering & Sciences, Vol.144, No.2, pp. 2325-2358, 2025, DOI:10.32604/cmes.2025.067195 - 31 August 2025

    Abstract The diagnosis of brain tumors is a lengthy process that depends heavily on the expertise and skill of radiologists. Growing patient numbers have substantially increased data-processing volumes, making conventional methods both costly and inefficient. Recently, Artificial Intelligence (AI) has gained prominence for developing automated systems that can accurately diagnose or segment brain tumors in a shorter time frame. Many researchers have examined algorithms that provide both speed and accuracy in detecting and classifying brain tumors. This paper proposes a new AI-based model, called the Brain Tumor Detection (BTD)…

Displaying results 1–10 of 24 (page 1).