Search Results (26)
  • Open Access

    ARTICLE

    Enhancing Anomaly Detection with Causal Reasoning and Semantic Guidance

    Weishan Gao1,2, Ye Wang1,2, Xiaoyin Wang1,2, Xiaochuan Jing1,2,*

    CMC-Computers, Materials & Continua, Vol.86, No.3, 2026, DOI:10.32604/cmc.2025.073850 - 12 January 2026

    Abstract In the field of intelligent surveillance, weakly supervised video anomaly detection (WSVAD) has garnered widespread attention as a key technology that identifies anomalous events using only video-level labels. Although multiple instance learning (MIL) has long dominated WSVAD, its reliance solely on video-level labels without semantic grounding hinders a fine-grained understanding of visually similar yet semantically distinct events. In addition, insufficient temporal modeling obscures causal relationships between events, making anomaly decisions reactive rather than reasoning-based. To overcome these limitations, this paper proposes an adaptive knowledge-based guidance method that integrates external structured…
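
    Since the abstract is truncated, the sketch below only illustrates the conventional MIL baseline it contrasts against: each video is a bag of segment features, and the video-level anomaly score is the top-k mean of per-segment scores trained with video-level labels alone. The feature dimension, k, and the dummy data are assumptions; this is not the paper's adaptive knowledge-guided method.

```python
# Minimal sketch of a MIL baseline for weakly supervised video anomaly
# detection: only video-level labels are available, and the bag score is
# the top-k mean of per-segment anomaly scores. Sizes are assumptions.
import torch
import torch.nn as nn

class MILScorer(nn.Module):
    def __init__(self, feat_dim: int = 1024, k: int = 3):
        super().__init__()
        self.k = k
        self.segment_scorer = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, 1), nn.Sigmoid(),
        )

    def forward(self, segments: torch.Tensor) -> torch.Tensor:
        # segments: (batch, num_segments, feat_dim)
        scores = self.segment_scorer(segments).squeeze(-1)  # per-segment scores
        topk = scores.topk(self.k, dim=1).values             # strongest segments
        return topk.mean(dim=1)                              # video-level score

model = MILScorer()
video_feats = torch.randn(4, 32, 1024)         # 4 dummy videos, 32 segments each
video_labels = torch.tensor([1., 0., 1., 0.])  # video-level labels only
loss = nn.functional.binary_cross_entropy(model(video_feats), video_labels)
loss.backward()
```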

  • Open Access

    ARTICLE

    Building Regulatory Confidence with Human-in-the-Loop AI in Paperless GMP Validation

    Manaliben Amin*

    Journal on Artificial Intelligence, Vol.8, pp. 1-18, 2026, DOI:10.32604/jai.2026.073895 - 07 January 2026

    Abstract Artificial intelligence (AI) is steadily making its way into pharmaceutical validation, where it promises faster documentation, smarter testing strategies, and better handling of deviations. These gains are attractive, but in a regulated environment speed is never enough. Regulators want assurance that every system is reliable, that decisions are explainable, and that human accountability remains central. This paper sets out a Human-in-the-Loop (HITL) AI approach for Computer System Validation (CSV) and Computer Software Assurance (CSA). It relies on explainable AI (XAI) tools but keeps structured human review in place, so automation can be used without creating…

  • Open Access

    ARTICLE

    Advances in Machine Learning for Explainable Intrusion Detection Using Imbalance Datasets in Cybersecurity with Harris Hawks Optimization

    Amjad Rehman1,*, Tanzila Saba1, Mona M. Jamjoom2, Shaha Al-Otaibi3, Muhammad I. Khan1

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-15, 2026, DOI:10.32604/cmc.2025.068958 - 10 November 2025

    Abstract Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with sophisticated data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized and model-ready inputs.…
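
    As a rough illustration of the preprocessing step this abstract names, the sketch below chains RobustScaler and QuantileTransformer ahead of a placeholder classifier. The synthetic data, feature count, and RandomForest stand-in are assumptions, not the paper's Harris Hawks-optimised pipeline.

```python
# Minimal sketch: RobustScaler damps outliers, QuantileTransformer
# normalises skewed feature distributions, then a placeholder multi-class
# classifier is fit. Data and downstream model are assumptions.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler, QuantileTransformer
from sklearn.ensemble import RandomForestClassifier

X = np.random.lognormal(size=(500, 20))   # skewed, outlier-prone dummy features
y = np.random.randint(0, 5, size=500)     # dummy normal/attack classes

pipeline = Pipeline([
    ("robust", RobustScaler()),                                  # median/IQR scaling
    ("quantile", QuantileTransformer(output_distribution="normal",
                                     n_quantiles=200)),
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
pipeline.fit(X, y)
print(pipeline.score(X, y))
```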

  • Open Access

    REVIEW

    Next-Generation Lightweight Explainable AI for Cybersecurity: A Review on Transparency and Real-Time Threat Mitigation

    Khulud Salem Alshudukhi1,*, Sijjad Ali2, Mamoona Humayun3,*, Omar Alruwaili4

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 3029-3085, 2025, DOI:10.32604/cmes.2025.073705 - 23 December 2025

    Abstract Problem: The integration of Artificial Intelligence (AI) into cybersecurity, while enhancing threat detection, is hampered by the “black box” nature of complex models, eroding trust, accountability, and regulatory compliance. Explainable AI (XAI) aims to resolve this opacity but introduces a critical new vulnerability: the adversarial exploitation of model explanations themselves. Gap: Current research lacks a comprehensive synthesis of this dual role of XAI in cybersecurity—as both a tool for transparency and a potential attack vector. There is a pressing need to systematically analyze the trade-offs between interpretability and security, evaluate defense mechanisms, and outline a…

  • Open Access

    ARTICLE

    An Explainable Deep Learning Framework for Kidney Cancer Classification Using VGG16 and Layer-Wise Relevance Propagation on CT Images

    Asma Batool1, Fahad Ahmed1, Naila Sammar Naz1, Ayman Altameem2, Ateeq Ur Rehman3,4, Khan Muhammad Adnan5,*, Ahmad Almogren6,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 4129-4152, 2025, DOI:10.32604/cmes.2025.073149 - 23 December 2025

    Abstract Early and accurate cancer diagnosis through medical imaging is crucial for guiding treatment and enhancing patient survival. However, many state-of-the-art deep learning (DL) methods remain opaque and lack clinical interpretability. This paper presents an explainable artificial intelligence (XAI) framework that combines a fine-tuned Visual Geometry Group 16-layer network (VGG16) convolutional neural network with layer-wise relevance propagation (LRP) to deliver high-performance classification and transparent decision support. This approach is evaluated on the publicly available Kaggle kidney cancer imaging dataset, which comprises labeled cancerous and non-cancerous kidney scans. The proposed model achieved 98.75% overall accuracy, with precision, …
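
    A minimal sketch of the two ingredients this abstract names: torchvision's VGG16 adapted to a two-class head, and a hand-written LRP-epsilon pass over the dense classifier layers. The dummy input, the untrained head, and restricting relevance propagation to the classifier head (rather than the paper's full pixel-level LRP) are simplifying assumptions.

```python
# Minimal sketch: adapt VGG16 for binary (cancerous vs. non-cancerous)
# classification and run an LRP-epsilon pass over the dense head to see
# which pooled convolutional features carry relevance.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=None)          # load IMAGENET1K_V1 weights to fine-tune
model.classifier[6] = nn.Linear(4096, 2)    # replace 1000-way head with 2 classes
model.eval()

x = torch.randn(1, 3, 224, 224)             # dummy CT slice at ImageNet input size
feats = torch.flatten(model.avgpool(model.features(x)), 1)  # pooled conv features

def lrp_epsilon_dense(layers, a0, target, eps=1e-6):
    """LRP-epsilon through a stack of Linear/ReLU/Dropout layers (eval mode)."""
    activations = [a0]
    for layer in layers:                                 # forward pass, cache inputs
        activations.append(layer(activations[-1]))
    relevance = torch.zeros_like(activations[-1])
    relevance[0, target] = activations[-1][0, target]    # start from target logit
    for layer, a in zip(reversed(layers), reversed(activations[:-1])):
        if isinstance(layer, nn.Linear):
            z = a @ layer.weight.T + layer.bias + eps    # stabilised denominators
            s = relevance / z
            relevance = a * (s @ layer.weight)           # redistribute relevance
        # ReLU/Dropout: relevance passes through unchanged in eval mode
    return relevance

with torch.no_grad():
    relevance = lrp_epsilon_dense(list(model.classifier), feats, target=1)
print(relevance.shape)   # relevance over the 25088 pooled features
```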

  • Open Access

    ARTICLE

    PPG Based Digital Biomarker for Diabetes Detection with Multiset Spatiotemporal Feature Fusion and XAI

    Mubashir Ali1,2, Jingzhen Li1, Zedong Nie1,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 4153-4177, 2025, DOI:10.32604/cmes.2025.073048 - 23 December 2025

    Abstract Diabetes imposes a substantial burden on global healthcare systems. Worldwide, nearly half of individuals with diabetes remain undiagnosed, while conventional diagnostic techniques are often invasive, painful, and expensive. In this study, we propose a noninvasive approach for diabetes detection using photoplethysmography (PPG), which is widely integrated into modern wearable devices. First, we derived velocity plethysmography (VPG) and acceleration plethysmography (APG) signals from PPG to construct multi-channel waveform representations. Second, we introduced a novel multiset spatiotemporal feature fusion framework that integrates hand-crafted temporal, statistical, and nonlinear features with recursive feature elimination and deep feature extraction using…
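
    In the usual formulation, the VPG and APG channels mentioned here are the first and second time derivatives of the PPG waveform; below is a minimal sketch under that assumption, with a synthetic signal and an assumed 125 Hz sampling rate.

```python
# Minimal sketch of the signal-derivation step: VPG and APG are the first
# and second time derivatives of PPG, stacked into a multi-channel input.
import numpy as np

fs = 125                                       # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 2.4 * t)  # toy PPG

vpg = np.gradient(ppg, 1 / fs)                 # first derivative  -> VPG
apg = np.gradient(vpg, 1 / fs)                 # second derivative -> APG

multichannel = np.stack([ppg, vpg, apg])       # (3, n_samples) model input
print(multichannel.shape)
```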

  • Open Access

    REVIEW

    A Systematic Review of Multimodal Fusion and Explainable AI Applications in Breast Cancer Diagnosis

    Deema Alzamil1,2,*, Bader Alkhamees2, Mohammad Mehedi Hassan2,3

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 2971-3027, 2025, DOI:10.32604/cmes.2025.070867 - 23 December 2025

    Abstract Breast cancer diagnosis relies heavily on many kinds of information from diverse sources—like mammogram images, ultrasound scans, patient records, and genetic tests—but most AI tools look at only one of these at a time, which limits their ability to produce accurate and comprehensive decisions. In recent years, multimodal learning has emerged, enabling the integration of heterogeneous data to improve performance and diagnostic accuracy. However, doctors cannot always see how or why these AI tools make their choices, which is a significant bottleneck for their reliability and adoption in clinical settings. Hence, people are adding…

  • Open Access

    REVIEW

    Deep Learning and Federated Learning in Human Activity Recognition with Sensor Data: A Comprehensive Review

    Farhad Mortezapour Shiri*, Thinagaran Perumal, Norwati Mustapha, Raihani Mohamed

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.2, pp. 1389-1485, 2025, DOI:10.32604/cmes.2025.071858 - 26 November 2025

    Abstract Human Activity Recognition (HAR) represents a rapidly advancing research domain, propelled by continuous developments in sensor technologies and the Internet of Things (IoT). Deep learning has become the dominant paradigm in sensor-based HAR systems, offering significant advantages over traditional machine learning methods by eliminating manual feature extraction, enhancing recognition accuracy for complex activities, and enabling the exploitation of unlabeled data through generative models. This paper provides a comprehensive review of recent advancements and emerging trends in deep learning models developed for sensor-based human activity recognition (HAR) systems. We begin with an overview of fundamental HAR…

  • Open Access

    ARTICLE

    An Impact-Aware and Taxonomy-Driven Explainable Machine Learning Framework with Edge Computing for Security in Industrial IoT–Cyber Physical Systems

    Tamara Zhukabayeva1,2, Zulfiqar Ahmad1,3,*, Nurbolat Tasbolatuly4, Makpal Zhartybayeva1, Yerik Mardenov1,4, Nurdaulet Karabayev1,*, Dilaram Baumuratova1,4

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.2, pp. 2573-2599, 2025, DOI:10.32604/cmes.2025.070426 - 26 November 2025

    Abstract The Industrial Internet of Things (IIoT), combined with the Cyber-Physical Systems (CPS), is transforming industrial automation but also poses great cybersecurity threats because of the complexity and connectivity of the systems. There is a lack of explainability, challenges with imbalanced attack classes, and limited consideration of practical edge–cloud deployment strategies in prior works. In this study, we propose an Impact-Aware Taxonomy-Driven Machine Learning Framework with Edge Deployment and SHapley Additive exPlanations (SHAP)-based Explainable AI (XAI) for attack detection and classification in IIoT-CPS settings. It includes not only unsupervised clustering (K-Means and DBSCAN) to extract…
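
    A minimal sketch of two components this abstract names: unsupervised clustering of traffic features with K-Means and DBSCAN, and SHAP attributions for a supervised attack classifier. The synthetic traffic features and the RandomForest stand-in are assumptions, not the paper's impact-aware framework.

```python
# Minimal sketch: cluster dummy IIoT traffic features, then fit a
# placeholder classifier and compute SHAP values for its predictions.
import numpy as np
import shap
from sklearn.cluster import KMeans, DBSCAN
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(300, 8)                     # dummy IIoT traffic features
y = np.random.randint(0, 3, size=300)          # dummy attack classes

kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
dbscan_labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)   # -1 marks noise

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X[:20])    # per-feature contributions
print(kmeans_labels[:10], dbscan_labels[:10])
```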

  • Open Access

    ARTICLE

    Interpretable Vulnerability Detection in LLMs: A BERT-Based Approach with SHAP Explanations

    Nouman Ahmad*, Changsheng Zhang

    CMC-Computers, Materials & Continua, Vol.85, No.2, pp. 3321-3334, 2025, DOI:10.32604/cmc.2025.067044 - 23 September 2025

    Abstract Source code vulnerabilities present significant security threats, necessitating effective detection techniques. Rigid rule-sets and pattern matching are the foundation of traditional static analysis tools, which drown developers in false positives and miss context-sensitive vulnerabilities. Large Language Models (LLMs) like BERT, in particular, are examples of artificial intelligence (AI) that exhibit promise but frequently lack transparency. In order to overcome the issues with model interpretability, this work suggests a BERT-based LLM strategy for vulnerability detection that incorporates Explainable AI (XAI) methods like SHAP and attention heatmaps. Furthermore, to ensure auditable and comprehensible choices, we present a…
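
    A minimal sketch of one explanation channel this abstract mentions: attention heatmaps from a BERT sequence classifier run over a code snippet. Here bert-base-uncased with an untrained two-class head is a placeholder for the paper's fine-tuned model, and the SHAP analysis is not reproduced.

```python
# Minimal sketch: classify a toy code snippet with a BERT sequence
# classifier and extract a token-level attention heatmap. The checkpoint
# is a placeholder; its classification head is not trained.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-uncased"                               # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
model.eval()

code = "strcpy(buffer, user_input);"                     # toy snippet to classify
inputs = tokenizer(code, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

probs = outputs.logits.softmax(dim=-1)                   # class probabilities
heatmap = outputs.attentions[-1].mean(dim=1)[0]          # last layer, heads averaged
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(probs, list(zip(tokens, heatmap.sum(dim=0).tolist())))
```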

Displaying results 1-10 of 26.