Search Results (90)
  • Open Access

    ARTICLE

    Building Regulatory Confidence with Human-in-the-Loop AI in Paperless GMP Validation

    Manaliben Amin*

    Journal on Artificial Intelligence, Vol.8, pp. 1-18, 2026, DOI:10.32604/jai.2026.073895 - 07 January 2026

Abstract: Artificial intelligence (AI) is steadily making its way into pharmaceutical validation, where it promises faster documentation, smarter testing strategies, and better handling of deviations. These gains are attractive, but in a regulated environment speed is never enough. Regulators want assurance that every system is reliable, that decisions are explainable, and that human accountability remains central. This paper sets out a Human-in-the-Loop (HITL) AI approach for Computer System Validation (CSV) and Computer Software Assurance (CSA). It relies on explainable AI (XAI) tools but keeps structured human review in place, so automation can be used without creating…

  • Open Access

    ARTICLE

    Log-Based Anomaly Detection of System Logs Using Graph Neural Network

    Eman Alsalmi, Abeer Alhuzali*, Areej Alhothali

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-20, 2026, DOI:10.32604/cmc.2025.071012 - 09 December 2025

Abstract: Log anomaly detection is essential for maintaining the reliability and security of large-scale networked systems. Most traditional techniques rely on log parsing in the preprocessing stage and utilize handcrafted features that limit their adaptability across various systems. In this study, we propose a hybrid model, BertGCN, that integrates BERT-based contextual embedding with Graph Convolutional Networks (GCNs) to identify anomalies in raw system logs, thereby eliminating the need for log parsing. The BERT module captures semantic representations of log messages, while the GCN models the structural relationships among log entries through a text-based graph. This combination…

  • Open Access

    ARTICLE

    A Deep Learning Framework for Heart Disease Prediction with Explainable Artificial Intelligence

    Muhammad Adil1, Nadeem Javaid1,*, Imran Ahmed2, Abrar Ahmed3, Nabil Alrajeh4,*

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-20, 2026, DOI:10.32604/cmc.2025.071215 - 10 November 2025

Abstract: Heart disease remains a leading cause of mortality worldwide, emphasizing the urgent need for reliable and interpretable predictive models to support early diagnosis and timely intervention. However, existing Deep Learning (DL) approaches often face several limitations, including inefficient feature extraction, class imbalance, suboptimal classification performance, and limited interpretability, which collectively hinder their deployment in clinical settings. To address these challenges, we propose a novel DL framework for heart disease prediction that integrates a comprehensive preprocessing pipeline with an advanced classification architecture. The preprocessing stage involves label encoding and feature scaling. To address the issue of…

  • Open Access

    ARTICLE

    Advances in Machine Learning for Explainable Intrusion Detection Using Imbalance Datasets in Cybersecurity with Harris Hawks Optimization

    Amjad Rehman1,*, Tanzila Saba1, Mona M. Jamjoom2, Shaha Al-Otaibi3, Muhammad I. Khan1

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-15, 2026, DOI:10.32604/cmc.2025.068958 - 10 November 2025

Abstract: Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with sophisticated data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized and model-ready inputs…

  • Open Access

    ARTICLE

    Graph-Based Intrusion Detection with Explainable Edge Classification Learning

    Jaeho Shin1, Jaekwang Kim2,*

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-26, 2026, DOI:10.32604/cmc.2025.068767 - 10 November 2025

Abstract: Network attacks have become a critical issue in the internet security domain. Detection methodologies based on artificial intelligence have attracted attention; however, recent studies have struggled to adapt to changing attack patterns and complex network environments. In addition, it is difficult to explain detection results produced by artificial intelligence logically. We propose a method for classifying network attacks using graph models while explaining the detection results. First, we reconstruct the network packet data into a graph structure. We then use a graph model to predict network attacks via edge classification. To explain the prediction results, we…

  • Open Access

    ARTICLE

LinguTimeX: A Framework for Multilingual CTC Detection Using Explainable AI and Natural Language Processing

    Omar Darwish1, Shorouq Al-Eidi2, Abdallah Al-Shorman1, Majdi Maabreh3, Anas Alsobeh4, Plamen Zahariev5, Yahya Tashtoush6,*

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-21, 2026, DOI:10.32604/cmc.2025.068266 - 10 November 2025

Abstract: Covert timing channels (CTC) exploit network resources to establish hidden communication pathways, posing significant risks to data security and policy compliance. Detecting such hidden and dangerous threats therefore remains a persistent security challenge. This paper proposes LinguTimeX, a new framework that combines natural language processing with explainable artificial intelligence (XAI), not only to detect CTC but also to provide insight into the decision process. LinguTimeX performs multidimensional feature extraction by fusing linguistic attributes with temporal network patterns to identify covert channels precisely. LinguTimeX demonstrates strong effectiveness in detecting CTC across…

  • Open Access

    ARTICLE

    Explainable Machine Learning for Phishing Detection: Bridging Technical Efficacy and Legal Accountability in Cyberspace Security

    MD Hamid Borkot Tulla1,*, MD Moniur Rahman Ratan2, Rashid MD Mamunur3, Abdullah Hil Safi Sohan4, MD Matiur Rahman5

    Journal of Cyber Security, Vol.7, pp. 675-691, 2025, DOI:10.32604/jcs.2025.074737 - 24 December 2025

Abstract: Phishing is considered one of the most widespread cybercrimes because it exploits both technical and human vulnerabilities to steal sensitive information. Traditional blacklist and heuristic-based defenses fail to detect emerging attack patterns; hence, intelligent and transparent detection systems are needed. This paper proposes an explainable machine learning framework that integrates predictive performance with regulatory accountability. Four models were trained and tested on a balanced dataset of 10,000 URLs, comprising 5000 phishing and 5000 legitimate samples, each characterized by 48 lexical and content-based features: Decision Tree, XGBoost, Logistic…

  • Open Access

    REVIEW

    Next-Generation Lightweight Explainable AI for Cybersecurity: A Review on Transparency and Real-Time Threat Mitigation

    Khulud Salem Alshudukhi1,*, Sijjad Ali2, Mamoona Humayun3,*, Omar Alruwaili4

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 3029-3085, 2025, DOI:10.32604/cmes.2025.073705 - 23 December 2025

Abstract: Problem: The integration of Artificial Intelligence (AI) into cybersecurity, while enhancing threat detection, is hampered by the "black box" nature of complex models, eroding trust, accountability, and regulatory compliance. Explainable AI (XAI) aims to resolve this opacity but introduces a critical new vulnerability: the adversarial exploitation of model explanations themselves. Gap: Current research lacks a comprehensive synthesis of this dual role of XAI in cybersecurity, as both a tool for transparency and a potential attack vector. There is a pressing need to systematically analyze the trade-offs between interpretability and security, evaluate defense mechanisms, and outline a…

  • Open Access

    ARTICLE

    An Explainable Deep Learning Framework for Kidney Cancer Classification Using VGG16 and Layer-Wise Relevance Propagation on CT Images

    Asma Batool1, Fahad Ahmed1, Naila Sammar Naz1, Ayman Altameem2, Ateeq Ur Rehman3,4, Khan Muhammad Adnan5,*, Ahmad Almogren6,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 4129-4152, 2025, DOI:10.32604/cmes.2025.073149 - 23 December 2025

Abstract: Early and accurate cancer diagnosis through medical imaging is crucial for guiding treatment and enhancing patient survival. However, many state-of-the-art deep learning (DL) methods remain opaque and lack clinical interpretability. This paper presents an explainable artificial intelligence (XAI) framework that combines a fine-tuned Visual Geometry Group 16-layer network (VGG16) convolutional neural network with layer-wise relevance propagation (LRP) to deliver high-performance classification and transparent decision support. This approach is evaluated on the publicly available Kaggle kidney cancer imaging dataset, which comprises labeled cancerous and non-cancerous kidney scans. The proposed model achieved 98.75% overall accuracy, with precision…

  • Open Access

    ARTICLE

PPG-Based Digital Biomarker for Diabetes Detection with Multiset Spatiotemporal Feature Fusion and XAI

    Mubashir Ali1,2, Jingzhen Li1, Zedong Nie1,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 4153-4177, 2025, DOI:10.32604/cmes.2025.073048 - 23 December 2025

Abstract: Diabetes imposes a substantial burden on global healthcare systems. Worldwide, nearly half of individuals with diabetes remain undiagnosed, while conventional diagnostic techniques are often invasive, painful, and expensive. In this study, we propose a noninvasive approach for diabetes detection using photoplethysmography (PPG), which is widely integrated into modern wearable devices. First, we derived velocity plethysmography (VPG) and acceleration plethysmography (APG) signals from PPG to construct multi-channel waveform representations. Second, we introduced a novel multiset spatiotemporal feature fusion framework that integrates hand-crafted temporal, statistical, and nonlinear features with recursive feature elimination and deep feature extraction using…

Displaying results 1-10 of 90 (page 1).