Search Results (12)
  • Open Access

    ARTICLE

    From Hardening to Understanding: Adversarial Training vs. CF-Aug for Explainable Cyber-Threat Detection System

    Malik Al-Essa1,*, Mohammad Qatawneh2,1, Ahmad Sami Al-Shamayleh3, Orieb Abualghanam1, Wesam Almobaideen4,1

    CMC-Computers, Materials & Continua, Vol.87, No.3, 2026, DOI:10.32604/cmc.2026.076608 - 09 April 2026

Abstract Machine Learning (ML) intrusion detection systems (IDS) are vulnerable to adversarial manipulation: small, protocol-valid perturbations can push samples across brittle decision boundaries. We study two complementary remedies that reshape the learner in distinct ways. Adversarial Training (AT) exposes the model to worst-case, in-threat perturbations during learning to thicken local margins; Counterfactual Augmentation (CF-Aug) adds near-boundary exemplars that are explicitly constrained to be feasible, causally consistent, and operationally meaningful for defenders. The main goal of this work is to investigate and compare how AT and CF-Aug reshape the decision surface of the IDS. eXplainable Artificial Intelligence…

  • Open Access

    REVIEW

    Survey of AI-Based Threat Detection for Illicit Web Ecosystems: Models, Modalities, and Emerging Trends

    Jaeho Hwang1, Moohong Min2,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.3, 2026, DOI:10.32604/cmes.2026.078940 - 30 March 2026

Abstract Illicit web ecosystems, encompassing phishing, illegal online gambling, scam platforms, and malicious advertising, have rapidly expanded in scale and complexity, creating severe social, financial, and cybersecurity risks. Traditional rule-based and blacklist-driven detection approaches struggle to cope with polymorphic, multilingual, and adversarially manipulated threats, resulting in increasing demand for Artificial Intelligence (AI)-based solutions. This review provides a comprehensive synthesis of research on AI-driven threat detection for illicit web environments. It surveys detection models across multiple modalities, including text-based analysis of Uniform Resource Locator (URL) and HyperText Markup Language (HTML), vision-based recognition of webpage layouts and logos,…

  • Open Access

    ARTICLE

    Model Agnostic Meta Learning Ensemble Based Prediction of Motor Imagery Tasks Using EEG Signals

    Fazal Ur Rehman1, Yazeed Alkhrijah2, Syed Muhammad Usman3, Muhammad Irfan1,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.2, 2026, DOI:10.32604/cmes.2026.076332 - 26 February 2026

Abstract Automated detection of Motor Imagery (MI) tasks is extremely useful for prosthetic arms and legs of stroke patients for their rehabilitation. Prediction of MI tasks can be performed with the help of Electroencephalogram (EEG) signals recorded by placing electrodes on the scalp of subjects; however, accurate prediction of MI tasks remains a challenge due to noise that is incurred during the EEG signal recording process, the extraction of a feature vector with high interclass variance, and accurate classification. The proposed method consists of preprocessing, feature extraction, and classification. First, EEG signals are denoised using a…

  • Open Access

    REVIEW

    The Transparency Revolution in Geohazard Science: A Systematic Review and Research Roadmap for Explainable Artificial Intelligence

    Moein Tosan1,*, Vahid Nourani2,3, Ozgur Kisi4,5,6, Yongqiang Zhang7, Sameh A. Kantoush8, Mekonnen Gebremichael9, Ruhollah Taghizadeh-Mehrjardi10, Jinhui Jeanne Huang11

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.1, 2026, DOI:10.32604/cmes.2025.074768 - 29 January 2026

Abstract The integration of machine learning (ML) into geohazard assessment has successfully instigated a paradigm shift, leading to the production of models that possess a level of predictive accuracy previously considered unattainable. However, the black-box nature of these systems presents a significant barrier, hindering their operational adoption, regulatory approval, and full scientific validation. This paper provides a systematic review and synthesis of the emerging field of explainable artificial intelligence (XAI) as applied to geohazard science (GeoXAI), a domain that aims to resolve the long-standing trade-off between model performance and interpretability. A rigorous synthesis of 87 foundational…

  • Open Access

    REVIEW

    Learning from Scarcity: A Review of Deep Learning Strategies for Cold-Start Energy Time-Series Forecasting

    Jihoon Moon*

    CMES-Computer Modeling in Engineering & Sciences, Vol.146, No.1, 2026, DOI:10.32604/cmes.2025.071052 - 29 January 2026

Abstract Predicting the behavior of renewable energy systems requires models capable of generating accurate forecasts from limited historical data, a challenge that becomes especially pronounced when commissioning new facilities where operational records are scarce. This review aims to synthesize recent progress in data-efficient deep learning approaches for addressing such “cold-start” forecasting problems. It primarily covers three interrelated domains—solar photovoltaic (PV), wind power, and electrical load forecasting—where data scarcity and operational variability are most critical, while also including representative studies on hydropower and carbon emission prediction to provide a broader systems perspective. To this end, we examined…

  • Open Access

    ARTICLE

    An Explainable Deep Learning Framework for Kidney Cancer Classification Using VGG16 and Layer-Wise Relevance Propagation on CT Images

    Asma Batool1, Fahad Ahmed1, Naila Sammar Naz1, Ayman Altameem2, Ateeq Ur Rehman3,4, Khan Muhammad Adnan5,*, Ahmad Almogren6,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 4129-4152, 2025, DOI:10.32604/cmes.2025.073149 - 23 December 2025

Abstract Early and accurate cancer diagnosis through medical imaging is crucial for guiding treatment and enhancing patient survival. However, many state-of-the-art deep learning (DL) methods remain opaque and lack clinical interpretability. This paper presents an explainable artificial intelligence (XAI) framework that combines a fine-tuned Visual Geometry Group 16-layer network (VGG16) convolutional neural network with layer-wise relevance propagation (LRP) to deliver high-performance classification and transparent decision support. This approach is evaluated on the publicly available Kaggle kidney cancer imaging dataset, which comprises labeled cancerous and non-cancerous kidney scans. The proposed model achieved 98.75% overall accuracy, with precision,…

  • Open Access

    ARTICLE

    An IoT-Enabled Hybrid DRL-XAI Framework for Transparent Urban Water Management

    Qamar H. Naith1,*, H. Mancy2,3

    CMES-Computer Modeling in Engineering & Sciences, Vol.144, No.1, pp. 387-405, 2025, DOI:10.32604/cmes.2025.066917 - 31 July 2025

Abstract Effective and transparent water distribution is increasingly difficult to guarantee as urban infrastructure ages. With improved control systems in place to monitor leakage, pressure variability, and energy use, issues that previously went unnoticed are now being recognized. This paper presents a hybrid framework that combines Multi-Agent Deep Reinforcement Learning (MADRL) with Shapley Additive Explanations (SHAP)-based Explainable AI (XAI) for adaptive and interpretable water resource management. In the methodology, the agents perform decentralized learning of the control policies for the pumps and valves based on the real-time…

  • Open Access

    ARTICLE

    FSFS: A Novel Statistical Approach for Fair and Trustworthy Impactful Feature Selection in Artificial Intelligence Models

    Ali Hamid Farea1,*, Iman Askerzade1,2, Omar H. Alhazmi3, Savaş Takan4

    CMC-Computers, Materials & Continua, Vol.84, No.1, pp. 1457-1484, 2025, DOI:10.32604/cmc.2025.064872 - 09 June 2025

Abstract Feature selection (FS) is a pivotal pre-processing step in developing data-driven models, influencing reliability, performance and optimization. Although existing FS techniques can yield high-performance metrics for certain models, they do not invariably guarantee the extraction of the most critical or impactful features. Prior literature underscores the significance of equitable FS practices and has proposed diverse methodologies for the identification of appropriate features. However, the challenge of discerning the most relevant and influential features persists, particularly in the context of the exponential growth and heterogeneity of big data—a challenge that is increasingly salient in modern artificial…

  • Open Access

    ARTICLE

    Intrumer: A Multi Module Distributed Explainable IDS/IPS for Securing Cloud Environment

    Nazreen Banu A*, S.K.B. Sangeetha

    CMC-Computers, Materials & Continua, Vol.82, No.1, pp. 579-607, 2025, DOI:10.32604/cmc.2024.059805 - 03 January 2025

Abstract The increasing use of cloud-based devices has brought cybersecurity and unwanted network traffic to a critical point. Cloud environments pose significant challenges in maintaining privacy and security. Global approaches, such as Intrusion Detection Systems (IDS), have been developed to tackle these issues. However, most conventional IDS models struggle with unseen cyberattacks and complex high-dimensional data. This paper introduces a novel distributed, explainable, and heterogeneous transformer-based intrusion detection system, named INTRUMER, which offers balanced accuracy, reliability, and security in cloud settings through multiple modules working together. The traffic captured…

  • Open Access

    ARTICLE

    Explainable Artificial Intelligence (XAI) Model for Cancer Image Classification

Amit Singhal1, Krishna Kant Agrawal2, Angeles Quezada3, Adrian Rodriguez Aguiñaga4, Samantha Jiménez4, Satya Prakash Yadav5,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.141, No.1, pp. 401-441, 2024, DOI:10.32604/cmes.2024.051363 - 20 August 2024

Abstract The use of Explainable Artificial Intelligence (XAI) models becomes increasingly important for making decisions in smart healthcare environments. It is to make sure that decisions are based on trustworthy algorithms and that healthcare workers understand the decisions made by these algorithms. These models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence. Nevertheless, the intricate nature of the healthcare field necessitates the utilization of sophisticated models to classify cancer images. This research presents an advanced investigation of XAI models to classify cancer images. It describes the different levels of explainability…

Displaying results 1-10 of 12.