Search Results (7)
  • Open Access

    ARTICLE

    An Explainable Deep Learning Framework for Kidney Cancer Classification Using VGG16 and Layer-Wise Relevance Propagation on CT Images

    Asma Batool1, Fahad Ahmed1, Naila Sammar Naz1, Ayman Altameem2, Ateeq Ur Rehman3,4, Khan Muhammad Adnan5,*, Ahmad Almogren6,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 4129-4152, 2025, DOI:10.32604/cmes.2025.073149 - 23 December 2025

    Abstract Early and accurate cancer diagnosis through medical imaging is crucial for guiding treatment and enhancing patient survival. However, many state-of-the-art deep learning (DL) methods remain opaque and lack clinical interpretability. This paper presents an explainable artificial intelligence (XAI) framework that combines a fine-tuned Visual Geometry Group 16-layer network (VGG16) convolutional neural network with layer-wise relevance propagation (LRP) to deliver high-performance classification and transparent decision support. This approach is evaluated on the publicly available Kaggle kidney cancer imaging dataset, which comprises labeled cancerous and non-cancerous kidney scans. The proposed model achieved 98.75% overall accuracy, with precision, …
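
    A minimal sketch of the recipe this abstract names (an ImageNet-pretrained VGG16 with a replaced classification head, explained via LRP), assuming PyTorch and captum's LRP implementation. The two-class head and preprocessing are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from captum.attr import LRP

# Load an ImageNet-pretrained VGG16 and swap the final classifier layer
# for a binary (cancerous vs. non-cancerous) task -- a hypothetical setup.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)  # assumption: 2 output classes
model.eval()

# captum's LRP hooks require non-inplace activations.
for m in model.modules():
    if isinstance(m, nn.ReLU):
        m.inplace = False

# Standard ImageNet preprocessing; the paper's CT preprocessing may differ.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def explain(image, target_class: int) -> torch.Tensor:
    """Return per-pixel relevance scores for the given class."""
    x = preprocess(image).unsqueeze(0).requires_grad_(True)
    lrp = LRP(model)
    # Relevance has the input's shape; positive values mark pixels
    # that supported the target class.
    return lrp.attribute(x, target=target_class)
```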

  • Open Access

    ARTICLE

    An IoT-Enabled Hybrid DRL-XAI Framework for Transparent Urban Water Management

    Qamar H. Naith1,*, H. Mancy2,3

    CMES-Computer Modeling in Engineering & Sciences, Vol.144, No.1, pp. 387-405, 2025, DOI:10.32604/cmes.2025.066917 - 31 July 2025

    Abstract Effective and transparent water distribution is essential to maintaining reliable urban infrastructure. As control systems improve at detecting leakage, pressure variability, and energy inefficiency, issues that previously went unnoticed are now being recognized. This paper presents a hybrid framework that combines Multi-Agent Deep Reinforcement Learning (MADRL) with Shapley Additive Explanations (SHAP)-based Explainable AI (XAI) for adaptive and interpretable water resource management. In the methodology, the agents perform decentralized learning of the control policies for the pumps and valves based on the real-time…
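
    The SHAP side of such a framework is model-agnostic, so it can be sketched independently of the MADRL training loop. Below, a hedged illustration wraps a stand-in pump policy with shap.KernelExplainer; the policy function and feature names are hypothetical placeholders, not the paper's design.

```python
import numpy as np
import shap

FEATURES = ["pressure", "flow_rate", "tank_level", "energy_price"]  # hypothetical

def pump_policy(states: np.ndarray) -> np.ndarray:
    """Stand-in for a trained MADRL agent: maps states to a pump-speed action."""
    weights = np.array([0.6, -0.3, -0.5, -0.2])  # toy linear policy for illustration
    return states @ weights

# Background states; here synthetic, in practice drawn from operating data.
background = np.random.default_rng(0).normal(size=(100, 4))

# KernelExplainer is model-agnostic, so it can wrap any policy network.
explainer = shap.KernelExplainer(pump_policy, background)
state = np.array([[1.2, 0.4, -0.8, 0.1]])
shap_values = explainer.shap_values(state)

for name, val in zip(FEATURES, np.ravel(shap_values)):
    print(f"{name}: {val:+.3f}")  # each feature's contribution to the action
```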

  • Open Access

    ARTICLE

    FSFS: A Novel Statistical Approach for Fair and Trustworthy Impactful Feature Selection in Artificial Intelligence Models

    Ali Hamid Farea1,*, Iman Askerzade1,2, Omar H. Alhazmi3, Savaş Takan4

    CMC-Computers, Materials & Continua, Vol.84, No.1, pp. 1457-1484, 2025, DOI:10.32604/cmc.2025.064872 - 09 June 2025

    Abstract Feature selection (FS) is a pivotal pre-processing step in developing data-driven models, influencing reliability, performance and optimization. Although existing FS techniques can yield high-performance metrics for certain models, they do not invariably guarantee the extraction of the most critical or impactful features. Prior literature underscores the significance of equitable FS practices and has proposed diverse methodologies for the identification of appropriate features. However, the challenge of discerning the most relevant and influential features persists, particularly in the context of the exponential growth and heterogeneity of big data—a challenge that is increasingly salient in modern artificial…
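
    The FSFS method itself is not detailed in this truncated abstract, so the sketch below shows only a generic statistical feature-ranking baseline (mutual-information scoring in scikit-learn) to ground the terminology of impactful feature selection; it is not the paper's approach.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif

# A stock dataset stands in for the heterogeneous big data the abstract mentions.
data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names

# Mutual information scores each feature's statistical dependence on the label.
scores = mutual_info_classif(X, y, random_state=0)

# Rank features by score and keep, say, the top 5.
top = sorted(zip(names, scores), key=lambda p: p[1], reverse=True)[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```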

  • Open Access

    ARTICLE

    Intrumer: A Multi Module Distributed Explainable IDS/IPS for Securing Cloud Environment

    Nazreen Banu A*, S.K.B. Sangeetha

    CMC-Computers, Materials & Continua, Vol.82, No.1, pp. 579-607, 2025, DOI:10.32604/cmc.2024.059805 - 03 January 2025

    Abstract The increasing use of cloud-based devices has brought cybersecurity and unwanted network traffic to a critical point, and cloud environments pose significant challenges to maintaining privacy and security. Approaches such as Intrusion Detection Systems (IDS) have been developed to tackle these issues, but most conventional IDS models struggle with unseen cyberattacks and complex, high-dimensional data. This paper introduces a novel distributed, explainable, heterogeneous transformer-based intrusion detection system, named INTRUMER, which offers balanced accuracy, reliability, and security in cloud settings through multiple modules working together. The traffic captured…
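
    As a rough illustration of the transformer-based detection idea (not INTRUMER's multi-module design, which is only partially described above), the sketch below classifies flow-feature vectors with a tiny PyTorch transformer encoder; all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class TinyTrafficTransformer(nn.Module):
    def __init__(self, n_features: int = 32, d_model: int = 64, n_classes: int = 2):
        super().__init__()
        # Treat each flow feature as a token embedded into d_model dims.
        self.embed = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features) -> (batch, n_features, 1) token sequence
        h = self.embed(x.unsqueeze(-1))
        h = self.encoder(h)
        return self.head(h.mean(dim=1))  # pool tokens, then classify

model = TinyTrafficTransformer()
logits = model(torch.randn(8, 32))  # 8 flows, 32 features each
print(logits.shape)  # torch.Size([8, 2]): benign vs. attack scores
```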

  • Open Access

    ARTICLE

    Explainable Artificial Intelligence (XAI) Model for Cancer Image Classification

    Amit Singhal1, Krishna Kant Agrawal2, Angeles Quezada3, Adrian Rodriguez Aguiñaga4, Samantha Jiménez4, Satya Prakash Yadav5,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.141, No.1, pp. 401-441, 2024, DOI:10.32604/cmes.2024.051363 - 20 August 2024

    Abstract The use of Explainable Artificial Intelligence (XAI) models is becoming increasingly important for decision-making in smart healthcare environments, both to ensure that decisions rest on trustworthy algorithms and to help healthcare workers understand the decisions those algorithms make. These models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence. Nevertheless, the intricate nature of the healthcare field necessitates the utilization of sophisticated models to classify cancer images. This research presents an advanced investigation of XAI models to classify cancer images. It describes the different levels of explainability…
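
    One widely used XAI technique for image classifiers of this kind is Grad-CAM. The sketch below applies captum's LayerGradCam to a stock ResNet-18 as an illustrative stand-in, since the survey's specific models are not listed here; the backbone and target layer are assumptions.

```python
import torch
from torchvision import models
from captum.attr import LayerGradCam, LayerAttribution

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.eval()

# Grad-CAM localizes class evidence at a chosen convolutional layer.
gradcam = LayerGradCam(model, model.layer4)

x = torch.randn(1, 3, 224, 224)          # stand-in for a cancer image tensor
target = model(x).argmax(dim=1).item()   # explain the predicted class
attr = gradcam.attribute(x, target=target)

# Upsample the coarse layer map back to input resolution for overlay.
heatmap = LayerAttribution.interpolate(attr, (224, 224))
print(heatmap.shape)  # torch.Size([1, 1, 224, 224])
```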

  • Open Access

    ARTICLE

    XA-GANomaly: An Explainable Adaptive Semi-Supervised Learning Method for Intrusion Detection Using GANomaly

    Yuna Han1, Hangbae Chang2,*

    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 221-237, 2023, DOI:10.32604/cmc.2023.039463 - 08 June 2023

    Abstract Intrusion detection involves identifying unauthorized network activity and recognizing whether the data constitute an abnormal network transmission. Recent research has focused on using semi-supervised learning mechanisms to identify abnormal network traffic to deal with labeled and unlabeled data in the industry. However, real-time training and classifying network traffic pose challenges, as they can lead to the degradation of the overall dataset and difficulties preventing attacks. Additionally, existing semi-supervised learning research has yet to analyze experimental results comprehensively. This paper proposes XA-GANomaly, a novel technique for explainable adaptive semi-supervised learning using GANomaly, an image anomalous…
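
    GANomaly's core mechanism, an encoder-decoder-encoder generator whose anomaly score is the distance between the two latent codes, can be sketched compactly. Sizes below are assumptions, and both the adversarial training loop and the paper's explainability additions are omitted.

```python
import torch
import torch.nn as nn

class GANomalyG(nn.Module):
    def __init__(self, n_features: int = 64, latent: int = 16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(n_features, latent), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(latent, n_features), nn.Tanh())
        self.enc2 = nn.Sequential(nn.Linear(n_features, latent), nn.ReLU())

    def forward(self, x):
        z = self.enc1(x)          # latent code of the input
        x_hat = self.dec(z)       # reconstruction
        z_hat = self.enc2(x_hat)  # latent code of the reconstruction
        return x_hat, z, z_hat

def anomaly_score(model: GANomalyG, x: torch.Tensor) -> torch.Tensor:
    # Trained only on normal traffic, the model reconstructs normal inputs
    # well, so a large ||z - z_hat|| flags an anomalous flow.
    _, z, z_hat = model(x)
    return (z - z_hat).pow(2).mean(dim=1)

model = GANomalyG()
scores = anomaly_score(model, torch.randn(4, 64))
print(scores)  # one score per network flow; higher = more anomalous
```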

  • Open Access

    ARTICLE

    Detecting Deepfake Images Using Deep Learning Techniques and Explainable AI Methods

    Wahidul Hasan Abir1, Faria Rahman Khanam1, Kazi Nabiul Alam1, Myriam Hadjouni2, Hela Elmannai3, Sami Bourouis4, Rajesh Dey5, Mohammad Monirujjaman Khan1,*

    Intelligent Automation & Soft Computing, Vol.35, No.2, pp. 2151-2169, 2023, DOI:10.32604/iasc.2023.029653 - 19 July 2022

    Abstract Nowadays, deepfakes are wreaking havoc on society. Deepfake content is created with the help of artificial intelligence and machine learning to replace one person’s likeness with another’s in pictures or recorded videos. Although visual media manipulations are not new, the introduction of deepfakes has marked a breakthrough in creating fake media and information. These manipulated pictures and videos will undoubtedly have an enormous societal impact. Deepfake uses the latest technology like Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) to construct automated methods for creating fake content that is becoming increasingly difficult…
