Search Results (32)
  • Open Access

    ARTICLE

    Adaptation of Federated Explainable Artificial Intelligence for Efficient and Secure E-Healthcare Systems

    Rabia Abid¹, Muhammad Rizwan², Abdulatif Alabdulatif³,*, Abdullah Alnajim⁴, Meznah Alamro⁵, Mourade Azrour⁶

    CMC-Computers, Materials & Continua, Vol.78, No.3, pp. 3413-3429, 2024, DOI:10.32604/cmc.2024.046880

    Abstract Explainable Artificial Intelligence (XAI) enhances decision-making and improves on rule-based techniques by using more advanced Machine Learning (ML) and Deep Learning (DL) algorithms. In this paper, we chose e-healthcare systems for efficient decision-making and data classification, especially in data security, data handling, diagnostics, and laboratory workflows. Federated Machine Learning (FML) is a new and advanced technology that helps to maintain privacy for Personal Health Records (PHR) and handle large amounts of medical data effectively. In this context, XAI, along with FML, increases efficiency and improves the security of e-healthcare systems. The…
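
    For orientation, a minimal sketch of the federated-averaging (FedAvg) idea this abstract leans on: each client trains locally and only model weights are shared and averaged, which is what keeps Personal Health Records on-device. Toy logistic regression with synthetic data, not the authors' system.

    ```python
    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """One client's local training step (toy logistic regression)."""
        w = weights.copy()
        for _ in range(epochs):
            preds = 1.0 / (1.0 + np.exp(-X @ w))     # sigmoid predictions
            w -= lr * (X.T @ (preds - y)) / len(y)   # logistic-loss gradient step
        return w

    def fedavg_round(global_w, clients):
        """Average locally updated weights, weighted by client dataset size."""
        updates = [local_update(global_w, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        return np.average(updates, axis=0, weights=sizes)

    # Three hypothetical clients, each holding its own private records.
    rng = np.random.default_rng(0)
    clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]
    w = np.zeros(4)
    for _ in range(10):
        w = fedavg_round(w, clients)   # only weights cross the network
    print("global weights after 10 rounds:", w)
    ```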

  • Open Access

    ARTICLE

    Transparent and Accurate COVID-19 Diagnosis: Integrating Explainable AI with Advanced Deep Learning in CT Imaging

    Mohammad Mehedi Hassan¹,*, Salman A. AlQahtani², Mabrook S. AlRakhami¹, Ahmed Zohier Elhendi³

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.3, pp. 3101-3123, 2024, DOI:10.32604/cmes.2024.047940

    Abstract In the current landscape of the COVID-19 pandemic, the utilization of deep learning in medical imaging, especially in chest computed tomography (CT) scan analysis for virus detection, has become increasingly significant. Despite its potential, deep learning’s “black box” nature has been a major impediment to its broader acceptance in clinical environments, where transparency in decision-making is imperative. To bridge this gap, our research integrates Explainable AI (XAI) techniques, specifically the Local Interpretable Model-Agnostic Explanations (LIME) method, with advanced deep learning models. This integration forms a sophisticated and transparent framework for COVID-19 identification, enhancing the capability of standard Convolutional Neural Network…
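
    A hedged sketch of the LIME-on-images workflow the abstract names; `cnn_predict` and the CT slice are hypothetical placeholders, while the `lime_image` calls are the standard API of the lime package.

    ```python
    import numpy as np
    from lime import lime_image
    from skimage.segmentation import mark_boundaries

    def cnn_predict(images: np.ndarray) -> np.ndarray:
        """Placeholder for a trained CNN: (N, H, W, 3) -> (N, 2) probabilities."""
        scores = images.mean(axis=(1, 2, 3))
        p = 1.0 / (1.0 + np.exp(-(scores - scores.mean())))
        return np.stack([1.0 - p, p], axis=1)

    ct_slice = np.random.rand(224, 224, 3)   # stand-in for a preprocessed CT slice

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        ct_slice, cnn_predict, top_labels=1, num_samples=1000)

    # Keep the superpixels that most support the top predicted class.
    img, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5)
    overlay = mark_boundaries(img, mask)      # regions LIME highlights as evidence
    ```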

  • Open Access

    ARTICLE

    Explainable Conformer Network for Detection of COVID-19 Pneumonia from Chest CT Scan: From Concepts toward Clinical Explainability

    Mohamed Abdel-Basset¹, Hossam Hawash¹, Mohamed Abouhawwash²,³,*, S. S. Askar⁴, Alshaimaa A. Tantawy¹

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 1171-1187, 2024, DOI:10.32604/cmc.2023.044425

    Abstract The early implementation of treatment therapies necessitates the swift and precise identification of COVID-19 pneumonia through the analysis of chest CT scans. This study aims to investigate the indispensable need for precise and interpretable diagnostic tools to improve clinical decision-making for COVID-19 diagnosis. This paper proposes a novel deep learning approach, called Conformer Network, for explainable discrimination of viral pneumonia based on the lung Region of Infection (ROI) within a single-modality radiographic CT scan. Firstly, an efficient U-shaped transformer network is integrated for lung image segmentation. Then, a robust transfer learning technique is introduced to design a robust feature…
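
    For context, a generic PyTorch transfer-learning sketch (pretrained backbone, frozen features, new two-class head). It illustrates only the common transfer-learning pattern mentioned above, not the paper's Conformer Network or its U-shaped segmentation transformer.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    # Pretrained ImageNet backbone; freeze its features.
    backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for p in backbone.parameters():
        p.requires_grad = False
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # COVID vs. non-COVID head

    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
    x = torch.randn(4, 3, 224, 224)            # stand-in batch of lung ROI crops
    labels = torch.tensor([0, 1, 0, 1])
    loss = nn.functional.cross_entropy(backbone(x), labels)
    loss.backward()                            # only the new head gets gradients
    optimizer.step()
    ```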

  • Open Access

    ARTICLE

    Explainable Classification Model for Android Malware Analysis Using API and Permission-Based Features

    Nida Aslam¹,*, Irfan Ullah Khan², Salma Abdulrahman Bader², Aisha Alansari³, Lama Abdullah Alaqeel², Razan Mohammed Khormy², Zahra Abdultawab AlKubaish², Tariq Hussain⁴,*

    CMC-Computers, Materials & Continua, Vol.76, No.3, pp. 3167-3188, 2023, DOI:10.32604/cmc.2023.039721

    Abstract One of the most widely used smartphone operating systems, Android, is vulnerable to cutting-edge malware that employs sophisticated logic. Such malware attacks could lead to the execution of unauthorized acts on victims’ devices, stealing personal information and causing hardware damage. In previous studies, machine learning (ML) has shown its efficacy in detecting malware events and classifying their types. However, attackers are continuously developing more sophisticated methods to bypass detection, so up-to-date datasets must be utilized to implement proactive models for detecting malware events on Android mobile devices. Accordingly, this study employed ML algorithms to classify Android applications into malware…
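
    A small illustrative sketch of the feature setup such studies use: binary permission/API-call indicators feeding a classifier, with feature importances as a first explainability signal. Names and data are invented for the example, not the study's dataset.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    features = ["SEND_SMS", "READ_CONTACTS", "INTERNET",
                "getDeviceId", "sendTextMessage"]
    rng = np.random.default_rng(1)
    X = rng.integers(0, 2, size=(500, len(features)))  # 1 = permission/API present
    y = X[:, 0] & X[:, 4]   # toy rule: SMS permission + SMS API call => malware

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    for name, imp in sorted(zip(features, clf.feature_importances_),
                            key=lambda t: -t[1]):
        print(f"{name:>16}: {imp:.3f}")   # which features drive the verdict
    ```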

  • Open Access

    EDITORIAL

    Grad-CAM: Understanding AI Models

    Shuihua Wang¹,², Yudong Zhang²,*

    CMC-Computers, Materials & Continua, Vol.76, No.2, pp. 1321-1324, 2023, DOI:10.32604/cmc.2023.041419

    Abstract This article has no abstract.
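
    Since this editorial carries no abstract, a compact PyTorch sketch of Grad-CAM itself may help: pool the top-class gradients over the last convolutional maps, weight the maps, and ReLU-sum them into a heatmap. The model and input are placeholders.

    ```python
    import torch
    import torch.nn.functional as F
    from torchvision import models

    acts, grads = {}, {}

    def save_activation(module, inp, out):
        acts["a"] = out
        out.register_hook(lambda g: grads.update(g=g))  # grab map gradients

    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
    model.layer4[-1].register_forward_hook(save_activation)

    x = torch.randn(1, 3, 224, 224)       # stand-in input image
    model(x)[0].max().backward()          # backprop the top-class score

    w = grads["g"].mean(dim=(2, 3), keepdim=True)           # GAP over gradients
    cam = F.relu((w * acts["a"]).sum(dim=1, keepdim=True))  # weighted maps
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
    ```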

  • Open Access

    ARTICLE

    Explainable Artificial Intelligence-Based Model Drift Detection Applicable to Unsupervised Environments

    Yongsoo Lee, Yeeun Lee, Eungyu Lee, Taejin Lee*

    CMC-Computers, Materials & Continua, Vol.76, No.2, pp. 1701-1719, 2023, DOI:10.32604/cmc.2023.040235

    Abstract Cybersecurity increasingly relies on machine learning (ML) models to respond to and detect attacks. However, the rapidly changing data environment makes model life-cycle management after deployment essential. Real-time detection of drift signals from various threats is fundamental for effectively managing deployed models. However, detecting drift in unsupervised environments can be challenging. This study introduces a novel approach leveraging Shapley additive explanations (SHAP), a widely recognized explainability technique in ML, to address drift detection in unsupervised settings. The proposed method incorporates a range of plots and statistical techniques to enhance drift detection reliability and introduces a drift suspicion metric that considers…
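
    A hedged sketch in the abstract's spirit: compare per-feature SHAP distributions between reference data and incoming, unlabeled data. The Kolmogorov–Smirnov statistic used as a "drift suspicion" score below is an illustrative stand-in, not the paper's metric.

    ```python
    import numpy as np
    import shap
    from scipy.stats import ks_2samp
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    X_ref = rng.normal(size=(400, 5))                 # reference (training) data
    y_ref = (X_ref[:, 0] > 0).astype(int)
    X_new = X_ref + np.array([1.5, 0, 0, 0, 0])       # feature 0 has drifted

    model = GradientBoostingClassifier(random_state=0).fit(X_ref, y_ref)
    explainer = shap.TreeExplainer(model)
    shap_ref = explainer.shap_values(X_ref)           # (n_samples, n_features)
    shap_new = explainer.shap_values(X_new)

    for j in range(X_ref.shape[1]):                   # no labels needed here
        stat, _ = ks_2samp(shap_ref[:, j], shap_new[:, j])
        print(f"feature {j}: drift suspicion (KS on SHAP) = {stat:.2f}")
    ```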

  • Open Access

    ARTICLE

    Explainable AI and Interpretable Model for Insurance Premium Prediction

    Umar Abdulkadir Isa*, Anil Fernando*

    Journal on Artificial Intelligence, Vol.5, pp. 31-42, 2023, DOI:10.32604/jai.2023.040213

    Abstract Traditional machine learning metrics (TMLMs) such as precision, recall, accuracy, MSE and RMSE are quite useful for the current research work, but they are not enough for a practitioner to be confident about the performance and dependability of an innovative interpretable model (85%–92%). We included in the prediction process machine learning models (MLMs) with greater than 99% accuracy and a sensitivity of 95%–98%, specifically on the database. We need to explain the model to domain specialists, not only ML professionals, through human-understandable explanations that establish trust in the model's predictions. This is achieved by creating a model-independent, locally accurate explanation…
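
    A minimal sketch of a model-independent, locally accurate explanation for a premium model, using LIME's tabular explainer; the features, data, and model below are illustrative stand-ins, not the paper's.

    ```python
    import numpy as np
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.ensemble import RandomForestRegressor

    features = ["age", "bmi", "children", "smoker"]
    rng = np.random.default_rng(0)
    X = np.column_stack([rng.integers(18, 65, 300),
                         rng.normal(28, 5, 300),
                         rng.integers(0, 4, 300),
                         rng.integers(0, 2, 300)]).astype(float)
    y = 200 * X[:, 0] + 5000 * X[:, 3] + rng.normal(0, 500, 300)  # toy premium

    model = RandomForestRegressor(random_state=0).fit(X, y)
    explainer = LimeTabularExplainer(X, feature_names=features, mode="regression")
    exp = explainer.explain_instance(X[0], model.predict, num_features=4)
    print(exp.as_list())   # local feature contributions for one policyholder
    ```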

  • Open Access

    ARTICLE

    XA-GANomaly: An Explainable Adaptive Semi-Supervised Learning Method for Intrusion Detection Using GANomaly

    Yuna Han¹, Hangbae Chang²,*

    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 221-237, 2023, DOI:10.32604/cmc.2023.039463

    Abstract Intrusion detection involves identifying unauthorized network activity and recognizing whether the data constitute an abnormal network transmission. Recent research has focused on using semi-supervised learning mechanisms to identify abnormal network traffic to deal with labeled and unlabeled data in the industry. However, real-time training and classifying network traffic pose challenges, as they can lead to the degradation of the overall dataset and difficulties preventing attacks. Additionally, existing semi-supervised learning research might need to analyze the experimental results comprehensively. This paper proposes XA-GANomaly, a novel technique for explainable adaptive semi-supervised learning using GANomaly, an image anomaly detection model that dynamically trains…
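
    For context, a toy sketch of the generic GANomaly scoring idea the method extends: encode, reconstruct, re-encode, and score anomalies by the gap between the two latent codes. This is the textbook mechanism, not the paper's XA-GANomaly network.

    ```python
    import torch
    import torch.nn as nn

    class GanomalyG(nn.Module):
        """Encoder -> decoder -> second encoder, as in GANomaly's generator."""
        def __init__(self, dim=32, latent=8):
            super().__init__()
            self.enc1 = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())
            self.dec = nn.Linear(latent, dim)
            self.enc2 = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())

        def forward(self, x):
            z = self.enc1(x)          # latent code of the input
            x_hat = self.dec(z)       # reconstruction
            z_hat = self.enc2(x_hat)  # latent code of the reconstruction
            return z, z_hat

    def anomaly_score(model, x):
        z, z_hat = model(x)
        return (z - z_hat).abs().mean(dim=1)  # large gap => anomalous traffic

    flows = torch.randn(16, 32)   # stand-in network-traffic feature vectors
    print(anomaly_score(GanomalyG(), flows))
    ```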

  • Open Access

    ARTICLE

    Efficient Explanation and Evaluation Methodology Based on Hybrid Feature Dropout

    Jingang Kim, Suengbum Lim, Taejin Lee*

    Computer Systems Science and Engineering, Vol.47, No.1, pp. 471-490, 2023, DOI:10.32604/csse.2023.038413

    Abstract AI-related research is conducted in various ways, but the reliability of AI prediction results is currently insufficient, so expert decisions are indispensable for tasks that require essential decision-making. XAI (eXplainable AI) is studied to improve the reliability of AI. However, each XAI methodology shows different results on the same dataset and the exact same model. This means that XAI results must be given meaning, and a lot of noise emerges. This paper proposes the HFD (Hybrid Feature Dropout)-based XAI and evaluation methodology. The proposed XAI methodology can mitigate shortcomings such as incorrect feature weights and impractical feature selection. There are…
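
    An illustrative feature-dropout probe in the abstract's general direction: replace one feature with its mean and measure how far the predictions move. This generic check is an assumed stand-in, not the proposed HFD method.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    base = model.predict_proba(X)[:, 1]
    for j in range(X.shape[1]):
        X_drop = X.copy()
        X_drop[:, j] = X[:, j].mean()   # "drop" feature j to an uninformative value
        shift = np.abs(model.predict_proba(X_drop)[:, 1] - base).mean()
        print(f"feature {j}: mean prediction shift = {shift:.3f}")
    ```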

  • Open Access

    ARTICLE

    Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for Clustered IoT Driven Ubiquitous Computing System

    Reda Salama¹, Mahmoud Ragab¹,²,*

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 2917-2932, 2023, DOI:10.32604/csse.2023.037016

    Abstract In Internet of Things (IoT) based systems, multi-level client requirements can be fulfilled by incorporating communication technologies with distributed homogeneous networks, called ubiquitous computing systems (UCS). A UCS necessitates heterogeneity, management levels, and data transmission for distributed users. At the same time, security remains a major issue in IoT-driven UCS. Besides, energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. Recent developments in explainable artificial intelligence (XAI) concepts can be employed to effectively design intrusion detection systems (IDS) for accomplishing security in UCS. In this view, this study designs a novel Blockchain with Explainable Artificial Intelligence…

Displaying 1-10 of 32 results (page 1 of 4).