Search Results (16)
  • Open Access

    ARTICLE

    Adaptation of Federated Explainable Artificial Intelligence for Efficient and Secure E-Healthcare Systems

    Rabia Abid1, Muhammad Rizwan2, Abdulatif Alabdulatif3,*, Abdullah Alnajim4, Meznah Alamro5, Mourade Azrour6

    CMC-Computers, Materials & Continua, Vol.78, No.3, pp. 3413-3429, 2024, DOI:10.32604/cmc.2024.046880

    Abstract Explainable Artificial Intelligence (XAI) enhances decision-making and improves on rule-based techniques by using more advanced Machine Learning (ML) and Deep Learning (DL) algorithms. In this paper, we chose e-healthcare systems, where efficient decision-making and data classification are needed in data security, data handling, diagnostics, and laboratory workflows. Federated Machine Learning (FML) is a new technology that helps maintain the privacy of Personal Health Records (PHR) and handle large amounts of medical data effectively. In this context, XAI combined with FML increases the efficiency and improves the security of e-healthcare systems. The…
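
    As a concrete aside on the federated setup this abstract describes, the sketch below shows minimal federated averaging (FedAvg): each client fits a local update on its private data, and only model weights are shared and averaged. The three clients, the logistic-regression model, and all hyperparameters are illustrative assumptions, not the authors' system.

    ```python
    # Minimal FedAvg sketch: clients train locally on private data;
    # only weight vectors travel to the server, which averages them.
    import numpy as np

    rng = np.random.default_rng(0)

    def local_step(w, X, y, lr=0.1, epochs=5):
        """A few epochs of gradient descent on one client's private data."""
        w = w.copy()
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
            w -= lr * X.T @ (p - y) / len(y)      # logistic-loss gradient
        return w

    # Three hypothetical clients (e.g., hospitals) with private records.
    clients = [(rng.normal(size=(40, 5)), rng.integers(0, 2, 40))
               for _ in range(3)]

    w_global = np.zeros(5)
    for _ in range(10):
        # Each client trains locally; raw records never leave the client.
        local_ws = [local_step(w_global, X, y) for X, y in clients]
        # Server aggregates by averaging the returned weights.
        w_global = np.mean(local_ws, axis=0)

    print("global weights after 10 rounds:", np.round(w_global, 3))
    ```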

  • Open Access

    ARTICLE

    Explainable Classification Model for Android Malware Analysis Using API and Permission-Based Features

    Nida Aslam1,*, Irfan Ullah Khan2, Salma Abdulrahman Bader2, Aisha Alansari3, Lama Abdullah Alaqeel2, Razan Mohammed Khormy2, Zahra Abdultawab AlKubaish2, Tariq Hussain4,*

    CMC-Computers, Materials & Continua, Vol.76, No.3, pp. 3167-3188, 2023, DOI:10.32604/cmc.2023.039721

    Abstract One of the most widely used smartphone operating systems, Android, is vulnerable to cutting-edge malware that employs sophisticated logic. Such malware attacks could lead to the execution of unauthorized acts on the victims’ devices, stealing personal information and causing hardware damage. In previous studies, machine learning (ML) has shown its efficacy in detecting malware events and classifying their types. However, attackers are continuously developing more sophisticated methods to bypass detection. Therefore, up-to-date datasets must be utilized to implement proactive models for detecting malware events in Android mobile devices. Accordingly, this study employed ML algorithms to classify Android applications into malware…
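
    To make the feature setup concrete, the following sketch trains a classifier on synthetic binary permission flags and lists which permissions drive it. The permission names, the labeling rule, and the random-forest choice are assumptions for illustration, not the paper's dataset or model.

    ```python
    # Sketch: classify apps as malware/benign from binary permission flags,
    # then inspect which permissions drive the model. Data is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    perms = ["SEND_SMS", "READ_CONTACTS", "INTERNET", "CAMERA", "RECORD_AUDIO"]

    X = rng.integers(0, 2, size=(500, len(perms)))   # requested-permission flags
    # Toy labeling rule: SMS plus contacts access together looks malicious.
    y = ((X[:, 0] & X[:, 1]) | (rng.random(500) < 0.05)).astype(int)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Impurity-based importances as a cheap global explanation.
    for name, imp in sorted(zip(perms, clf.feature_importances_),
                            key=lambda t: -t[1]):
        print(f"{name:15s} {imp:.3f}")
    ```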

  • Open Access

    ARTICLE

    Explainable Artificial Intelligence-Based Model Drift Detection Applicable to Unsupervised Environments

    Yongsoo Lee, Yeeun Lee, Eungyu Lee, Taejin Lee*

    CMC-Computers, Materials & Continua, Vol.76, No.2, pp. 1701-1719, 2023, DOI:10.32604/cmc.2023.040235

    Abstract Cybersecurity increasingly relies on machine learning (ML) models to detect and respond to attacks. However, the rapidly changing data environment makes model life-cycle management after deployment essential, and real-time detection of drift signals from various threats is fundamental to managing deployed models effectively. Detecting drift in unsupervised environments, though, can be challenging. This study introduces a novel approach leveraging Shapley additive explanations (SHAP), a widely recognized explainability technique in ML, to address drift detection in unsupervised settings. The proposed method incorporates a range of plots and statistical techniques to enhance drift detection reliability and introduces a drift suspicion metric that considers…
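
    The general mechanism the abstract points to can be sketched as follows: compare the distribution of SHAP values between a reference window and a new window, and flag features whose explanation distribution shifts. The regressor, the synthetic shift, and the KS-test threshold below are assumptions; the paper's specific plots and suspicion metric are not reproduced.

    ```python
    # Sketch of SHAP-based drift screening: compare each feature's SHAP
    # value distribution on a reference window vs. a new window.
    import numpy as np
    import shap
    from scipy.stats import ks_2samp
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(2)
    X_ref = rng.normal(size=(300, 4))
    y_ref = (X_ref[:, 0] > 0).astype(float)   # toy anomaly-score target
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_ref, y_ref)

    X_new = rng.normal(size=(300, 4))
    X_new[:, 0] += 1.5                        # simulated covariate shift

    explainer = shap.TreeExplainer(model)
    shap_ref = explainer.shap_values(X_ref)   # (n_samples, n_features)
    shap_new = explainer.shap_values(X_new)

    for j in range(X_ref.shape[1]):
        stat, p = ks_2samp(shap_ref[:, j], shap_new[:, j])
        print(f"feature {j}: KS={stat:.3f} p={p:.1e} "
              f"{'DRIFT?' if p < 0.01 else 'ok'}")
    ```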

  • Open Access

    ARTICLE

    XA-GANomaly: An Explainable Adaptive Semi-Supervised Learning Method for Intrusion Detection Using GANomaly

    Yuna Han1, Hangbae Chang2,*

    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 221-237, 2023, DOI:10.32604/cmc.2023.039463

    Abstract Intrusion detection involves identifying unauthorized network activity and recognizing whether the data constitute an abnormal network transmission. Recent research has focused on semi-supervised learning mechanisms that identify abnormal network traffic while dealing with both labeled and unlabeled data in industry. However, training and classifying network traffic in real time poses challenges, as it can degrade the overall dataset and make attacks difficult to prevent. Additionally, existing semi-supervised learning research has yet to analyze experimental results comprehensively. This paper proposes XA-GANomaly, a novel technique for explainable adaptive semi-supervised learning using GANomaly, an image anomaly detection model that dynamically trains…
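
    For orientation, GANomaly's core scoring idea (not the paper's full XA-GANomaly pipeline) is an encoder-decoder-encoder network whose anomaly score is the latent-space distance between the input and its reconstruction. The layer sizes below are arbitrary assumptions, and the adversarial and reconstruction training losses are omitted.

    ```python
    # Sketch of GANomaly's scoring idea: encode x, reconstruct it, re-encode
    # the reconstruction, and score anomalies by ||E(x) - E'(G(x))||.
    import torch
    import torch.nn as nn

    class GANomalyLite(nn.Module):
        def __init__(self, n_feat=20, n_latent=4):
            super().__init__()
            self.enc1 = nn.Sequential(nn.Linear(n_feat, 16), nn.ReLU(),
                                      nn.Linear(16, n_latent))
            self.dec = nn.Sequential(nn.Linear(n_latent, 16), nn.ReLU(),
                                     nn.Linear(16, n_feat))
            self.enc2 = nn.Sequential(nn.Linear(n_feat, 16), nn.ReLU(),
                                      nn.Linear(16, n_latent))

        def score(self, x):
            z = self.enc1(x)                 # latent code of the input
            x_hat = self.dec(z)              # reconstruction
            z_hat = self.enc2(x_hat)         # latent code of the reconstruction
            return (z - z_hat).norm(dim=1)   # anomaly score per sample

    model = GANomalyLite()
    x = torch.randn(8, 20)       # a batch of hypothetical traffic features
    print(model.score(x))        # higher = more anomalous (after training)
    ```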

  • Open Access

    ARTICLE

    Efficient Explanation and Evaluation Methodology Based on Hybrid Feature Dropout

    Jingang Kim, Suengbum Lim, Taejin Lee*

    Computer Systems Science and Engineering, Vol.47, No.1, pp. 471-490, 2023, DOI:10.32604/csse.2023.038413

    Abstract AI research is conducted in various ways, but the reliability of AI predictions is still insufficient, so expert judgment remains indispensable for tasks that require critical decision-making. XAI (eXplainable AI) is studied to improve the reliability of AI. However, each XAI methodology produces different results on the same dataset and the very same model, which means that XAI results must be interpreted carefully and considerable noise emerges. This paper proposes an XAI and evaluation methodology based on HFD (Hybrid Feature Dropout). The proposed XAI methodology can mitigate shortcomings such as incorrect feature weights and impractical feature selection. There are…
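
    A simple deletion-style check in the spirit of feature dropout (a generic illustration, not the authors' HFD method): remove features in claimed order of importance and watch the model's confidence fall. A good importance ranking should degrade the prediction faster than a random one would.

    ```python
    # Deletion test: zero out features in order of claimed importance and
    # track how the predicted probability changes for one sample.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(3)
    X = rng.normal(size=(400, 6))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    clf = GradientBoostingClassifier(random_state=0).fit(X, y)

    ranking = np.argsort(-clf.feature_importances_)  # claimed importance order
    x = X[:1].copy()                                 # one sample to probe
    print(f"baseline P(class 1) = {clf.predict_proba(x)[0, 1]:.3f}")

    for k, j in enumerate(ranking, 1):
        x[0, j] = 0.0                                # "drop" feature j
        p = clf.predict_proba(x)[0, 1]
        print(f"after dropping top-{k} features: {p:.3f}")
    ```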

  • Open Access

    ARTICLE

    Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for Clustered IoT Driven Ubiquitous Computing System

    Reda Salama1, Mahmoud Ragab1,2,*

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 2917-2932, 2023, DOI:10.32604/csse.2023.037016

    Abstract In Internet of Things (IoT) based systems, multi-level client requirements can be fulfilled by combining communication technologies with distributed homogeneous networks called ubiquitous computing systems (UCS). A UCS must support heterogeneity, multiple management levels, and data transmission for distributed users. At the same time, security remains a major issue in IoT-driven UCS. Moreover, energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. Recent developments in explainable artificial intelligence (XAI) can be employed to design effective intrusion detection systems (IDS) that secure UCS. In this view, this study designs a novel Blockchain with Explainable Artificial Intelligence…
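
    As a generic illustration of the energy-aware clustering the abstract mentions (not the paper's scheme), the sketch below picks cluster heads with probability proportional to residual energy and assigns each node to its nearest head; positions, energies, and the head count are assumptions.

    ```python
    # Energy-aware cluster-head selection for IoT nodes: heads are sampled
    # in proportion to residual energy, nodes join the nearest head.
    import numpy as np

    rng = np.random.default_rng(4)
    pos = rng.uniform(0, 100, size=(30, 2))    # node positions (m)
    energy = rng.uniform(0.2, 1.0, size=30)    # residual energy (J)

    n_heads = 3
    p = energy / energy.sum()                  # energy-weighted probabilities
    heads = rng.choice(30, size=n_heads, replace=False, p=p)

    # Assign every node to its nearest cluster head (Euclidean distance).
    d = np.linalg.norm(pos[:, None, :] - pos[heads][None, :, :], axis=2)
    assign = heads[d.argmin(axis=1)]
    for h in heads:
        print(f"head {h} (energy {energy[h]:.2f}) "
              f"serves {int((assign == h).sum())} nodes")
    ```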

  • Open Access

    ARTICLE

    Quantum Inspired Differential Evolution with Explainable Artificial Intelligence-Based COVID-19 Detection

    Abdullah M. Basahel, Mohammad Yamin*

    Computer Systems Science and Engineering, Vol.46, No.1, pp. 209-224, 2023, DOI:10.32604/csse.2023.034449

    Abstract Recent advancements in the Internet of Things (IoT), 5G networks, and cloud computing (CC) have led to the development of Human-centric IoT (HIoT) applications that shift human physical monitoring toward machine-based monitoring. HIoT systems find use in several applications such as smart cities, healthcare, and transportation. Besides, HIoT systems and explainable artificial intelligence (XAI) tools can be deployed in the healthcare sector for effective decision-making. The COVID-19 pandemic has become a global health issue that necessitates automated and effective diagnostic tools to detect the disease at an early stage. This article presents a new quantum-inspired differential evolution…
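
    For reference, classic differential evolution (DE/rand/1/bin) looks like the sketch below; the paper's quantum-inspired variant changes how candidate solutions are encoded and sampled, which is not reproduced here. The objective function and control parameters are assumptions.

    ```python
    # Classic differential evolution (DE/rand/1/bin) minimizing the sphere
    # function: mutate with scaled difference vectors, cross over, select.
    import numpy as np

    rng = np.random.default_rng(5)
    dim, n_pop, F, CR = 5, 20, 0.8, 0.9
    f = lambda x: np.sum(x**2)                # objective to minimize

    pop = rng.uniform(-5, 5, size=(n_pop, dim))
    fit = np.array([f(x) for x in pop])

    for _ in range(100):
        for i in range(n_pop):
            idx = rng.choice([j for j in range(n_pop) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            mutant = a + F * (b - c)          # differential mutation
            cross = rng.random(dim) < CR      # binomial crossover mask
            cross[rng.integers(dim)] = True   # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            if f(trial) < fit[i]:             # greedy selection
                pop[i], fit[i] = trial, f(trial)

    print("best value:", fit.min())
    ```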

  • Open Access

    REVIEW

    Explainable Artificial Intelligence–A New Step towards the Trust in Medical Diagnosis with AI Frameworks: A Review

    Nilkanth Mukund Deshpande1,2, Shilpa Gite6,7,*, Biswajeet Pradhan3,4,5, Mazen Ebraheem Assiri4

    CMES-Computer Modeling in Engineering & Sciences, Vol.133, No.3, pp. 843-872, 2022, DOI:10.32604/cmes.2022.021225

    Abstract Machine learning (ML) has emerged as a critical enabling tool in the sciences and industry in recent years. Today’s machine learning algorithms can achieve outstanding performance on an expanding variety of complex tasks, thanks to advancements in technique, the availability of enormous databases, and improved computing power. Deep learning models are at the forefront of this advancement. However, because of their nested nonlinear structure, these powerful models are termed “black boxes,” as they provide no information about how they arrive at their conclusions. Such a lack of transparency may be unacceptable in many applications, such as the medical domain. A…

  • Open Access

    ARTICLE

    Detecting Deepfake Images Using Deep Learning Techniques and Explainable AI Methods

    Wahidul Hasan Abir1, Faria Rahman Khanam1, Kazi Nabiul Alam1, Myriam Hadjouni2, Hela Elmannai3, Sami Bourouis4, Rajesh Dey5, Mohammad Monirujjaman Khan1,*

    Intelligent Automation & Soft Computing, Vol.35, No.2, pp. 2151-2169, 2023, DOI:10.32604/iasc.2023.029653

    Abstract Nowadays, deepfakes are wreaking havoc on society. Deepfake content is created with the help of artificial intelligence and machine learning to replace one person’s likeness with another’s in pictures or recorded videos. Although visual media manipulations are not new, the introduction of deepfakes has marked a breakthrough in creating fake media and information. These manipulated pictures and videos will undoubtedly have an enormous societal impact. Deepfakes use the latest technologies, such as Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL), to construct automated methods for creating fake content that is becoming increasingly difficult to detect with the human…
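
    One widely used XAI method for image classifiers is Grad-CAM; the sketch below shows its mechanics on an untrained ResNet-18 with random input, purely as an assumed illustration rather than the paper's exact method: feature maps from the last convolutional block are weighted by the pooled gradients of the target class score.

    ```python
    # Grad-CAM sketch: capture the last conv block's activations and
    # gradients via hooks, then build a class-activation heatmap.
    import torch
    from torchvision.models import resnet18

    model = resnet18(weights=None).eval()     # untrained; mechanics only
    feats, grads = {}, {}

    layer = model.layer4[-1]                  # last residual block
    layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

    x = torch.randn(1, 3, 224, 224, requires_grad=True)
    score = model(x)[0, 0]                    # score for an arbitrary class
    score.backward()

    w = grads["a"].mean(dim=(2, 3), keepdim=True)   # pooled gradients
    cam = torch.relu((w * feats["a"]).sum(dim=1))   # weighted feature maps
    print("CAM shape:", cam.shape)            # (1, 7, 7) before upsampling
    ```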

  • Open Access

    ARTICLE

    Explainable Software Fault Localization Model: From Blackbox to Whitebox

    Abdulaziz Alhumam*

    CMC-Computers, Materials & Continua, Vol.73, No.1, pp. 1463-1482, 2022, DOI:10.32604/cmc.2022.029473

    Abstract The most resource-intensive and laborious part of debugging is finding the exact location of a fault among a large number of code snippets. Plenty of machine intelligence models have offered effective localization of defects. Some models can precisely locate faults with more than 95% accuracy, creating demand for trustworthy models in fault localization. Confidence and trustworthiness in machine intelligence-based software models can only be achieved via explainable artificial intelligence in Fault Localization (XFL). The current study presents a model for generating counterfactual interpretations for the fault localization model’s decisions. Neural system approximations and disseminated presentation of…
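
    A minimal counterfactual-search sketch in the spirit the abstract describes: starting from an instance the model labels "faulty", greedily flip one binary feature at a time (the flip that most reduces the fault probability) until the predicted label changes. The toy features, labeling rule, and logistic model are assumptions, not the study's model.

    ```python
    # Greedy counterfactual search over binary feature flips.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(6)
    X = rng.integers(0, 2, size=(300, 6)).astype(float)  # e.g., code-metric flags
    y = ((X[:, 0] + X[:, 2]) >= 2).astype(int)           # toy "faulty" rule
    clf = LogisticRegression().fit(X, y)

    x = np.array([[1., 1., 1., 0., 1., 0.]])             # predicted faulty
    flips = []
    while clf.predict(x)[0] == 1 and len(flips) < x.shape[1]:
        probs = []
        for j in range(x.shape[1]):
            x2 = x.copy()
            x2[0, j] = 1 - x2[0, j]                      # try flipping feature j
            probs.append(clf.predict_proba(x2)[0, 1])
        j = int(np.argmin(probs))                        # most effective flip
        x[0, j] = 1 - x[0, j]
        flips.append(j)

    print("counterfactual flips features:", flips, "-> label", clf.predict(x)[0])
    ```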

Displaying 1-10 of 16 results.