Search Results (11)
  • Open Access


    Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for Clustered IoT Driven Ubiquitous Computing System

    Reda Salama1, Mahmoud Ragab1,2,*

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 2917-2932, 2023, DOI:10.32604/csse.2023.037016

    Abstract In an Internet of Things (IoT) based system, multi-level clients' requirements can be fulfilled by incorporating communication technologies with distributed homogeneous networks called ubiquitous computing systems (UCS). The UCS necessitates heterogeneity, management level, and data transmission for distributed users. At the same time, security remains a major issue in the IoT-driven UCS. Besides, energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. Recent developments in explainable artificial intelligence (XAI) concepts can be employed to effectively design intrusion detection systems (IDS) for accomplishing security in UCS. In this view, this study designs a novel Blockchain with Explainable Artificial Intelligence… More >

  • Open Access


    Quantum Inspired Differential Evolution with Explainable Artificial Intelligence-Based COVID-19 Detection

    Abdullah M. Basahel, Mohammad Yamin*

    Computer Systems Science and Engineering, Vol.46, No.1, pp. 209-224, 2023, DOI:10.32604/csse.2023.034449

    Abstract Recent advancements in the Internet of Things (IoT), 5G networks, and cloud computing (CC) have led to the development of Human-centric IoT (HIoT) applications that transform human physical monitoring into machine-based monitoring. HIoT systems find use in several applications such as smart cities, healthcare, transportation, etc. Besides, the HIoT system and explainable artificial intelligence (XAI) tools can be deployed in the healthcare sector for effective decision-making. The COVID-19 pandemic has become a global health issue that necessitates automated and effective diagnostic tools to detect the disease at the initial stage. This article presents a new quantum-inspired differential evolution… More >

  • Open Access


    Explainable Artificial Intelligence–A New Step towards the Trust in Medical Diagnosis with AI Frameworks: A Review

    Nilkanth Mukund Deshpande1,2, Shilpa Gite6,7,*, Biswajeet Pradhan3,4,5, Mazen Ebraheem Assiri4

    CMES-Computer Modeling in Engineering & Sciences, Vol.133, No.3, pp. 843-872, 2022, DOI:10.32604/cmes.2022.021225

    Abstract Machine learning (ML) has emerged as a critical enabling tool in the sciences and industry in recent years. Today's machine learning algorithms can achieve outstanding performance on an expanding variety of complex tasks, thanks to advancements in technique, the availability of enormous databases, and improved computing power. Deep learning models are at the forefront of this advancement. However, because of their nested nonlinear structure, these powerful models are termed "black boxes," as they provide no information about how they arrive at their conclusions. Such a lack of transparency may be unacceptable in many applications, such as the medical domain. A… More >

  • Open Access


    Detecting Deepfake Images Using Deep Learning Techniques and Explainable AI Methods

    Wahidul Hasan Abir1, Faria Rahman Khanam1, Kazi Nabiul Alam1, Myriam Hadjouni2, Hela Elmannai3, Sami Bourouis4, Rajesh Dey5, Mohammad Monirujjaman Khan1,*

    Intelligent Automation & Soft Computing, Vol.35, No.2, pp. 2151-2169, 2023, DOI:10.32604/iasc.2023.029653

    Abstract Nowadays, deepfakes are wreaking havoc on society. Deepfake content is created with the help of artificial intelligence and machine learning to replace one person's likeness with another's in pictures or recorded videos. Although visual media manipulations are not new, the introduction of deepfakes has marked a breakthrough in creating fake media and information. These manipulated pictures and videos will undoubtedly have an enormous societal impact. Deepfakes use the latest technology, such as Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL), to construct automated methods for creating fake content that is becoming increasingly difficult to detect with the human… More >

  • Open Access


    Explainable Software Fault Localization Model: From Blackbox to Whitebox

    Abdulaziz Alhumam*

    CMC-Computers, Materials & Continua, Vol.73, No.1, pp. 1463-1482, 2022, DOI:10.32604/cmc.2022.029473

    Abstract The most resource-intensive and laborious part of debugging is finding the exact location of a fault within a large number of code snippets. Many machine intelligence models have offered effective localization of defects. Some models can precisely locate faulty code with more than 95% accuracy, resulting in demand for trustworthy models in fault localization. Confidence and trustworthiness within machine intelligence-based software models can only be achieved via explainable artificial intelligence in Fault Localization (XFL). The current study presents a model for generating counterfactual interpretations for the fault localization model's decisions. Neural system approximations and disseminated presentation of… More >

  • Open Access


    Optimal Machine Learning Enabled Intrusion Detection in Cyber-Physical System Environment

    Bassam A. Y. Alqaralleh1,*, Fahad Aldhaban1, Esam A. AlQarallehs2, Ahmad H. Al-Omari3

    CMC-Computers, Materials & Continua, Vol.72, No.3, pp. 4691-4707, 2022, DOI:10.32604/cmc.2022.026556

    Abstract Cyber-attacks on cyber-physical systems (CPSs) result in sensing and actuation misbehavior, severe damage to physical objects, and safety risks. Machine learning (ML) models have been presented to hinder cyberattacks in the CPS environment; however, the non-existence of labelled data from new attacks makes their detection quite challenging. An Intrusion Detection System (IDS) is commonly utilized to detect and classify intrusions in the CPS environment and acts as an important part of a secure CPS environment. Latest developments in deep learning (DL) and explainable artificial intelligence (XAI) stimulate new IDSs to manage cyberattacks with minimum complexity and high sophistication.… More >

  • Open Access


    An Interpretable Artificial Intelligence Based Smart Agriculture System

    Fariza Sabrina1,*, Shaleeza Sohail2, Farnaz Farid3, Sayka Jahan4, Farhad Ahamed5, Steven Gordon6

    CMC-Computers, Materials & Continua, Vol.72, No.2, pp. 3777-3797, 2022, DOI:10.32604/cmc.2022.026363

    Abstract With the increasing world population, the demand for food production has increased exponentially. An Internet of Things (IoT) based smart agriculture system can play a vital role in optimising crop yield by managing crop requirements in real time. Interpretability can be an important factor in making such systems trusted and easily adopted by farmers. In this paper, we propose a novel artificial intelligence-based agriculture system that uses IoT data to monitor the environment and alerts farmers to take the required actions for maintaining ideal conditions for crop production. The strength of the proposed system is in its interpretability, which makes it easy for… More >

  • Open Access


    Explainable Artificial Intelligence Solution for Online Retail

    Kumail Javaid1, Ayesha Siddiqa2, Syed Abbas Zilqurnain Naqvi2, Allah Ditta3, Muhammad Ahsan2, M. A. Khan4, Tariq Mahmood5, Muhammad Adnan Khan6,*

    CMC-Computers, Materials & Continua, Vol.71, No.3, pp. 4425-4442, 2022, DOI:10.32604/cmc.2022.022984

    Abstract Artificial intelligence (AI) and machine learning (ML) help businesses make predictions and key decisions that are beneficial to them. In the case of the online shopping business, it is very important to find trends in the data and gain knowledge of the features that drive the success of the business. In this research, a dataset of 12,330 records of customers who visited an online shopping website over a period of one year has been analyzed. The main objective of this research is to find features that are relevant in terms of correctly predicting the purchasing decisions made by… More >

  • Open Access


    Interpretable and Adaptable Early Warning Learning Analytics Model

    Shaleeza Sohail1, Atif Alvi2,*, Aasia Khanum3

    CMC-Computers, Materials & Continua, Vol.71, No.2, pp. 3211-3225, 2022, DOI:10.32604/cmc.2022.023560

    Abstract Major issues currently restricting the use of learning analytics are the lack of interpretability and adaptability of the machine learning models used in this domain. Interpretability makes it easy for stakeholders to understand the working of these models, and adaptability makes it easy to use the same model for multiple cohorts and courses in educational institutions. Recently, some models in learning analytics have been constructed with interpretability in mind, but their interpretability is not quantified. Moreover, adaptability is not specifically considered in this domain. This paper presents a new framework based on hybrid statistical fuzzy theory to overcome these… More >

  • Open Access


    Modeling of Explainable Artificial Intelligence for Biomedical Mental Disorder Diagnosis

    Anwer Mustafa Hilal1, Imène Issaoui2, Marwa Obayya3, Fahd N. Al-Wesabi4, Nadhem Nemri5, Manar Ahmed Hamza1,*, Mesfer Al Duhayyim6, Abu Sarwar Zamani1

    CMC-Computers, Materials & Continua, Vol.71, No.2, pp. 3853-3867, 2022, DOI:10.32604/cmc.2022.022663

    Abstract The abundant existence of both structured and unstructured data and the rapid advancement of statistical models have stressed the importance of introducing Explainable Artificial Intelligence (XAI), a process that explains how predictions are made in AI models. The biomedical mental disorder Autism Spectrum Disorder (ASD) needs to be identified and classified at an early stage in order to reduce health crises. With this background, the current paper presents an XAI-based ASD diagnosis (XAI-ASD) model to detect and classify ASD precisely. The proposed XAI-ASD technique involves the design of a Bacterial Foraging Optimization (BFO)-based Feature Selection (FS) technique. In addition, Whale Optimization Algorithm (WOA) with… More >

Displaying 1-10 of 11 results on page 1.