
Search Results (10)
  • Open Access

    ARTICLE

    Explainable AI and Interpretable Model for Insurance Premium Prediction

    Umar Abdulkadir Isa*, Anil Fernando*

    Journal on Artificial Intelligence, Vol.5, pp. 31-42, 2023, DOI:10.32604/jai.2023.040213

    Abstract Traditional machine learning metrics (TMLMs) such as precision, recall, accuracy, MSE and RMSE are useful for current research work, but they are not enough for a practitioner to be confident about the performance and dependability of an innovative interpretable model (85%–92%). We included in the prediction process machine learning models (MLMs) with greater than 99% accuracy and a sensitivity of 95%–98%, specifically on the database used. The model must be explained to domain specialists, and human-understandable explanations, in addition to those for ML professionals, must establish trust in its predictions. This is achieved by creating a model-independent, locally accurate explanation…
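The model-independent, locally accurate explanation this abstract describes is in the spirit of LIME-style local surrogates. Below is a minimal sketch of that idea, assuming a tabular black-box model; the function and feature layout are illustrative, not the authors' implementation.

```python
import numpy as np

def local_explanation(predict, x, n_samples=500, width=0.5, seed=0):
    """Fit a locally weighted linear surrogate around x (LIME-style sketch).

    predict maps an (n, d) array of inputs to (n,) scores; the returned
    coefficients approximate the black-box model's behaviour near x.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    X = x + rng.normal(scale=width, size=(n_samples, d))   # perturb around x
    y = predict(X)
    # proximity kernel: perturbations closer to x get larger weight
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * width ** 2))
    # weighted least squares with an intercept column
    A = np.hstack([np.ones((n_samples, 1)), X])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]                                        # per-feature local weights

# Illustrative black-box "premium" model: near x = [1, 0] the surrogate
# recovers roughly 3.0 for feature 0 and roughly 1.0 for feature 1.
model = lambda X: 3.0 * X[:, 0] + np.sin(X[:, 1])
weights = local_explanation(model, np.array([1.0, 0.0]))
```

The per-feature weights can then be presented to domain specialists as the locally dominant drivers of a single prediction.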

  • Open Access

    ARTICLE

    Safety Assessment of Liquid Launch Vehicle Structures Based on Interpretable Belief Rule Base

    Gang Xiang1,2, Xiaoyu Cheng3, Wei He3,4,*, Peng Han3

    Computer Systems Science and Engineering, Vol.47, No.1, pp. 273-298, 2023, DOI:10.32604/csse.2023.037892

    Abstract A liquid launch vehicle is an important carrier in aviation, and its regular operation is essential to maintain space security. In the safety assessment of the liquid launch vehicle body structure, the assessment model must be able to learn self-response rules from various uncertain data while still providing a traceable and interpretable assessment process. Therefore, a belief rule base with interpretability (BRB-i) assessment method for the safety status of liquid launch vehicle structures is proposed, which combines data and knowledge. Moreover, an innovative whale optimization algorithm with interpretable constraints is proposed. The experiments are carried out based on the liquid launch…

  • Open Access

    ARTICLE

    A Novel Computationally Efficient Approach to Identify Visually Interpretable Medical Conditions from 2D Skeletal Data

    Praveen Jesudhas1,*, T. Raghuveera2

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 2995-3015, 2023, DOI:10.32604/csse.2023.036778

    Abstract Timely identification and treatment of medical conditions could facilitate faster recovery and better health. Existing systems address this issue using custom-built sensors, which are invasive and difficult to generalize. A low-complexity, scalable process is proposed to detect and identify medical conditions from 2D skeletal movements in video feed data. A minimal set of features relevant to distinguishing medical conditions (AMF, PVF and GDF) is derived from skeletal data on sampled frames across the entire action. The AMF (angular motion features) are derived to capture the angular motion of limbs during a specific action. The relative position of joints is represented by…
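As a concrete illustration of the angular-motion idea, the sketch below computes a joint angle on each sampled frame from 2D coordinates, plus its frame-to-frame change. The joint names and array layout are assumptions for illustration, not the paper's exact feature definition.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by 2D points a-b-c."""
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def angular_motion_feature(frames):
    """Angle trajectory and its per-frame change for one limb.

    frames: (n, 3, 2) array of shoulder, elbow, wrist positions
    on n sampled frames of an action.
    """
    angles = np.array([joint_angle(f[0], f[1], f[2]) for f in frames])
    return angles, np.diff(angles)   # trajectory and angular motion

frames = np.array([
    [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]],   # straight arm: 180 degrees
    [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]],   # bent arm: 90 degrees
])
angles, motion = angular_motion_feature(frames)   # angles [180, 90], motion [-90]
```

A classifier over such per-action angle trajectories avoids custom sensors entirely, which is the low-complexity property the abstract emphasises.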

  • Open Access

    ARTICLE

    A Processor Performance Prediction Method Based on Interpretable Hierarchical Belief Rule Base and Sensitivity Analysis

    Chen Wei-wei1, He Wei1,2,*, Zhu Hai-long1, Zhou Guo-hui1, Mu Quan-qi1, Han Peng1

    CMC-Computers, Materials & Continua, Vol.74, No.3, pp. 6119-6143, 2023, DOI:10.32604/cmc.2023.035743

    Abstract The prediction of processor performance has important reference significance for future processors. Both the accuracy and rationality of the prediction results are required. The hierarchical belief rule base (HBRB) can initially provide a solution to low prediction accuracy. However, the interpretability of the model and the traceability of the results still warrant further investigation. Therefore, a processor performance prediction method based on an interpretable hierarchical belief rule base (HBRB-I) and global sensitivity analysis (GSA) is proposed. The method can yield more reliable prediction results. Evidential reasoning (ER) is first used to evaluate the historical data of the processor, followed by a…

  • Open Access

    ARTICLE

    An Interpretable CNN for the Segmentation of the Left Ventricle in Cardiac MRI by Real-Time Visualization

    Jun Liu1, Geng Yuan2, Changdi Yang2, Houbing Song3, Liang Luo4,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.135, No.2, pp. 1571-1587, 2023, DOI:10.32604/cmes.2022.023195

    Abstract The interpretability of deep learning models has emerged as a compelling area in artificial intelligence research. The safety criteria for medical imaging are highly stringent, and models are required to provide an explanation. However, existing convolutional neural network solutions for left ventricular segmentation are viewed only in terms of inputs and outputs. Thus, the interpretability of CNNs has come into the spotlight. Since medical imaging data are limited, many popular fine-tuning methods build medical imaging models from networks pretrained on the massive public ImageNet dataset via transfer learning. Unfortunately, this generates many unreliable parameters and makes…

  • Open Access

    ARTICLE

    Detecting Deepfake Images Using Deep Learning Techniques and Explainable AI Methods

    Wahidul Hasan Abir1, Faria Rahman Khanam1, Kazi Nabiul Alam1, Myriam Hadjouni2, Hela Elmannai3, Sami Bourouis4, Rajesh Dey5, Mohammad Monirujjaman Khan1,*

    Intelligent Automation & Soft Computing, Vol.35, No.2, pp. 2151-2169, 2023, DOI:10.32604/iasc.2023.029653

    Abstract Nowadays, deepfake is wreaking havoc on society. Deepfake content is created with the help of artificial intelligence and machine learning to replace one person’s likeness with another person in pictures or recorded videos. Although visual media manipulations are not new, the introduction of deepfakes has marked a breakthrough in creating fake media and information. These manipulated pictures and videos will undoubtedly have an enormous societal impact. Deepfake uses the latest technology like Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) to construct automated methods for creating fake content that is becoming increasingly difficult to detect with the human…

  • Open Access

    ARTICLE

    An Interpretable Artificial Intelligence Based Smart Agriculture System

    Fariza Sabrina1,*, Shaleeza Sohail2, Farnaz Farid3, Sayka Jahan4, Farhad Ahamed5, Steven Gordon6

    CMC-Computers, Materials & Continua, Vol.72, No.2, pp. 3777-3797, 2022, DOI:10.32604/cmc.2022.026363

    Abstract With the increasing world population, the demand for food production has increased exponentially. An Internet of Things (IoT) based smart agriculture system can play a vital role in optimising crop yield by managing crop requirements in real time. Interpretability can be an important factor in making such systems trusted and easily adopted by farmers. In this paper, we propose a novel artificial intelligence-based agriculture system that uses IoT data to monitor the environment and alerts farmers to take the required actions for maintaining ideal conditions for crop production. The strength of the proposed system is in its interpretability, which makes it easy for…

  • Open Access

    ARTICLE

    Interpretable and Adaptable Early Warning Learning Analytics Model

    Shaleeza Sohail1, Atif Alvi2,*, Aasia Khanum3

    CMC-Computers, Materials & Continua, Vol.71, No.2, pp. 3211-3225, 2022, DOI:10.32604/cmc.2022.023560

    Abstract Major issues currently restricting the use of learning analytics are the lack of interpretability and adaptability of the machine learning models used in this domain. Interpretability makes it easy for stakeholders to understand the working of these models, and adaptability makes it easy to use the same model for multiple cohorts and courses in educational institutions. Recently, some models in learning analytics have been constructed with interpretability in mind, but their interpretability is not quantified, and adaptability is not specifically considered in this domain. This paper presents a new framework based on hybrid statistical fuzzy theory to overcome these…

  • Open Access

    ARTICLE

    Bayesian Rule Modeling for Interpretable Mortality Classification of COVID-19 Patients

    Jiyoung Yun, Mainak Basak, Myung-Mook Han*

    CMC-Computers, Materials & Continua, Vol.69, No.3, pp. 2827-2843, 2021, DOI:10.32604/cmc.2021.017266

    Abstract Coronavirus disease 2019 (COVID-19) has been termed a “Pandemic Disease” that has infected many people and caused many deaths on a nearly unprecedented level. As more people are infected each day, it continues to pose a serious threat to humanity worldwide. As a result, healthcare systems around the world are facing a shortage of medical space such as wards and sickbeds. In most cases, healthy people experience tolerable symptoms if they are infected. However, in other cases, patients may suffer severe symptoms and require treatment in an intensive care unit. Thus, hospitals should select patients who have a high risk…

  • Open Access

    ARTICLE

    Knowledge Graph Representation Reasoning for Recommendation System

    Tao Li, Hao Li*, Sheng Zhong, Yan Kang, Yachuan Zhang, Rongjing Bu, Yang Hu

    Journal of New Media, Vol.2, No.1, pp. 21-30, 2020, DOI:10.32604/jnm.2020.09767

    Abstract In view of the low interpretability of existing collaborative filtering recommendation algorithms and the difficulty of extracting information in content-based recommendation algorithms, we propose an efficient KGRS model. KGRS first obtains reasoning paths from the knowledge graph and embeds the entities of the paths into vectors using the TransD knowledge representation learning algorithm, then uses an LSTM and a soft attention mechanism to capture the semantics of each reasoning path, and then uses convolution and pooling operations to distinguish the importance of different reasoning paths. Finally, the prediction ratings are obtained through a fully connected layer and a sigmoid function, and the items are sorted…
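The final attention-then-sigmoid scoring step described in this abstract can be sketched numerically. This is a simplified numpy illustration of soft attention over path embeddings followed by a sigmoid rating; the shapes and weight names are assumptions, and the LSTM and convolution stages of KGRS are omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def score_item(path_embeddings, att_w, out_w):
    """Soft attention over reasoning-path embeddings, then a sigmoid rating.

    path_embeddings: (p, d) array, one vector per knowledge-graph path.
    att_w: (d,) attention projection; out_w: (d,) output-layer weights.
    """
    logits = path_embeddings @ att_w              # relevance of each path
    alpha = np.exp(logits - logits.max())
    alpha /= alpha.sum()                          # attention weights sum to 1
    pooled = alpha @ path_embeddings              # weighted path combination
    return sigmoid(pooled @ out_w)                # predicted rating in (0, 1)

# Three illustrative 4-dimensional path embeddings for one user-item pair
paths = np.array([[0.2, 0.1, 0.0, 0.3],
                  [0.5, 0.4, 0.1, 0.0],
                  [0.1, 0.0, 0.2, 0.2]])
rating = score_item(paths, att_w=np.ones(4), out_w=np.ones(4))
```

Items can then be ranked for recommendation by this predicted rating, with the attention weights indicating which reasoning paths drove the score.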
