Search Results (25)
  • Open Access

    ARTICLE

    CARE: Comprehensive Artificial Intelligence Techniques for Reliable Autism Evaluation in Pediatric Care

    Jihoon Moon1, Jiyoung Woo2,*

    CMC-Computers, Materials & Continua, Vol.85, No.1, pp. 1383-1425, 2025, DOI:10.32604/cmc.2025.067784 - 29 August 2025

    Abstract Improving early diagnosis of autism spectrum disorder (ASD) in children increasingly relies on predictive models that are reliable and accessible to non-experts. This study aims to develop such models using Python-based tools to improve ASD diagnosis in clinical settings. We performed exploratory data analysis to ensure data quality and identify key patterns in pediatric ASD data. We selected the categorical boosting (CatBoost) algorithm to effectively handle the large number of categorical variables. We used the PyCaret automated machine learning (AutoML) tool to make the models user-friendly for clinicians without extensive machine learning expertise. In addition,…
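    As a rough, stdlib-only illustration of the AutoML "compare models" step this abstract leans on (PyCaret's compare_models trains candidates and keeps the best by cross-validated score), here is a sketch with two toy classifiers and leave-one-out accuracy; the data, labels, and candidate models are hypothetical, not the authors' pipeline:

    ```python
    # AutoML-style model comparison sketch: score each candidate classifier
    # by leave-one-out accuracy and keep the best. Hypothetical toy data.

    def majority_class(train):
        labels = [y for _, y in train]
        return max(set(labels), key=labels.count)

    def baseline_predict(train, x):
        # Predicts the majority class regardless of features.
        return majority_class(train)

    def one_rule_predict(train, x):
        # "One rule": predict the majority label among samples sharing x[0].
        votes = {}
        for feats, y in train:
            votes.setdefault(feats[0], []).append(y)
        if x[0] in votes:
            labels = votes[x[0]]
            return max(set(labels), key=labels.count)
        return majority_class(train)

    def loo_accuracy(predict, data):
        # Leave-one-out cross-validation accuracy.
        hits = 0
        for i, (x, y) in enumerate(data):
            train = data[:i] + data[i + 1:]
            hits += predict(train, x) == y
        return hits / len(data)

    data = [((0,), "ASD"), ((0,), "ASD"), ((0,), "ASD"), ((0,), "ASD"),
            ((0,), "ASD"), ((1,), "TD"), ((1,), "TD"), ((1,), "TD")]

    candidates = {"baseline": baseline_predict, "one_rule": one_rule_predict}
    scores = {name: loo_accuracy(f, data) for name, f in candidates.items()}
    best = max(scores, key=scores.get)
    print(best, scores[best])
    ```

    The same loop generalizes to any set of candidate models that expose a common train/predict interface, which is essentially what AutoML tooling automates at scale.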

  • Open Access

    ARTICLE

    An IoT-Enabled Hybrid DRL-XAI Framework for Transparent Urban Water Management

    Qamar H. Naith1,*, H. Mancy2,3

    CMES-Computer Modeling in Engineering & Sciences, Vol.144, No.1, pp. 387-405, 2025, DOI:10.32604/cmes.2025.066917 - 31 July 2025

    Abstract Effective and transparent water distribution is difficult to sustain as urban infrastructure ages. With improved control systems in place to detect leakage, pressure variability, and energy inefficiency, issues that previously went unnoticed are now being recognized. This paper presents a hybrid framework that combines Multi-Agent Deep Reinforcement Learning (MADRL) with Shapley Additive Explanations (SHAP)-based Explainable AI (XAI) for adaptive and interpretable water resource management. In the methodology, the agents perform decentralized learning of the control policies for the pumps and valves based on the real-time…
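    The Shapley attribution idea underlying SHAP can be computed exactly for a tiny example: average each feature's marginal contribution over all orderings. The three "features" and the value function below are hypothetical, not drawn from the paper:

    ```python
    # Exact Shapley values for a 3-feature toy "model output" function,
    # averaging marginal contributions over all orderings.
    from itertools import permutations
    from math import factorial

    def shapley(players, value):
        n = len(players)
        phi = {p: 0.0 for p in players}
        for order in permutations(players):
            coalition = set()
            for p in order:
                before = value(frozenset(coalition))
                coalition.add(p)
                phi[p] += value(frozenset(coalition)) - before
        return {p: v / factorial(n) for p, v in phi.items()}

    # Hypothetical model output for each subset of input features.
    payoffs = {
        frozenset(): 0.0,
        frozenset({"pressure"}): 4.0,
        frozenset({"flow"}): 2.0,
        frozenset({"leak"}): 0.0,
        frozenset({"pressure", "flow"}): 10.0,
        frozenset({"pressure", "leak"}): 4.0,
        frozenset({"flow", "leak"}): 2.0,
        frozenset({"pressure", "flow", "leak"}): 10.0,
    }
    phi = shapley(["pressure", "flow", "leak"], payoffs.__getitem__)
    print(phi)  # attributions sum to the full-model output, 10.0
    ```

    SHAP itself approximates these values efficiently for real models; the brute-force version above is only feasible for a handful of features but makes the averaging-over-orderings definition concrete.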

  • Open Access

    ARTICLE

    Enhancing Healthcare Data Privacy in Cloud IoT Networks Using Anomaly Detection and Optimization with Explainable AI (ExAI)

    Jitendra Kumar Samriya1, Virendra Singh2, Gourav Bathla3, Meena Malik4, Varsha Arya5,6, Wadee Alhalabi7, Brij B. Gupta8,9,10,11,*

    CMC-Computers, Materials & Continua, Vol.84, No.2, pp. 3893-3910, 2025, DOI:10.32604/cmc.2025.063242 - 03 July 2025

    Abstract The integration of the Internet of Things (IoT) into healthcare systems improves patient care, boosts operational efficiency, and contributes to cost-effective healthcare delivery. However, overcoming several associated challenges, such as data security, interoperability, and ethical concerns, is crucial to realizing the full potential of IoT in healthcare. Real-time anomaly detection plays a key role in protecting patient data and maintaining device integrity amidst the additional security risks posed by interconnected systems. In this context, this paper presents a novel method for healthcare data privacy analysis. The technique is based on the identification of anomalies in…
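    A minimal sketch of the kind of statistical anomaly detection such systems build on: flag readings whose z-score exceeds a threshold. The vitals stream and threshold below are hypothetical, not the authors' dataset or method:

    ```python
    # Z-score anomaly detection over a stream of sensor readings.
    from statistics import mean, stdev

    def zscore_anomalies(readings, threshold=3.0):
        mu, sigma = mean(readings), stdev(readings)
        return [i for i, x in enumerate(readings)
                if abs(x - mu) / sigma > threshold]

    # Simulated heart-rate stream with one injected spike at index 5.
    heart_rate = [72, 75, 71, 74, 73, 180, 72, 76, 74, 73]
    print(zscore_anomalies(heart_rate, threshold=2.0))  # → [5]
    ```

    Production IoT pipelines typically replace the global mean/stdev with rolling-window statistics or learned models, but the flag-on-deviation structure is the same.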

  • Open Access

    ARTICLE

    FSFS: A Novel Statistical Approach for Fair and Trustworthy Impactful Feature Selection in Artificial Intelligence Models

    Ali Hamid Farea1,*, Iman Askerzade1,2, Omar H. Alhazmi3, Savaş Takan4

    CMC-Computers, Materials & Continua, Vol.84, No.1, pp. 1457-1484, 2025, DOI:10.32604/cmc.2025.064872 - 09 June 2025

    Abstract Feature selection (FS) is a pivotal pre-processing step in developing data-driven models, influencing reliability, performance and optimization. Although existing FS techniques can yield high-performance metrics for certain models, they do not invariably guarantee the extraction of the most critical or impactful features. Prior literature underscores the significance of equitable FS practices and has proposed diverse methodologies for the identification of appropriate features. However, the challenge of discerning the most relevant and influential features persists, particularly in the context of the exponential growth and heterogeneity of big data—a challenge that is increasingly salient in modern artificial…
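    As a stdlib-only illustration of statistics-driven feature selection in the spirit this abstract describes (not the FSFS method itself): score each feature by the absolute Pearson correlation with the target and keep the top-k. The dataset and k are hypothetical:

    ```python
    # Filter-style feature selection: rank features by |Pearson r| with the
    # target, keep the k highest. Hypothetical toy columns and target.
    from statistics import mean

    def pearson(xs, ys):
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy)

    def select_features(columns, target, k):
        scores = {name: abs(pearson(col, target)) for name, col in columns.items()}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    columns = {
        "signal":  [1.0, 2.0, 3.0, 4.0, 5.0],   # perfectly correlated
        "inverse": [5.0, 4.0, 3.0, 2.0, 1.0],   # perfectly anti-correlated
        "noise":   [2.0, 9.0, 1.0, 8.0, 3.0],   # weakly related
    }
    target = [10.0, 20.0, 30.0, 40.0, 50.0]
    print(select_features(columns, target, k=2))
    ```

    Correlation filters are fast but only see pairwise linear effects; that gap between "high-scoring" and "genuinely impactful" features is precisely the problem the abstract motivates.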

  • Open Access

    ARTICLE

    A New Cybersecurity Approach Enhanced by xAI-Derived Rules to Improve Network Intrusion Detection and SIEM

    Federica Uccello1,2, Marek Pawlicki3,4, Salvatore D'Antonio1, Rafał Kozik3,4, Michał Choraś3,4,*

    CMC-Computers, Materials & Continua, Vol.83, No.2, pp. 1607-1621, 2025, DOI:10.32604/cmc.2025.062801 - 16 April 2025

    Abstract The growing sophistication of cyberthreats, among others the Distributed Denial of Service attacks, has exposed limitations in traditional rule-based Security Information and Event Management systems. While machine learning–based intrusion detection systems can capture complex network behaviours, their “black-box” nature often limits trust and actionable insight for security operators. This study introduces a novel approach that integrates Explainable Artificial Intelligence—xAI—with the Random Forest classifier to derive human-interpretable rules, thereby enhancing the detection of Distributed Denial of Service (DDoS) attacks. The proposed framework combines traditional static rule formulation with advanced xAI techniques—SHapley Additive exPlanations and Scoped Rules…
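    A simplified sketch of the "derive a human-readable rule from a learned model" step: fit a single decision stump on one traffic feature and emit the threshold as an if-then rule a SIEM could consume. The feature name, data, and rule format are hypothetical, not the paper's pipeline:

    ```python
    # Turn a learned threshold into a readable DDoS detection rule.

    def best_stump(values, labels):
        # Try midpoints between sorted values; return (threshold, accuracy)
        # for the rule "attack if value > threshold".
        pairs = sorted(zip(values, labels))
        best = (None, 0.0)
        for i in range(len(pairs) - 1):
            thr = (pairs[i][0] + pairs[i + 1][0]) / 2
            acc = sum((v > thr) == y for v, y in zip(values, labels)) / len(labels)
            if acc > best[1]:
                best = (thr, acc)
        return best

    # Packets per second for benign (False) vs. DDoS (True) flows.
    pps = [120, 150, 90, 110, 5000, 6200, 5800, 4900]
    is_attack = [False, False, False, False, True, True, True, True]

    thr, acc = best_stump(pps, is_attack)
    rule = f"IF packets_per_second > {thr:.0f} THEN flag_ddos"
    print(rule, acc)
    ```

    The study's actual rules come from explaining a Random Forest with SHAP and Scoped Rules rather than from a single stump, but the end product is the same shape: explicit conditions an operator can audit.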

  • Open Access

    ARTICLE

    Intrumer: A Multi Module Distributed Explainable IDS/IPS for Securing Cloud Environment

    Nazreen Banu A*, S.K.B. Sangeetha

    CMC-Computers, Materials & Continua, Vol.82, No.1, pp. 579-607, 2025, DOI:10.32604/cmc.2024.059805 - 03 January 2025

    Abstract The increasing use of cloud-based devices has reached the critical point of cybersecurity and unwanted network traffic. Cloud environments pose significant challenges in maintaining privacy and security. Global approaches, such as IDS, have been developed to tackle these issues. However, most conventional Intrusion Detection System (IDS) models struggle with unseen cyberattacks and complex high-dimensional data. In fact, this paper introduces the idea of a novel distributed explainable and heterogeneous transformer-based intrusion detection system, named INTRUMER, which offers balanced accuracy, reliability, and security in cloud settings by multiple modules working together within it. The traffic captured…

  • Open Access

    ARTICLE

    Modeling and Predictive Analytics of Breast Cancer Using Ensemble Learning Techniques: An Explainable Artificial Intelligence Approach

    Avi Deb Raha1, Fatema Jannat Dihan2, Mrityunjoy Gain1, Saydul Akbar Murad3, Apurba Adhikary2, Md. Bipul Hossain2, Md. Mehedi Hassan1, Taher Al-Shehari4, Nasser A. Alsadhan5, Mohammed Kadrie4, Anupam Kumar Bairagi1,*

    CMC-Computers, Materials & Continua, Vol.81, No.3, pp. 4033-4048, 2024, DOI:10.32604/cmc.2024.057415 - 19 December 2024

    Abstract Breast cancer stands as one of the world’s most perilous and formidable diseases, having recently surpassed lung cancer as the most prevalent cancer type. This disease arises when cells in the breast undergo unregulated proliferation, resulting in the formation of a tumor that has the capacity to invade surrounding tissues. It is not confined to a specific gender; both men and women can be diagnosed with breast cancer, although it is more frequently observed in women. Early detection is pivotal in mitigating its mortality rate.…
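    The core ensemble idea referenced here can be sketched as hard voting: several base classifiers each predict a label, and the ensemble reports the majority. The three "models" and their predictions below are hypothetical stand-ins, not the paper's learners:

    ```python
    # Hard-voting ensemble: combine per-model predictions by majority vote.
    from collections import Counter

    def vote(predictions):
        # predictions: one label per base model for a single sample.
        return Counter(predictions).most_common(1)[0][0]

    # Per-model predictions for four tumour samples.
    model_a = ["malignant", "benign", "malignant", "benign"]
    model_b = ["malignant", "malignant", "malignant", "benign"]
    model_c = ["benign", "benign", "malignant", "benign"]

    ensemble = [vote(p) for p in zip(model_a, model_b, model_c)]
    print(ensemble)  # → ['malignant', 'benign', 'malignant', 'benign']
    ```

    Voting smooths out individual models' mistakes as long as their errors are not strongly correlated, which is why ensembles often beat their best single member.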

  • Open Access

    ARTICLE

    Explainable Artificial Intelligence (XAI) Model for Cancer Image Classification

    Amit Singhal1, Krishna Kant Agrawal2, Angeles Quezada3, Adrian Rodriguez Aguiñaga4, Samantha Jiménez4, Satya Prakash Yadav5,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.141, No.1, pp. 401-441, 2024, DOI:10.32604/cmes.2024.051363 - 20 August 2024

    Abstract The use of Explainable Artificial Intelligence (XAI) models becomes increasingly important for making decisions in smart healthcare environments. It is to make sure that decisions are based on trustworthy algorithms and that healthcare workers understand the decisions made by these algorithms. These models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence. Nevertheless, the intricate nature of the healthcare field necessitates the utilization of sophisticated models to classify cancer images. This research presents an advanced investigation of XAI models to classify cancer images. It describes the different levels of explainability…

  • Open Access

    ARTICLE

    MAIPFE: An Efficient Multimodal Approach Integrating Pre-Emptive Analysis, Personalized Feature Selection, and Explainable AI

    Moshe Dayan Sirapangi1, S. Gopikrishnan1,*

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 2229-2251, 2024, DOI:10.32604/cmc.2024.047438 - 15 May 2024

    Abstract Medical Internet of Things (IoT) devices are becoming more and more common in healthcare. This has created a huge need for advanced predictive health modeling strategies that can make good use of the growing amount of multimodal data to find potential health risks early and help individuals in a personalized way. Existing methods, while useful, have limitations in predictive accuracy, delay, personalization, and user interpretability, requiring a more comprehensive and efficient approach to harness modern medical IoT devices. MAIPFE is a multimodal approach integrating pre-emptive analysis, personalized feature selection, and explainable AI for real-time health…

  • Open Access

    ARTICLE

    Adaptation of Federated Explainable Artificial Intelligence for Efficient and Secure E-Healthcare Systems

    Rabia Abid1, Muhammad Rizwan2, Abdulatif Alabdulatif3,*, Abdullah Alnajim4, Meznah Alamro5, Mourade Azrour6

    CMC-Computers, Materials & Continua, Vol.78, No.3, pp. 3413-3429, 2024, DOI:10.32604/cmc.2024.046880 - 26 March 2024

    Abstract Explainable Artificial Intelligence (XAI) has an advanced feature to enhance the decision-making feature and improve the rule-based technique by using more advanced Machine Learning (ML) and Deep Learning (DL) based algorithms. In this paper, we chose e-healthcare systems for efficient decision-making and data classification, especially in data security, data handling, diagnostics, laboratories, and decision-making. Federated Machine Learning (FML) is a new and advanced technology that helps to maintain privacy for Personal Health Records (PHR) and handle a large amount of medical data effectively. In this context, XAI, along with FML, increases efficiency and improves the…
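    The aggregation step at the heart of federated learning can be sketched as Federated Averaging: clients train locally and the server averages their model weights, weighted by local sample counts, so raw records never leave the client. The weight vectors and counts below are hypothetical:

    ```python
    # Federated Averaging: size-weighted mean of client weight vectors.

    def fed_avg(client_weights, client_sizes):
        total = sum(client_sizes)
        dim = len(client_weights[0])
        return [
            sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)
        ]

    # Three hospitals share model weights, never raw patient records.
    weights = [[0.2, 1.0], [0.4, 2.0], [0.6, 3.0]]
    sizes = [100, 100, 200]   # local record counts per hospital
    global_weights = fed_avg(weights, sizes)
    print(global_weights)  # ≈ [0.45, 2.25]
    ```

    In a full system this averaging runs every communication round, with the updated global weights broadcast back to clients; XAI methods are then applied to the shared global model rather than to any one hospital's private data.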

Displaying results 1-10 of 25 (page 1 of 3).