Special Issues

Bridging the Gap: AutoML and Explainable AI for Industrial and Healthcare Innovations

Submission Deadline: 15 December 2025 (closed)

Guest Editors

Prof. Dr. Jihoon Moon

Email: jmoon25@duksung.ac.kr

Affiliation: Department of Data Science, Duksung Women's University, 33 Samyang-ro 144-gil, Dobong-gu, Seoul 01369, Republic of Korea

Research Interests: AutoML, explainable AI (XAI), EDA & XAI integration, deep learning, AI-driven industrial analytics


Prof. Dr. Jehyeok Rew

Email: jhrew@duksung.ac.kr

Affiliation: Department of Data Science, Duksung Women's University, 33 Samyang-ro 144-gil, Dobong-gu, Seoul 01369, Republic of Korea

Research Interests: explainable AI, AI for industrial automation, AI-driven decision systems, computer vision, AI in bioinformatics


Prof. Dr. Hyeonwoo Kim

Email: hwkim24@sch.ac.kr

Affiliation: Department of Computer Science and Engineering, Soonchunhyang University, Asan 31538, Republic of Korea

Research Interests: AutoML, AI-driven data science, Python & R in industry, AI-driven cybersecurity, data engineering


Summary

The rapid growth of artificial intelligence (AI) has led to increased interest in automated machine learning (AutoML) and explainable AI (XAI). As AI systems become more complex, the ability to automate machine learning processes while ensuring transparency and interpretability is crucial for industrial and healthcare applications. This Special Issue explores cutting-edge research in AutoML frameworks, the role of XAI in making AI models more interpretable, and the integration of exploratory data analysis (EDA) with explainability techniques.

This Special Issue invites high-quality research contributions that address the challenges of automated AI model selection, feature engineering, and hyperparameter tuning in real-world applications. We also encourage research on best practices for Python and R in industrial and healthcare settings, enabling AI practitioners to streamline workflows and enhance reproducibility. Studies on AI-driven decision-making, industry-specific applications of XAI, and methodologies that help newcomers develop into expert data scientists are also welcome.
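
To make the intended scope concrete, here is a minimal sketch of an AutoML-with-explainability workflow of the kind this Special Issue targets. It uses scikit-learn only; the synthetic dataset and the two-model candidate pool are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: automated model selection and hyperparameter tuning,
# followed by a model-agnostic explanation step (scikit-learn only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for real industrial or clinical data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# "AutoML-lite": search over candidate models and their hyperparameters.
candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300]}),
]
best_score, best_model = -1.0, None
for est, grid in candidates:
    search = GridSearchCV(est, grid, cv=5).fit(X_tr, y_tr)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

# XAI step: permutation importance explains the selected model's behavior.
result = permutation_importance(best_model, X_te, y_te,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

The same pattern scales up to full AutoML frameworks such as PyCaret or auto-sklearn and to richer explainers such as SHAP.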

Suggested Themes:
· Advancements in AutoML & Automated Feature Engineering for Scalable AI
· Explainable AI (XAI) for Transparent and Trustworthy AI Systems
· EDA vs. XAI: A Comparative Study on Model Interpretability and Data Exploration
· AI-driven Decision-Making in Industry & Healthcare with Enhanced Explainability
· Python & R for AI-driven Industrial and Medical Applications
· AI-driven Cybersecurity & Risk Mitigation with Explainable Approaches
· Human-in-the-Loop AI Systems for Adaptive and Interpretable Learning
· AI for Personalized Healthcare & Precision Medicine through Transparent AI
· Automated & Trustworthy AI: Bridging AutoML and Explainability
· AI for Industry & Society: Ensuring Automation with Interpretability
· AI for Smart & Sustainable Systems with Adaptive and Explainable Models
· AI for Science & Multimodal Data: Advancing Discovery through AutoML and XAI


Keywords

AutoML, Explainable AI, XAI, Exploratory Data Analysis (EDA), Python for AI, R for AI, AI-driven Industrial Analytics, Trustworthy AI, AI-driven Decision-Making, Healthcare AI

Published Papers


  • Open Access

    ARTICLE

    ECSA-Net: A Lightweight Attention-Based Deep Learning Model for Eye Disease Detection

    Sara Tehsin, Muhammad John Abbas, Inzamam Mashood Nasir, Fadwa Alrowais, Reham Abualhamayel, Abdulsamad Ebrahim Yahya, Radwa Marzouk
    CMC-Computers, Materials & Continua, DOI:10.32604/cmc.2026.076515
    (This article belongs to the Special Issue: Bridging the Gap: AutoML and Explainable AI for Industrial and Healthcare Innovations)
    Abstract Globally, diabetes and glaucoma are among the leading causes of severe vision loss and blindness. Treating these vision disorders effectively requires timely diagnosis, yet conventional methods such as fundus photography, optical coherence tomography (OCT), and slit-lamp imaging depend heavily on expert interpretation of the images, making them labor-intensive. Clinical settings also contend with inter-observer variability and the limited scalability of these diagnostic devices. To solve these problems, we have developed the Efficient Channel-Spatial Attention Network (ECSA-Net), a new deep learning-based…
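
    As a hedged illustration only: the paper's actual architecture is not reproduced here, but the channel-spatial attention idea named in the title can be sketched in PyTorch by pairing an ECA-style channel branch with a CBAM-style spatial branch. The module name, kernel sizes, and tensor shapes below are assumptions.

    ```python
    # Hedged sketch of a channel-spatial attention block: ECA-style channel
    # attention followed by CBAM-style spatial attention. Not the paper's code.
    import torch
    import torch.nn as nn

    class ChannelSpatialAttention(nn.Module):  # hypothetical name
        def __init__(self, kernel_size: int = 3):
            super().__init__()
            # Channel branch: 1D conv over the pooled channel descriptor.
            self.channel_conv = nn.Conv1d(1, 1, kernel_size,
                                          padding=kernel_size // 2, bias=False)
            # Spatial branch: 2D conv over stacked mean/max maps.
            self.spatial_conv = nn.Conv2d(2, 1, 7, padding=3, bias=False)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, _, _ = x.shape
            w = x.mean(dim=(2, 3)).view(b, 1, c)          # global average pool
            w = torch.sigmoid(self.channel_conv(w)).view(b, c, 1, 1)
            x = x * w                                     # reweight channels
            s = torch.cat([x.mean(1, keepdim=True),
                           x.max(1, keepdim=True).values], dim=1)
            return x * torch.sigmoid(self.spatial_conv(s))  # reweight locations

    feats = torch.randn(2, 64, 32, 32)                    # toy feature maps
    print(ChannelSpatialAttention()(feats).shape)         # (2, 64, 32, 32)
    ```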

  • Open Access

    ARTICLE

    From Hardening to Understanding: Adversarial Training vs. CF-Aug for Explainable Cyber-Threat Detection System

    Malik Al-Essa, Mohammad Qatawneh, Ahmad Sami Al-Shamayleh, Orieb Abualghanam, Wesam Almobaideen
    CMC-Computers, Materials & Continua, DOI:10.32604/cmc.2026.076608
    (This article belongs to the Special Issue: Bridging the Gap: AutoML and Explainable AI for Industrial and Healthcare Innovations)
    Abstract Machine Learning (ML) intrusion detection systems (IDS) are vulnerable to evasion: small, protocol-valid manipulations can push samples across brittle decision boundaries. We study two complementary remedies that reshape the learner in distinct ways. Adversarial Training (AT) exposes the model to worst-case, in-threat perturbations during learning to thicken local margins; Counterfactual Augmentation (CF-Aug) adds near-boundary exemplars that are explicitly constrained to be feasible, causally consistent, and operationally meaningful for defenders. The main goal of this work is to investigate and compare how AT and CF-Aug reshape the decision surface of the IDS. eXplainable Artificial Intelligence…
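
    As a hedged illustration of the adversarial-training side of this comparison (the paper's threat model and the CF-Aug procedure are not reproduced), the PyTorch sketch below mixes clean and FGSM-perturbed batches in a single training step; the toy model, data, and epsilon are assumptions.

    ```python
    # Hedged sketch: one adversarial-training (AT) step with FGSM perturbations,
    # illustrating how worst-case examples are folded into the loss.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    def fgsm(x, y, eps=0.05):
        """One-step worst-case perturbation inside an L-infinity ball."""
        x = x.clone().detach().requires_grad_(True)
        loss_fn(model(x), y).backward()
        return (x + eps * x.grad.sign()).detach()

    x = torch.randn(32, 20)                   # toy feature batch
    y = torch.randint(0, 2, (32,))            # toy labels
    x_adv = fgsm(x, y)                        # in-threat perturbation
    opt.zero_grad()                           # clear grads left by fgsm()
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()
    ```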

  • Open Access

    ARTICLE

    CARE: Comprehensive Artificial Intelligence Techniques for Reliable Autism Evaluation in Pediatric Care

    Jihoon Moon, Jiyoung Woo
    CMC-Computers, Materials & Continua, Vol.85, No.1, pp. 1383-1425, 2025, DOI:10.32604/cmc.2025.067784
    (This article belongs to the Special Issue: Bridging the Gap: AutoML and Explainable AI for Industrial and Healthcare Innovations)
    Abstract Improving early diagnosis of autism spectrum disorder (ASD) in children increasingly relies on predictive models that are reliable and accessible to non-experts. This study aims to develop such models using Python-based tools to improve ASD diagnosis in clinical settings. We performed exploratory data analysis to ensure data quality and identify key patterns in pediatric ASD data. We selected the categorical boosting (CatBoost) algorithm to effectively handle the large number of categorical variables. We used the PyCaret automated machine learning (AutoML) tool to make the models user-friendly for clinicians without extensive machine learning expertise. In addition,…
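
    The workflow this abstract describes maps naturally onto a short PyCaret script. The sketch below is hedged and is not the authors' code: the CSV path and column names are hypothetical, and it assumes PyCaret with CatBoost installed.

    ```python
    # Hedged sketch: PyCaret AutoML with CatBoost plus a SHAP-based
    # interpretation step, mirroring the workflow the abstract outlines.
    import pandas as pd
    from pycaret.classification import setup, create_model, tune_model, interpret_model

    df = pd.read_csv("asd_screening.csv")      # hypothetical dataset path
    setup(data=df, target="asd_diagnosis",     # hypothetical target column
          session_id=42)                       # fixed seed for reproducibility

    cat = create_model("catboost")             # CatBoost for categorical features
    cat = tune_model(cat)                      # automated hyperparameter search
    interpret_model(cat)                       # SHAP summary plot for clinicians
    ```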
