
Open Access

ARTICLE

Explainable Ensemble Learning Framework for Early Detection of Autism Spectrum Disorder: Enhancing Trust, Interpretability and Reliability in AI-Driven Healthcare

Menwa Alshammeri1,2,*, Noshina Tariq3, N. Z. Jhanji4,5, Mamoona Humayun6, Muhammad Attique Khan7
1 Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka, 72388, Saudi Arabia
2 King Salman Center for Disability Research, Riyadh, 11614, Saudi Arabia
3 Department of Artificial Intelligence and Data Science, National University of Computer and Emerging Sciences, Islamabad, 44000, Pakistan
4 School of Computer Science, Taylor’s University, Subang Jaya, 47500, Malaysia
5 Office of Research and Development, Asia University, Taichung, 413305, Taiwan
6 School of Computing, Engineering and the Built Environment, University of Roehampton, London, SW15 5PJ, UK
7 Center of Artificial Intelligence, Prince Mohammad bin Fahd University, Alkhobar, 31952, Saudi Arabia
* Corresponding Author: Menwa Alshammeri. Email: mhalshammeri@ju.edu.sa
(This article belongs to the Special Issue: Artificial Intelligence Models in Healthcare: Challenges, Methods, and Applications)

Computer Modeling in Engineering & Sciences https://doi.org/10.32604/cmes.2025.074627

Received 15 October 2025; Accepted 09 December 2025; Published online 30 December 2025

Abstract

Artificial Intelligence (AI) is changing healthcare by assisting with diagnosis. However, for clinicians to trust AI tools, the tools must be both accurate and easy to understand. In this study, we developed a new machine learning framework for the early detection of Autism Spectrum Disorder (ASD) in children. Our main goal was to build a model that is not only accurate in predicting ASD but also transparent in its reasoning. To this end, we combined several models, including Random Forest, XGBoost, and Neural Networks, into a single, more powerful ensemble framework. We used two types of datasets: (i) a standard behavioral dataset and (ii) a more complex multimodal dataset containing image, audio, and physiological information. The datasets were carefully preprocessed to handle missing values, redundant features, and class imbalance, ensuring fair learning. The results outperformed the state of the art: a Regularized Neural Network achieved 97.6% accuracy on the behavioral data and 98.2% on the multimodal data, while the other models also performed well, with accuracies consistently above 96%. Finally, we applied SHAP and LIME to the behavioral dataset to explain the models' predictions.
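The ensemble-plus-explainability pattern described above can be sketched in a few lines of scikit-learn. This is an illustrative toy only, not the authors' implementation: the data are synthetic, GradientBoostingClassifier stands in for XGBoost, an L2-regularized MLP stands in for the Regularized Neural Network, and permutation importance stands in for the SHAP/LIME analysis.

```python
# Illustrative soft-voting ensemble in the spirit of the framework described
# in the abstract. Synthetic data; library choices are stand-ins (see lead-in).
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier, VotingClassifier)
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

# Synthetic binary screening stand-in: 20 "behavioral" features.
X, y = make_classification(n_samples=600, n_features=20,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Soft voting averages each member's predicted class probabilities.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("nn", MLPClassifier(hidden_layer_sizes=(64,), alpha=1e-3,  # L2 reg.
                             max_iter=1000, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)

# Model-agnostic explanation: which features most affect held-out accuracy?
imp = permutation_importance(ensemble, X_te, y_te,
                             n_repeats=5, random_state=0)
top = imp.importances_mean.argsort()[::-1][:3]
print(f"held-out accuracy: {acc:.3f}, top feature indices: {top.tolist()}")
```

Soft voting is one common way to fuse heterogeneous learners; the paper's actual fusion strategy and SHAP/LIME analysis are described in the full text.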

Keywords

Autism spectrum disorder (ASD); artificial intelligence in healthcare; explainable AI (XAI); ensemble learning; machine learning; early diagnosis; model interpretability; SHAP; LIME; predictive analytics; ethical AI; healthcare trustworthiness