Special Issues

Open and Interpretable AI-ML in Human Health and Healthcare

Submission Deadline: 10 March 2023 (closed)

Guest Editors

Prof. Mohammed Al-Shehri, Majmaah University, Saudi Arabia
Piyush Kumar Shukla, University Institute of Technology, India
Dr. Manoj Kumar, University of Wollongong, United Arab Emirates
Rahim Mutlu, University of Wollongong, Australia

Summary

Developments in Artificial Intelligence (AI) and Machine Learning (ML) are having a profound impact on people's lives, affecting the health, safety, education, and other opportunities of millions. AI is ushering in a paradigm shift in healthcare, owing to the growing availability of structured and unstructured data and the rapid development of big-data analytic methods. Clinical data can take the form of demographics, medical notes, electronic recordings from medical devices (sensors), physical examinations, clinical laboratory results, and images, among other things. ML algorithms already give clinicians and researchers unprecedented insight: diagnosing diseases from histopathological examination or medical imaging, detecting malignant tumors in radiological images, identifying malignancy in photographs of skin lesions, discovering new drugs, characterizing treatment variability and patient outcomes, and guiding researchers in constructing cohorts for co-registration.
Existing deep learning models are poorly interpretable: they produce predictions without explanations or reliability guarantees. Current AI also faces a number of other obstacles, including ethical, legal, societal, and technological issues. Trustworthy and explainable AI technologies based on Deep Learning (DL) are an emerging research topic with great promise for improving high-quality healthcare. They comprise AI/DL tools and approaches that offer human-comprehensible answers, such as explanations and interpretations of disease diagnoses and forecasts, as well as suggested actions.
This special issue invites new studies and research on interpretable AI-ML approaches that produce human-readable explanations. Work submitted to this special issue should pursue the following goals:
• To improve trust and minimize analysis bias.
• To promote discussion on system designs.
• To employ and assess novel explainable AI that improves the accuracy of pathology workflows for disease diagnosis and prognosis.
• To invite researchers working on practical use cases of trustworthy AI models to explain how to add a layer of interpretability and trust to powerful algorithms, such as neural networks and ensemble methods, for delivering near real-time intelligence (a minimal illustration of one such approach appears after this list).
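
By way of illustration, below is a minimal sketch of one common post-hoc route to the kind of human-readable explanation this issue solicits: permutation feature importance computed over a tree ensemble. The library and dataset choices (scikit-learn, a random forest, the built-in breast-cancer benchmark) are illustrative assumptions by the editors, not methods prescribed by the issue.

# A minimal sketch of post-hoc interpretability for an ensemble model,
# using scikit-learn's permutation importance. The dataset and model
# are illustrative placeholders, not requirements of this special issue.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance measures how much shuffling each feature
# degrades held-out accuracy, yielding a human-readable ranking.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")

Because such model-agnostic techniques apply equally to neural networks, they are one natural way to layer interpretability over otherwise opaque predictors.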


Keywords

The special issue will highlight, but not be limited to, the following topics:
• Emerging AI-ML for the analysis of digitized pathology images
• Trustworthy AI in computational pathology
• Explainable AI for computational pathology
• Explainable AI for whole slide image (WSI) analysis
• Advanced AI for WSI registration
• AI-based systems using human-interpretable image features (HIFs) for improved clinical outcomes
• Human-level explainable AI
• Detection and discovery of predictive and prognostic tissue biomarkers
• Histopathologic biomarker assessment using advanced AI systems for accurate, personalized medicine
• AI-assisted computational pathology for cancer diagnosis
• Immunohistochemistry scoring
• Interpretable deep learning and human-understandable machine learning
• Trust and interpretability
• Theoretical aspects of explanation and interpretability in AI
