DRIVE: Diagnostic Report Integration via VLM and LLM Explanations for Explainable Vehicle Engine Fault Diagnosis
Jaeseung Lee1, Jehyeok Rew2,*
1 School of Electrical Engineering, Korea University, Seoul, Republic of Korea
2 Department of Data Science, Duksung Women’s University, Seoul, Republic of Korea
* Corresponding Author: Jehyeok Rew. Email:
(This article belongs to the Special Issue: Deep Learning for Energy Systems)
Computer Modeling in Engineering & Sciences https://doi.org/10.32604/cmes.2026.076888
Received 28 November 2025; Accepted 08 January 2026; Published online 03 April 2026
Abstract
The engine serves as the primary component that generates power and drives vehicle movement. Given its critical role, accurately diagnosing engine faults is essential for ensuring vehicle safety and reliability. Recent advances in machine learning (ML) have enabled the development of artificial intelligence (AI)-based diagnostic models with strong predictive performance. However, the lack of transparency in these models constrains user confidence in their diagnostic outcomes. While explainable AI (XAI) methods such as local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP) have been introduced to improve interpretability, their reliance on visual outputs requires manual interpretation, which can be inefficient and prone to subjectivity. To address this limitation, we propose DRIVE, a novel method for explainable vehicle engine fault diagnosis. In DRIVE, LIME and SHAP are applied to an ML-based diagnostic model, and their visual outputs are translated into textual explanations using vision-language models (VLMs). These complementary explanations are then synthesized by a large language model (LLM) into a unified diagnostic report, providing a coherent narrative of the model's reasoning and emphasizing abnormal input features. Experiments conducted on a publicly available vehicle engine fault dataset demonstrate that DRIVE not only produces accurate and transparent diagnostic rationales but also generates structured reports that enhance usability for domain experts. By integrating multiple XAI methods with multimodal LLMs, DRIVE advances the transparency, trustworthiness, and practicality of AI-driven vehicle engine fault diagnosis.
Keywords
Vehicle engine; fault diagnosis; vision-language model; large language model; explainable artificial intelligence; local interpretable model-agnostic explanations; Shapley additive explanations; energy