Open Access

ARTICLE

DRIVE: Diagnostic Report Integration via VLM and LLM Explanations for Explainable Vehicle Engine Fault Diagnosis

Jaeseung Lee1, Jehyeok Rew2,*

1 School of Electrical Engineering, Korea University, Seoul, Republic of Korea
2 Department of Data Science, Duksung Women’s University, Seoul, Republic of Korea

* Corresponding Author: Jehyeok Rew.

Computer Modeling in Engineering & Sciences 2026, 147(1), 22. https://doi.org/10.32604/cmes.2026.076888

Abstract

The engine serves as the primary component that generates power and drives vehicle movement. Given its critical role, accurately diagnosing engine faults is essential for ensuring vehicle safety and reliability. Recent advances in machine learning (ML) have enabled the development of artificial intelligence (AI)-based diagnostic models with strong predictive performance. However, the lack of transparency in these models constrains user confidence in their diagnostic outcomes. While explainable AI (XAI) methods such as local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP) have been introduced to improve interpretability, their reliance on visual outputs requires manual interpretation, which can be inefficient and prone to subjectivity. To address this limitation, we propose DRIVE, a novel method for explainable vehicle engine fault diagnosis. In DRIVE, LIME and SHAP are applied to an ML-based diagnostic model, and their visual outputs are translated into textual explanations using vision-language models (VLMs). These complementary explanations are then synthesized by a large language model (LLM) into a unified diagnostic report, providing a coherent narrative of the model's reasoning and emphasizing abnormal input features. Experiments conducted on a publicly available vehicle engine fault dataset demonstrate that DRIVE not only produces accurate and transparent diagnostic rationales but also generates structured reports that enhance usability for domain experts. By integrating multiple XAI methods with multimodal LLMs, DRIVE advances the transparency, trustworthiness, and practicality of AI-driven vehicle engine fault diagnosis.

Keywords

Vehicle engine; fault diagnosis; vision-language model; large language model; explainable artificial intelligence; local interpretable model-agnostic explanations; Shapley additive explanations; energy

Supplementary Material

Supplementary Material File

Cite This Article

APA Style
Lee, J., Rew, J. (2026). DRIVE: Diagnostic Report Integration via VLM and LLM Explanations for Explainable Vehicle Engine Fault Diagnosis. Computer Modeling in Engineering & Sciences, 147(1), 22. https://doi.org/10.32604/cmes.2026.076888
Vancouver Style
Lee J, Rew J. DRIVE: Diagnostic Report Integration via VLM and LLM Explanations for Explainable Vehicle Engine Fault Diagnosis. Comput Model Eng Sci. 2026;147(1):22. https://doi.org/10.32604/cmes.2026.076888
IEEE Style
J. Lee and J. Rew, “DRIVE: Diagnostic Report Integration via VLM and LLM Explanations for Explainable Vehicle Engine Fault Diagnosis,” Comput. Model. Eng. Sci., vol. 147, no. 1, pp. 22, 2026. https://doi.org/10.32604/cmes.2026.076888



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.