Open Access

ARTICLE

Calibrating Trust in Generative Artificial Intelligence: A Human-Centered Testing Framework with Adaptive Explainability

Sewwandi Tennakoon1, Eric Danso1, Zhenjie Zhao2,*

1 School of Computer Science, Nanjing University of Information Science and Technology, Nanjing, 210044, China
2 School of Artificial Intelligence, Nankai University, Tianjin, 300350, China

* Corresponding Author: Zhenjie Zhao.

Journal on Artificial Intelligence 2025, 7, 517-547. https://doi.org/10.32604/jai.2025.072628

Abstract

Generative Artificial Intelligence (GenAI) systems have achieved remarkable capabilities across text, code, and image generation; however, their outputs remain prone to errors, hallucinations, and biases. Because these systems offer limited transparency, users often overtrust their outputs, which can lead to misuse and decision errors. This study addresses the challenge of calibrating trust in GenAI through a human-centered testing framework enhanced with adaptive explainability. We introduce a methodology that adjusts explanations dynamically according to user expertise, model output confidence, and contextual risk factors, providing guidance that is informative but not overwhelming. The framework was evaluated on outputs from OpenAI’s Generative Pre-trained Transformer 4 (GPT-4) for text and code generation and from Stable Diffusion, a deep generative image model, for image synthesis, covering text, code, and visual modalities. A dataset of 5000 GenAI outputs was created and reviewed by a diverse group of 360 participants categorized by expertise level. Results show that adaptive explanations improve error detection rates, reduce the mean squared trust calibration error, and maintain efficient decision-making compared with both static and no-explanation conditions. The framework increased error detection by up to 16% across expertise levels, a gain that can provide practical benefits in high-stakes fields: in healthcare it may help identify diagnostic errors earlier, and in law it may prevent reliance on flawed evidence in judicial work. These improvements highlight the framework’s potential to make Artificial Intelligence (AI) deployment safer and more accountable. Visual analyses, including trust-accuracy plots, reliability diagrams, and misconception maps, show that the adaptive approach reduces overtrust and reveals patterns of misunderstanding across modalities. Statistical results confirm the robustness of these findings across novice, intermediate, and expert users. The study offers insights for designing explanations that balance completeness and simplicity to improve trust calibration and manage cognitive load. The approach has implications for safe and transparent GenAI deployment and can inform both AI interface design and policy development for responsible AI use.
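The two quantitative ideas in the abstract, a mean squared trust calibration error and an explanation policy conditioned on expertise, confidence, and risk, can be illustrated with a short sketch. The Python below is a minimal illustration under stated assumptions, not the authors’ implementation: it assumes trust is elicited on a 0–1 scale and compared against binary output correctness, and the function names and the adaptive thresholds are hypothetical.

```python
# Minimal sketch of trust calibration scoring and an adaptive-explanation
# rule. Assumptions: trust ratings are normalized to [0, 1] and each GenAI
# output carries a 0/1 correctness label. The names and thresholds here are
# illustrative, not taken from the paper.

def trust_calibration_error(trust_ratings, correctness):
    """Mean squared gap between stated trust and actual correctness.

    trust_ratings: floats in [0, 1], user-reported trust per output
    correctness:   0/1 labels, whether each output was actually correct
    """
    assert len(trust_ratings) == len(correctness)
    n = len(trust_ratings)
    return sum((t - c) ** 2 for t, c in zip(trust_ratings, correctness)) / n

def select_explanation_depth(expertise, model_confidence, risk):
    """Hypothetical adaptivity rule: more explanatory detail for novices,
    low-confidence outputs, and high-risk contexts; terse otherwise."""
    need = (1 - model_confidence) + risk + {"novice": 0.5,
                                            "intermediate": 0.25,
                                            "expert": 0.0}[expertise]
    if need > 1.0:
        return "detailed"   # evidence, uncertainty sources, counter-checks
    elif need > 0.5:
        return "moderate"   # key caveats plus a confidence summary
    return "brief"          # confidence score only

if __name__ == "__main__":
    ratings = [0.9, 0.8, 0.3, 0.7]   # toy user trust ratings
    labels = [1, 0, 0, 1]            # toy correctness labels
    print(f"Trust calibration error: {trust_calibration_error(ratings, labels):.3f}")
    print(select_explanation_depth("novice", model_confidence=0.6, risk=0.3))
```

A perfectly calibrated user would rate trust near 1 on correct outputs and near 0 on incorrect ones, driving the error toward zero; overtrust in wrong outputs (as in the second toy rating) dominates the score.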

Keywords

Generative AI; trust calibration; human-centered testing; adaptive explainability; user-centered AI; model reliability; human–AI collaboration

Cite This Article

APA Style
Tennakoon, S., Danso, E., & Zhao, Z. (2025). Calibrating Trust in Generative Artificial Intelligence: A Human-Centered Testing Framework with Adaptive Explainability. Journal on Artificial Intelligence, 7(1), 517–547. https://doi.org/10.32604/jai.2025.072628
Vancouver Style
Tennakoon S, Danso E, Zhao Z. Calibrating Trust in Generative Artificial Intelligence: A Human-Centered Testing Framework with Adaptive Explainability. J Artif Intell. 2025;7(1):517–547. https://doi.org/10.32604/jai.2025.072628
IEEE Style
S. Tennakoon, E. Danso, and Z. Zhao, “Calibrating Trust in Generative Artificial Intelligence: A Human-Centered Testing Framework with Adaptive Explainability,” J. Artif. Intell., vol. 7, no. 1, pp. 517–547, 2025. https://doi.org/10.32604/jai.2025.072628



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.