Open Access

REVIEW


Implementation of Human-AI Interaction in Reinforcement Learning: Literature Review and Case Studies

Shaoping Xiao1,*, Zhaoan Wang1, Junchao Li2, Caden Noeller1, Jiefeng Jiang3, Jun Wang4

1 Department of Mechanical Engineering, Iowa Technology Institute, The University of Iowa, Iowa City, IA 52242, USA
2 Talus Renewables, Inc., Austin, TX 78754, USA
3 Department of Psychological and Brain Sciences, Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, USA
4 Department of Chemical and Biochemical Engineering, Iowa Technology Institute, The University of Iowa, Iowa City, IA 52242, USA

* Corresponding Author: Shaoping Xiao.

(This article belongs to the Special Issue: Advances in Object Detection: Methods and Applications)

Computers, Materials & Continua 2026, 86(2), 1-62. https://doi.org/10.32604/cmc.2025.072146

Abstract

The integration of human factors into artificial intelligence (AI) systems has emerged as a critical research frontier, particularly in reinforcement learning (RL), where human-AI interaction (HAII) presents both opportunities and challenges. As RL continues to demonstrate remarkable success in model-free and partially observable environments, its real-world deployment increasingly requires effective collaboration with human operators and stakeholders. This article systematically examines HAII techniques in RL through both theoretical analysis and practical case studies. We establish a conceptual framework built upon three fundamental pillars of effective human-AI collaboration: computational trust modeling, system usability, and decision understandability. Our comprehensive review organizes HAII methods into five key categories: (1) learning from human feedback, including various shaping approaches; (2) learning from human demonstration through inverse RL and imitation learning; (3) shared autonomy architectures for dynamic control allocation; (4) human-in-the-loop querying strategies for active learning; and (5) explainable RL techniques for interpretable policy generation. Recent state-of-the-art works are critically reviewed, with particular emphasis on advances incorporating large language models in human-AI interaction research. To illustrate these concepts in practice, we present three detailed case studies: an empirical trust model for farmers adopting AI-driven agricultural management systems, the implementation of ethical constraints in robotic motion planning through human-guided RL, and an experimental investigation of human trust dynamics using a multi-armed bandit paradigm. These case studies demonstrate how HAII principles can enhance the practical utility of RL systems while bridging the gap between theoretical RL and real-world human-centered applications, ultimately contributing to more deployable and socially beneficial intelligent systems.
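As a concrete illustration of the first category above (learning from human feedback via reward shaping), the following minimal sketch augments a tabular Q-learning update with a weighted human feedback signal. It is not drawn from the reviewed works or the case studies: the chain environment, the synthetic human_feedback stand-in, and the hyperparameters (ALPHA, GAMMA, EPSILON, BETA) are all illustrative assumptions chosen only to show how a human signal can be folded into the learning target.

# Minimal sketch of reward shaping with human feedback in tabular Q-learning.
# Toy example only; the environment, the synthetic "human" feedback, and all
# parameter values are assumptions made for demonstration.
import random

N_STATES = 6            # chain of states 0..5; reaching state 5 ends the episode
ACTIONS = [0, 1]        # 0 = move left, 1 = move right
ALPHA, GAMMA, EPSILON, BETA = 0.1, 0.95, 0.1, 0.5

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Chain environment: sparse reward only at the rightmost state."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def human_feedback(state, action):
    """Stand-in for a human trainer: approves moving toward the goal."""
    return 1.0 if action == 1 else -1.0

def choose_action(state):
    """Epsilon-greedy action selection over the current Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(200):
    state, done = 0, False
    while not done:
        action = choose_action(state)
        next_state, env_reward, done = step(state, action)
        # Reward shaping: environment reward plus weighted human feedback.
        shaped_reward = env_reward + BETA * human_feedback(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (shaped_reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# Greedy policy per state; with this feedback signal it should favor moving right.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})

In a real interactive setting, human_feedback would be replaced by sparse, possibly delayed, operator input, and BETA would control how strongly that input overrides the environment reward.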

Keywords

Human-AI interaction; reinforcement learning; partially observable environments; trust model; ethical constraints

Cite This Article

APA Style
Xiao, S., Wang, Z., Li, J., Noeller, C., Jiang, J., & Wang, J. (2026). Implementation of Human-AI Interaction in Reinforcement Learning: Literature Review and Case Studies. Computers, Materials & Continua, 86(2), 1–62. https://doi.org/10.32604/cmc.2025.072146
Vancouver Style
Xiao S, Wang Z, Li J, Noeller C, Jiang J, Wang J. Implementation of Human-AI Interaction in Reinforcement Learning: Literature Review and Case Studies. Comput Mater Contin. 2026;86(2):1–62. https://doi.org/10.32604/cmc.2025.072146
IEEE Style
S. Xiao, Z. Wang, J. Li, C. Noeller, J. Jiang, and J. Wang, “Implementation of Human-AI Interaction in Reinforcement Learning: Literature Review and Case Studies,” Comput. Mater. Contin., vol. 86, no. 2, pp. 1–62, 2026. https://doi.org/10.32604/cmc.2025.072146



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.