
Open Access

REVIEW

Implementation of Human-AI Interaction in Reinforcement Learning: Literature Review and Case Studies

Shaoping Xiao1,*, Zhaoan Wang1, Junchao Li2, Caden Noeller1, Jiefeng Jiang3, Jun Wang4
1 Department of Mechanical Engineering, Iowa Technology Institute, The University of Iowa, Iowa City, IA 52242, USA
2 Talus Renewables, Inc., Austin, TX 78754, USA
3 Department of Psychological and Brain Sciences, Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, USA
4 Department of Chemical and Biochemical Engineering, Iowa Technology Institute, The University of Iowa, Iowa City, IA 52242, USA
* Corresponding Author: Shaoping Xiao. Email: shaoping-xiao@uiowa.edu
(This article belongs to the Special Issue: Advances in Object Detection: Methods and Applications)

Computers, Materials & Continua https://doi.org/10.32604/cmc.2025.072146

Received 20 August 2025; Accepted 25 October 2025; Published online 21 November 2025

Abstract

The integration of human factors into artificial intelligence (AI) systems has emerged as a critical research frontier, particularly in reinforcement learning (RL), where human-AI interaction (HAII) presents both opportunities and challenges. As RL continues to demonstrate remarkable success in model-free settings and partially observable environments, its real-world deployment increasingly requires effective collaboration with human operators and stakeholders. This article systematically examines HAII techniques in RL through both theoretical analysis and practical case studies. We establish a conceptual framework built upon three fundamental pillars of effective human-AI collaboration: computational trust modeling, system usability, and decision understandability. Our comprehensive review organizes HAII methods into five key categories: (1) learning from human feedback, including various shaping approaches; (2) learning from human demonstration through inverse RL and imitation learning; (3) shared autonomy architectures for dynamic control allocation; (4) human-in-the-loop querying strategies for active learning; and (5) explainable RL techniques for interpretable policy generation. Recent state-of-the-art works are critically reviewed, with particular emphasis on advances incorporating large language models in human-AI interaction research. To illustrate these concepts, we present three detailed case studies: an empirical trust model for farmers adopting AI-driven agricultural management systems, the implementation of ethical constraints in robotic motion planning through human-guided RL, and an experimental investigation of human trust dynamics using a multi-armed bandit paradigm. These applications demonstrate how HAII principles can enhance RL systems' practical utility while bridging the gap between theoretical RL and real-world human-centered applications, ultimately contributing to more deployable and socially beneficial intelligent systems.
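Two of the abstract's themes, learning from human feedback via reward shaping and the multi-armed bandit paradigm, can be combined in a minimal sketch. The snippet below is illustrative only and is not taken from the article: `run_bandit`, `human_pref`, and the shaping weight `beta` are hypothetical names, and the "human" is simulated by a fixed per-arm preference score added to the environment reward.

```python
import random

def run_bandit(true_means, human_pref, episodes=2000, eps=0.1, beta=0.5, seed=0):
    """Epsilon-greedy bandit whose reward is shaped by scalar human feedback.

    Shaped reward = environment reward + beta * human feedback, where
    human_pref[a] stands in for a human's approval of pulling arm a.
    """
    rng = random.Random(seed)
    n = len(true_means)
    q = [0.0] * n       # running estimate of each arm's shaped value
    counts = [0] * n
    for _ in range(episodes):
        if rng.random() < eps:           # explore with probability eps
            a = rng.randrange(n)
        else:                            # otherwise exploit the best estimate
            a = max(range(n), key=lambda i: q[i])
        env_reward = rng.gauss(true_means[a], 0.1)
        shaped = env_reward + beta * human_pref[a]   # human feedback shapes learning
        counts[a] += 1
        q[a] += (shaped - q[a]) / counts[a]          # incremental mean update
    return q

# Arm 1 has the best environment payoff, and the simulated human also endorses it,
# so the shaped value estimates should single it out.
q = run_bandit(true_means=[0.2, 0.8, 0.5], human_pref=[0.0, 1.0, 0.0])
best_arm = max(range(3), key=lambda i: q[i])
```

Here the human signal simply reinforces the environment's own optimum; in the adversarial case (feedback favoring a worse arm), the same scaffold exposes the trust question the article's third case study probes experimentally.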

Keywords

Human-AI interaction; reinforcement learning; partially observable environments; trust model; ethical constraints