Special Issues

Reinforcement Learning: Algorithms, Challenges, and Applications

Submission Deadline: 31 December 2025

Guest Editors

Prof. Yu-Hsien Lin

Email: vyhlin@mail.ncku.edu.tw

Affiliation: Department of Systems and Naval Mechatronic Engineering, National Cheng Kung University, Tainan 701, Taiwan


Research Interests: computational fluid dynamics, deep reinforcement learning, unmanned vehicle control


Summary

Reinforcement Learning (RL) has emerged as a pivotal domain within artificial intelligence, driving advancements in decision-making, optimization, and autonomy. This special issue, titled "Reinforcement Learning: Algorithms, Challenges, and Applications," aims to bring together cutting-edge research and practical insights that address the evolving landscape of RL. We invite contributions that explore novel algorithms, tackle persistent challenges such as scalability and safety, and showcase transformative applications across industries including robotics, healthcare, finance, and gaming. This issue seeks to provide a comprehensive overview of RL's current state while fostering discussions on its future directions. Both theoretical advancements and empirical studies are welcomed, with an emphasis on interdisciplinary approaches and real-world implementations.

Topics:
• Novel reinforcement learning algorithms
• Scalable RL techniques for high-dimensional environments
• RL safety, robustness, and ethical considerations
• Applications of RL in robotics, healthcare, and autonomous systems
• Multi-agent reinforcement learning and cooperation dynamics
• Model-based vs. model-free RL methods
• RL benchmarks, datasets, and evaluation metrics


Keywords

Reinforcement Learning, Algorithms, Scalability, Applications, Safety, Multi-Agent Systems, Optimization

Published Papers


  • Open Access

    ARTICLE

    Adaptive Path-Planning for Autonomous Robots: A UCH-Enhanced Q-Learning Approach

    Wei Liu, Ruiyang Wang, Guangwei Liu
    CMC-Computers, Materials & Continua, DOI:10.32604/cmc.2025.070328
    (This article belongs to the Special Issue: Reinforcement Learning: Algorithms, Challenges, and Applications)
    Abstract Q-learning is a classical reinforcement learning method with broad applicability. It can respond effectively to environmental changes and provide flexible strategies, making it suitable for solving robot path-planning problems. However, Q-learning faces challenges in search and update efficiency. To address these issues, we propose an improved Q-learning (IQL) algorithm. We use an enhanced Ant Colony Optimization (ACO) algorithm to optimize Q-table initialization. We also introduce the UCH mechanism to refine the reward function and overcome the exploration dilemma. The IQL algorithm is extensively tested in three grid environments of different scales. The results validate the…
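For readers unfamiliar with the baseline the abstract builds on, the following is a minimal sketch of tabular Q-learning on a toy grid-world path-planning task. It is not the paper's IQL algorithm: the ACO-based Q-table initialization and the UCH reward refinement are not reproduced, and the grid size, rewards, and hyperparameters are illustrative assumptions.

```python
import random

def q_learning(grid_size=4, episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning: start at (0,0), goal at (n-1,n-1).

    Shows only the core update rule; the paper's ACO initialization
    and UCH reward shaping are omitted.
    """
    rng = random.Random(seed)
    n = grid_size
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
    Q = {(r, c): [0.0] * 4 for r in range(n) for c in range(n)}
    goal = (n - 1, n - 1)

    for _ in range(episodes):
        s = (0, 0)
        for _ in range(100):  # step cap per episode
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: Q[s][i])
            dr, dc = actions[a]
            nr, nc = s[0] + dr, s[1] + dc
            s2 = (nr, nc) if 0 <= nr < n and 0 <= nc < n else s  # walls block moves
            r = 1.0 if s2 == goal else -0.01  # small step cost, goal bonus
            # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if s == goal:
                break
    return Q

def greedy_path(Q, grid_size=4, max_steps=50):
    """Follow the learned greedy policy from start toward the goal."""
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    s, goal = (0, 0), (grid_size - 1, grid_size - 1)
    path = [s]
    for _ in range(max_steps):
        if s == goal:
            break
        a = max(range(4), key=lambda i: Q[s][i])
        dr, dc = actions[a]
        nr, nc = s[0] + dr, s[1] + dc
        s = (nr, nc) if 0 <= nr < grid_size and 0 <= nc < grid_size else s
        path.append(s)
    return path
```

The paper's contribution targets exactly the weak points visible here: an all-zero Q-table forces slow early exploration (which ACO-based initialization addresses), and a flat step cost gives little guidance (which the UCH-refined reward addresses).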

  • Open Access

    ARTICLE

    Simultaneous Depth and Heading Control for Autonomous Underwater Vehicle Docking Maneuvers Using Deep Reinforcement Learning within a Digital Twin System

    Yu-Hsien Lin, Po-Cheng Chuang, Joyce Yi-Tzu Huang
    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 4907-4948, 2025, DOI:10.32604/cmc.2025.065995
    (This article belongs to the Special Issue: Reinforcement Learning: Algorithms, Challenges, and Applications)
    Abstract This study proposes an automatic control system for Autonomous Underwater Vehicle (AUV) docking, utilizing a digital twin (DT) environment based on the HoloOcean platform, which integrates six-degree-of-freedom (6-DOF) motion equations and hydrodynamic coefficients to create a realistic simulation. Although conventional model-based and visual servoing approaches often struggle in dynamic underwater environments due to limited adaptability and extensive parameter tuning requirements, deep reinforcement learning (DRL) offers a promising alternative. In the positioning stage, the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm is employed for synchronized depth and heading control, which offers stable training, reduced overestimation…
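The two TD3 properties the abstract cites, stable training and reduced overestimation, come from target-policy smoothing and the twin-critic minimum in the Bellman target. The sketch below shows just those two ingredients in isolation; it is not the authors' controller, and the actor/critic networks, replay buffer, and delayed updates of full TD3 are omitted. All constants are illustrative assumptions.

```python
import random

def smoothed_target_action(mu, noise_std=0.2, noise_clip=0.5,
                           act_low=-1.0, act_high=1.0, rng=None):
    """Target-policy smoothing: add clipped Gaussian noise to the
    target actor's action, then clip to the valid action range."""
    rng = rng or random.Random(0)
    eps = max(-noise_clip, min(noise_clip, rng.gauss(0.0, noise_std)))
    return max(act_low, min(act_high, mu + eps))

def td3_target(reward, gamma, q1_next, q2_next, done):
    """Clipped double-Q target: take the minimum of the two target
    critics to reduce overestimation bias, zeroing the bootstrap
    term at terminal states."""
    return reward + gamma * (1.0 - float(done)) * min(q1_next, q2_next)
```

In a full implementation both critics are regressed toward this single shared target, and the actor is updated less frequently than the critics (the "delayed" part of TD3).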
