
Open Access

ARTICLE

A Multi-Objective Adaptive Car-Following Framework for Autonomous Connected Vehicles with Deep Reinforcement Learning

Abu Tayab1,*, Yanwen Li1, Ahmad Syed2, Ghanshyam G. Tejani3,4,*, Doaa Sami Khafaga5, El-Sayed M. El-kenawy6, Amel Ali Alhussan7, Marwa M. Eid8,9
1 Department of Mechanical Engineering, Yanshan University, Qinhuangdao, 066004, China
2 Department of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
3 Department of Research Analytics, Saveetha Dental College and Hospitals, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, 600077, India
4 Applied Science Research Center, Applied Science Private University, Amman, 11937, Jordan
5 Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
6 Department of Programming, School of Information and Communications Technology (ICT), Bahrain Polytechnic, Isa Town, P.O. Box 33349, Bahrain
7 Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
8 Faculty of Artificial Intelligence, Delta University for Science and Technology, Mansoura, 11152, Egypt
9 Jadara Research Center, Jadara University, Irbid, 21110, Jordan
* Corresponding Author: Abu Tayab. Email: email; Ghanshyam G. Tejani. Email: email
(This article belongs to the Special Issue: Advances in Vehicular Ad-Hoc Networks (VANETs) for Intelligent Transportation Systems)

Computers, Materials & Continua https://doi.org/10.32604/cmc.2025.070583

Received 19 July 2025; Accepted 29 September 2025; Published online 03 November 2025

Abstract

Autonomous connected vehicles (ACVs) require advanced control strategies that effectively balance safety, efficiency, energy consumption, and passenger comfort. This study introduces a deep reinforcement learning (DRL)-based car-following (CF) framework built on the Deep Deterministic Policy Gradient (DDPG) algorithm, which integrates a multi-objective reward function that balances these four goals while ensuring stable, safe policy learning. Trained on real-world driving data from the highD dataset, the proposed model learns adaptive speed-control policies for dynamic traffic scenarios. The DRL-based model is evaluated against a traditional model predictive control-adaptive cruise control (MPC-ACC) controller. Results show that the DRL model significantly enhances safety, achieving zero collisions and a higher average time-to-collision (TTC) of 8.45 s, compared with 5.67 s for MPC and 6.12 s for human drivers. For efficiency, the model achieves 89.2% headway compliance and keeps speed-tracking errors below 1.2 m/s in 90% of cases. In terms of energy optimization, the proposed approach reduces fuel consumption by 5.4% relative to MPC. It also improves passenger comfort, lowering jerk by 65% (0.12 m/s³ vs. 0.34 m/s³ for human drivers). These findings underscore the potential of DRL in advancing autonomous vehicle control, offering a robust and sustainable solution for safer, more efficient, and more comfortable transportation systems.
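To make the abstract's multi-objective reward concrete, the sketch below combines one term per objective — a TTC-based safety term, a headway-tracking efficiency term, an acceleration-penalty energy proxy, and a jerk-penalty comfort term — into a single scalar, as is typical for DDPG car-following controllers. The paper does not publish its exact reward coefficients or thresholds, so every weight, threshold, and function name here (`multi_objective_reward`, `W_SAFETY`, `TTC_SAFE`, `DESIRED_HEADWAY`) is an illustrative assumption, not the authors' formulation.

```python
# Hypothetical weights and thresholds -- illustrative only; the paper's
# actual reward coefficients are not reproduced here.
W_SAFETY, W_EFF, W_ENERGY, W_COMFORT = 1.0, 0.5, 0.3, 0.2
TTC_SAFE = 8.0         # s; TTC at or above this earns the full safety reward
DESIRED_HEADWAY = 1.5  # s; target time headway


def multi_objective_reward(gap, rel_speed, ego_speed, accel, jerk):
    """Combine safety, efficiency, energy, and comfort terms into one scalar.

    gap:       bumper-to-bumper distance to the leader (m)
    rel_speed: ego speed minus leader speed (m/s); positive = closing in
    ego_speed: ego vehicle speed (m/s)
    accel:     commanded longitudinal acceleration (m/s^2)
    jerk:      rate of change of acceleration (m/s^3)
    """
    # Safety: reward high time-to-collision (only finite when closing in).
    ttc = gap / rel_speed if rel_speed > 1e-3 else float("inf")
    r_safety = min(ttc, TTC_SAFE) / TTC_SAFE          # in [0, 1]

    # Efficiency: penalize deviation from the desired time headway.
    headway = gap / max(ego_speed, 0.1)
    r_eff = -abs(headway - DESIRED_HEADWAY)

    # Energy: penalize large accelerations as a proxy for fuel consumption.
    r_energy = -accel ** 2

    # Comfort: penalize jerk.
    r_comfort = -jerk ** 2

    return (W_SAFETY * r_safety + W_EFF * r_eff
            + W_ENERGY * r_energy + W_COMFORT * r_comfort)
```

Under these assumed weights, a smooth, well-spaced state (no closing speed, headway on target, zero acceleration and jerk) scores the maximum of 1.0, while tailgating with hard braking scores sharply lower — the gradient the DDPG actor would follow during training.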

Keywords

Car-following model; DDPG; multi-objective framework; autonomous connected vehicles