
Open Access

ARTICLE

A Regional Distribution Network Coordinated Optimization Strategy for Electric Vehicle Clusters Based on Parametric Deep Reinforcement Learning

Lei Su1,2,3, Wanli Feng1,2,3, Cao Kan1,2,3, Mingjiang Wei1,2,3, Jihai Wang4, Pan Yu4, Lingxiao Yang5,*
1 State Grid Hubei Electric Power Research Institute, Wuhan, 430000, China
2 Hubei Key Laboratory of Regional New Power Systems and Rural Energy System Configuration, Wuhan, 430000, China
3 Hubei Engineering Research Center of the Construction and Operation Control Technology of New Power Systems, Wuhan, 430000, China
4 School of Electrical Engineering and Automation, Anhui University, Hefei, 230601, China
5 School of Artificial Intelligence, Anhui University, Hefei, 230601, China
* Corresponding Author: Lingxiao Yang. Email: email
(This article belongs to the Special Issue: Grid Integration of Intermittent Renewable Energy Resources: Technologies, Policies, and Operational Strategies)

Energy Engineering https://doi.org/10.32604/ee.2025.071006

Received 29 July 2025; Accepted 17 October 2025; Published online 10 November 2025

Abstract

The large-scale integration of distributed energy resources (DERs), such as photovoltaic (PV) systems, wind turbines (WT), and energy storage (ES) devices, raises distribution network costs and undermines operational stability, while uncoordinated electric vehicle (EV) charging further increases grid load fluctuations and safety risks. To address these issues, this paper proposes a novel dual-scale hierarchical collaborative optimization strategy that decouples system-level economic dispatch from distributed EV agent control, resolving the resource coordination conflicts caused by the high computational complexity and poor scalability of centralized optimization and by the reliance on purely local information in fully decentralized frameworks. At the lower level, an EV charging and discharging model with a hybrid discrete-continuous action space is established and optimized with an improved Parameterized Deep Q-Network (PDQN) algorithm, which directly handles mode selection and power regulation while embedding physical constraints to ensure safe operation. At the upper level, microgrid (MG) operators adopt a dynamic pricing strategy optimized through Deep Reinforcement Learning (DRL) to maximize economic benefits and achieve peak-valley shaving. Simulation results show that the proposed strategy outperforms traditional methods, reducing the total operating cost of the MG by 21.6%, decreasing the peak-to-valley load difference by 33.7%, cutting the number of voltage limit violations by 88.9%, and lowering the average electricity cost for EV users by 15.2%. The method yields a win-win outcome for operators and users, providing a reliable and efficient scheduling solution for distribution networks with high renewable energy penetration.
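To illustrate the hybrid discrete-continuous action space mentioned above, the sketch below shows a minimal PDQN-style action selection for a single EV agent: a parameter network proposes a continuous power setpoint for each discrete mode (idle, charge, discharge), the setpoints are clipped to a physical power limit, and a Q-network picks the best (mode, power) pair. The linear maps, state dimension, and the 7 kW charger rating are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical minimal PDQN-style action selection for one EV agent.
# Discrete modes: 0 = idle, 1 = charge, 2 = discharge.
# Both "networks" below are stand-in random linear maps; in the paper's
# setting they would be trained neural networks.

rng = np.random.default_rng(0)
STATE_DIM, N_MODES, P_MAX = 4, 3, 7.0  # P_MAX [kW] is an assumed charger rating

W_param = rng.normal(size=(N_MODES, STATE_DIM))  # stand-in parameter network
W_q = rng.normal(size=(N_MODES, STATE_DIM + 1))  # stand-in Q-network

def select_action(state):
    """Return (mode, power): argmax_k Q(s, k, x_k) with x_k = clip(param_k(s))."""
    # One continuous parameter per mode, clipped to the physical power limit
    # (the "embedded physical constraint" of the lower-level model).
    powers = np.clip(W_param @ state, -P_MAX, P_MAX)
    powers[0] = 0.0  # idle mode draws no power
    # Q-value of each discrete mode paired with its own continuous parameter.
    q = np.array([W_q[k] @ np.append(state, powers[k]) for k in range(N_MODES)])
    k = int(np.argmax(q))
    return k, float(powers[k])

mode, power = select_action(np.array([0.5, -0.2, 1.0, 0.3]))
assert mode in (0, 1, 2) and abs(power) <= P_MAX
```

The key design point is that the argmax runs over discrete modes only, while each mode carries its own continuous parameter, so mode selection and power regulation are handled jointly in one forward pass.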

Keywords

Power system; regional distributed energy; electric vehicle; deep reinforcement learning; collaborative optimization