
Open Access

ARTICLE

Intelligent Scheduling of Virtual Power Plants Based on Deep Reinforcement Learning

Shaowei He, Wenchao Cui*, Gang Li, Hairun Xu, Xiang Chen, Yu Tai
School of Control and Computer Engineering, North China Electric Power University, Beijing, 102206, China
* Corresponding Author: Wenchao Cui
(This article belongs to the Special Issue: Artificial Intelligence Algorithms and Applications)

Computers, Materials & Continua https://doi.org/10.32604/cmc.2025.063979

Received 31 January 2025; Accepted 26 March 2025; Published online 22 April 2025

Abstract

The Virtual Power Plant (VPP), as an innovative power management architecture, achieves flexible dispatch and resource optimization of power systems by integrating distributed energy resources. However, owing to significant differences in the operational costs and flexibility of different generation resources, the volatility and uncertainty of renewable energy sources (such as wind and solar power), and the complex variability of load demand, scheduling optimization has become a critical issue for virtual power plants. To address this problem, this paper proposes an intelligent scheduling method for virtual power plants based on Deep Reinforcement Learning (DRL), which uses a Deep Q-Network (DQN) for real-time optimal scheduling of the dynamic peaking units (DPUs) and stable baseload units (SBUs) within the virtual power plant. By modeling the scheduling problem as a Markov Decision Process (MDP) and designing an optimization objective function that integrates both performance and cost, the method significantly improves the scheduling efficiency and economic performance of the virtual power plant. Simulation results show that, compared with traditional scheduling methods and other deep reinforcement learning algorithms, the proposed method offers significant advantages in key performance indicators: response time is shortened by up to 34%, task success rate is increased by up to 46%, and costs are reduced by approximately 26%. The experimental results verify the efficiency and scalability of the method under complex load conditions and renewable energy volatility, providing strong technical support for the intelligent scheduling of virtual power plants.
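
As a rough illustration of the DQN-based scheduling idea summarized above, the sketch below sets up a toy scheduling MDP and trains a small Q-network with experience replay and a periodically updated target network. The state and action definitions, reward weights, and environment dynamics here are simplified assumptions made for this example only; the paper's actual state space, action space, and objective function are defined in the full text, so this should be read as a minimal sketch rather than the authors' implementation.

```python
# Minimal, illustrative DQN sketch for a VPP-style scheduling MDP.
# All model details below (state/action layout, reward weights, dynamics) are assumptions.
import random
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 3      # assumed state: [load demand, renewable output, baseload setpoint]
N_ACTIONS = 5      # assumed discrete dispatch levels for the dynamic peaking unit
GAMMA = 0.95

class QNet(nn.Module):
    """Small fully connected Q-network mapping a state to one Q-value per action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )
    def forward(self, x):
        return self.net(x)

def step(state, action):
    """Toy transition: reward trades off unmet demand (performance) against dispatch cost."""
    demand, renewable, baseload = state
    dpu_output = action / (N_ACTIONS - 1)           # peaking output in [0, 1]
    imbalance = abs(demand - (renewable + baseload + dpu_output))
    cost = 0.6 * dpu_output                         # assumed peaking-unit cost weight
    reward = -(imbalance + cost)                    # assumed performance-plus-cost objective
    next_state = np.array([np.clip(np.random.normal(0.7, 0.1), 0, 1),
                           np.clip(np.random.normal(0.3, 0.1), 0, 1),
                           baseload], dtype=np.float32)
    return next_state, reward

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay, EPS = [], 0.1

state = np.array([0.7, 0.3, 0.4], dtype=np.float32)
for t in range(2000):
    # Epsilon-greedy action selection over the discrete dispatch levels.
    if random.random() < EPS:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(torch.tensor(state)).argmax())
    next_state, reward = step(state, action)
    replay.append((state, action, reward, next_state))
    state = next_state

    if len(replay) >= 64:
        batch = random.sample(replay, 64)
        s, a, r, s2 = map(np.array, zip(*batch))
        s, s2 = torch.tensor(s), torch.tensor(s2)
        a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
        r = torch.tensor(r, dtype=torch.float32)
        q = q_net(s).gather(1, a).squeeze(1)
        with torch.no_grad():
            target = r + GAMMA * target_net(s2).max(1).values
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    if t % 200 == 0:
        target_net.load_state_dict(q_net.state_dict())
```

In this toy setting, the agent learns to raise peaking-unit output only when the assumed demand cannot be covered by renewable and baseload generation, which is the qualitative behavior the DQN scheduler is meant to exhibit.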

Keywords

Deep reinforcement learning; Deep Q-Network; virtual power plant; intelligent scheduling; Markov decision process