Open Access
ARTICLE
Improved PPO-Based Task Offloading Strategies for Smart Grids
1 College of Electrical Engineering, North China University of Water Resources and Electric Power, Zhengzhou, 450045, China
2 School of Electrical Engineering, Xuchang University, Xuchang, 461000, China
* Corresponding Author: Ya Zhou. Email:
Computers, Materials & Continua 2025, 84(2), 3835-3856. https://doi.org/10.32604/cmc.2025.065465
Received 13 March 2025; Accepted 26 May 2025; Issue published 03 July 2025
Abstract
Edge computing has transformed smart grids by lowering latency, reducing network congestion, and enabling real-time decision-making. Nevertheless, devising an optimal task-offloading strategy remains challenging, as it must jointly minimise energy consumption and response time under fluctuating workloads and volatile network conditions. We cast the offloading problem as a Markov Decision Process (MDP) and solve it with Deep Reinforcement Learning (DRL). Specifically, we present a three-tier architecture—end devices, edge nodes, and a cloud server—and enhance Proximal Policy Optimization (PPO) to learn adaptive, energy-aware policies. A Convolutional Neural Network (CNN) extracts high-level features from system states, enabling the agent to respond continually to changing conditions. Extensive simulations show that the proposed method significantly reduces task latency and energy consumption compared with several baseline algorithms, thereby improving overall system performance. These results demonstrate the effectiveness and robustness of the framework for real-time task offloading in dynamic smart-grid environments.
Keywords
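The abstract's objective—jointly minimising energy consumption and response time when choosing where to execute a task—can be illustrated with a minimal cost-model sketch. All numeric parameters below (CPU frequencies, link rates, power draws) are assumed placeholder values for illustration, not figures from the paper; the weighted cost would serve as the negative reward in the MDP formulation.

```python
# Illustrative sketch of a task-offloading cost model (assumed parameters,
# not the paper's actual system model). The agent's reward is the negative
# of this weighted latency/energy cost.

def offload_cost(task_bits, cpu_cycles, target, w_time=0.5, w_energy=0.5):
    # Hypothetical three-tier parameters: end device, edge node, cloud server.
    params = {
        "local": {"cpu_hz": 1e9,  "tx_bps": None, "power_w": 2.0},
        "edge":  {"cpu_hz": 5e9,  "tx_bps": 10e6, "power_w": 0.5},
        "cloud": {"cpu_hz": 20e9, "tx_bps": 5e6,  "power_w": 0.5},
    }
    p = params[target]
    compute_s = cpu_cycles / p["cpu_hz"]                     # execution time
    tx_s = task_bits / p["tx_bps"] if p["tx_bps"] else 0.0   # upload time (0 for local)
    latency = tx_s + compute_s                               # response time
    # Energy spent by the end device: local CPU power, or radio power while uploading.
    energy = p["power_w"] * (tx_s if p["tx_bps"] else compute_s)
    return w_time * latency + w_energy * energy

# Example: a 1-Mbit task needing 1e9 CPU cycles is cheaper to offload to the edge.
print(offload_cost(1e6, 1e9, "local"))  # 0.5*1.0 s + 0.5*2.0 J = 1.5
print(offload_cost(1e6, 1e9, "edge"))   # 0.5*0.3 s + 0.5*0.05 J = 0.175
```

In the full method, a PPO agent with a CNN state encoder would learn this trade-off adaptively rather than evaluating a fixed closed-form cost.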
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.