Open Access

ARTICLE


Extending DDPG with Physics-Informed Constraints for Energy-Efficient Robotic Control

Abubakar Elsafi1,*, Arafat Abdulgader Mohammed Elhag2, Lubna A. Gabralla3, Ali Ahmed4, Ashraf Osman Ibrahim5

1 Department of Software Engineering, College of Computer Science and Engineering, University of Jeddah, Jeddah, 21959, Saudi Arabia
2 Department of Computer and Information System, Bisha Applied College, University of Bisha, Bisha, 61911, Saudi Arabia
3 Department of Computer Science, Applied College, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
4 Faculty of Computing and Information Technology, King Abdulaziz University-Rabigh, Rabigh, 21589, Saudi Arabia
5 Department of Computing, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Malaysia

* Corresponding Author: Abubakar Elsafi.

(This article belongs to the Special Issue: Advanced Artificial Intelligence and Machine Learning Methods Applied to Energy Systems)

Computer Modeling in Engineering & Sciences 2025, 145(1), 621-647. https://doi.org/10.32604/cmes.2025.072726

Abstract

Energy efficiency is a critical factor when deploying deep reinforcement learning (DRL) policies on robotic control systems. Standard algorithms such as the Deep Deterministic Policy Gradient (DDPG) optimize task rewards, but often at the cost of excessively high energy consumption, making them impractical for real-world robotic systems. To address this limitation, we propose Physics-Informed DDPG (PI-DDPG), which integrates physics-based energy penalties to learn energy-efficient yet high-performing control policies. The proposed method introduces adaptive physics-informed constraints through a dynamic weighting factor, enabling policies that balance reward maximization with energy savings. Our motivation is to overcome the impracticality of reward-only optimization by designing controllers that achieve competitive performance while substantially reducing energy consumption. PI-DDPG was evaluated in nine MuJoCo continuous control environments, where it demonstrated significant improvements in energy efficiency without compromising stability or performance. Experimental results confirm that PI-DDPG substantially reduces energy consumption compared to standard DDPG while maintaining competitive task performance. For instance, energy costs decreased from 5542.98 to 3119.02 in HalfCheetah-v4 and from 1909.13 to 1586.75 in Ant-v4, with stable performance in Hopper-v4 (205.95 vs. 130.82) and InvertedPendulum-v4 (322.97 vs. 311.29). Although DDPG sometimes yields higher rewards, such as in HalfCheetah-v4 (5695.37 vs. 4894.59), it does so at significantly greater energy expenditure. These results highlight PI-DDPG as a promising energy-conscious alternative for robotic control.
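To make the core idea concrete, the sketch below shows one way the energy-penalized reward shaping described in the abstract could be wired into a MuJoCo training loop as a Gymnasium wrapper. This is a minimal illustration, not the paper's exact formulation: the penalty form (absolute torque times joint velocity as a proxy for mechanical power), the initial weight, and the decay schedule are all assumptions introduced here for clarity.

```python
# Minimal sketch of energy-penalized reward shaping for a MuJoCo task.
# Assumed details (not from the paper): the |torque * joint velocity|
# energy proxy, the initial weight of 0.1, and the multiplicative decay.
import gymnasium as gym
import numpy as np


class EnergyPenaltyWrapper(gym.Wrapper):
    def __init__(self, env, initial_weight=0.1, decay=0.999):
        super().__init__(env)
        self.weight = initial_weight  # dynamic weighting factor
        self.decay = decay            # assumed annealing schedule

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        # Approximate mechanical power as |torque * joint velocity|,
        # taking the actuated-joint velocities from the MuJoCo state.
        qvel = self.env.unwrapped.data.qvel
        action = np.asarray(action)
        energy = np.sum(np.abs(action * qvel[-len(action):]))
        shaped = reward - self.weight * energy  # physics-based penalty
        self.weight *= self.decay               # gradually relax the constraint
        info["energy_cost"] = float(energy)
        return obs, shaped, terminated, truncated, info


# Usage: wrap a MuJoCo task before training a DDPG agent on it.
env = EnergyPenaltyWrapper(gym.make("HalfCheetah-v4"))
```

Under this reading, the wrapper leaves the base task untouched and exposes the per-step energy cost through `info`, so reward and energy can be tracked separately during training.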

Keywords

Physics-informed DDPG; energy-efficient RL; robotic control; continuous control tasks; MuJoCo environments; reward-energy trade-off

Cite This Article

APA Style
Elsafi, A., Mohammed Elhag, A. A., Gabralla, L. A., Ahmed, A., & Ibrahim, A. O. (2025). Extending DDPG with Physics-Informed Constraints for Energy-Efficient Robotic Control. Computer Modeling in Engineering & Sciences, 145(1), 621–647. https://doi.org/10.32604/cmes.2025.072726
Vancouver Style
Elsafi A, Mohammed Elhag AA, Gabralla LA, Ahmed A, Ibrahim AO. Extending DDPG with Physics-Informed Constraints for Energy-Efficient Robotic Control. Comput Model Eng Sci. 2025;145(1):621–647. https://doi.org/10.32604/cmes.2025.072726
IEEE Style
A. Elsafi, A. A. Mohammed Elhag, L. A. Gabralla, A. Ahmed, and A. O. Ibrahim, “Extending DDPG with Physics-Informed Constraints for Energy-Efficient Robotic Control,” Comput. Model. Eng. Sci., vol. 145, no. 1, pp. 621–647, 2025. https://doi.org/10.32604/cmes.2025.072726



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.