Journal of Modern Power Systems and Clean Energy

ISSN 2196-5625 CN 32-1884/TK

Real-time Operation Optimization in Active Distribution Networks Based on Multi-agent Deep Reinforcement Learning
Author:
Affiliation:

College of Electrical Engineering, Sichuan University, Chengdu, China

Fund Project:

This work was supported by the National Natural Science Foundation of China (No. 52077146) and Sichuan Science and Technology Program (No. 2023NSFSC1945).

    Abstract:

    The increasing integration of intermittent renewable energy sources (RESs) poses significant challenges to active distribution networks (ADNs), such as frequent voltage fluctuations. This paper proposes a novel real-time operation optimization strategy for ADNs based on multi-agent deep reinforcement learning (MADRL), which harnesses the regulating capability of switch state transitions for real-time voltage regulation and loss minimization. After deploying the calculated optimal switch topologies, the distribution network operator dynamically adjusts the distributed energy resources (DERs) to enhance the operation performance of ADNs, based on the policies trained by the MADRL algorithm. Owing to the model-free characteristic and generalization capability of deep reinforcement learning, the proposed strategy can still achieve its optimization objectives even when applied to similar but unseen environments. Additionally, integrating parameter sharing (PS) and prioritized experience replay (PER) mechanisms substantially improves strategic performance and scalability. The framework is tested on modified IEEE 33-bus, IEEE 118-bus, and three-phase unbalanced 123-bus systems. The results demonstrate the significant real-time regulation capability of the proposed strategy.
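    The PER mechanism mentioned above is not detailed in this abstract. As a hedged illustration of the standard proportional-prioritization scheme it presumably builds on, the sketch below samples transitions with probability proportional to priority^alpha and corrects the induced bias with importance-sampling weights; the class name, the alpha/beta parameters, and the list-based storage (rather than a sum-tree) are illustrative assumptions, not taken from this paper.

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (PER) sketch.

    Transitions are sampled with probability p_i^alpha / sum_j p_j^alpha,
    and importance-sampling weights w_i = (N * P(i))^(-beta) correct the
    resulting bias. A plain list scan replaces the usual sum-tree for brevity.
    """

    def __init__(self, capacity, alpha=0.6, beta=0.4):
        self.capacity = capacity
        self.alpha = alpha
        self.beta = beta
        self.buffer = []       # stored transitions
        self.priorities = []   # one priority per stored transition
        self.pos = 0           # ring-buffer write index

    def add(self, transition):
        # New transitions receive the current maximum priority so they are
        # sampled at least once before their TD error is known.
        max_p = max(self.priorities, default=1.0)
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(max_p)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        scaled = [p ** self.alpha for p in self.priorities]
        total = sum(scaled)
        probs = [s / total for s in scaled]
        idxs = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        n = len(self.buffer)
        weights = [(n * probs[i]) ** (-self.beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]  # normalize for stable updates
        batch = [self.buffer[i] for i in idxs]
        return batch, idxs, weights

    def update_priorities(self, idxs, td_errors, eps=1e-6):
        # Priority is |TD error| + eps so no transition's probability hits zero.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = abs(err) + eps
```

    In a PS setting, the homogeneous agents would additionally share one policy network and could draw from a common buffer of this kind, which is what makes the combination scale with the number of DER agents.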

History
  • Received: April 06, 2023
  • Revised: June 30, 2023
  • Online: May 20, 2024