TY - JOUR
T1 - Coordinated Electric Vehicle Active and Reactive Power Control for Active Distribution Networks
AU - Wang, Yi
AU - Qiu, Dawei
AU - Strbac, Goran
AU - Gao, Zhiwei
N1 - Funding information: This work was jointly supported by the EPSRC project “Technology Transformation to Support Flexible and Resilient Local Energy Systems” (EP/T021780/1) and the ESRC project “Socio-Techno-Economic Pathways for Sustainable Urban Energy Development” (ES/T000112/1, via the JPI Urban Europe/NSFC Competition). Paper no. TII-21-5904.
PY - 2023/2/1
Y1 - 2023/2/1
N2 - The deployment of renewable energy in power systems may cause serious voltage instability. Electric vehicles (EVs), owing to their mobility and flexibility, can provide various ancillary services, including active and reactive power. However, the distributed control of EVs in such settings is a complex decision-making problem with significant dynamics and uncertainties. Most existing literature employs model-based approaches to formulate active and reactive power control problems, which require full network models and are time-consuming. This article proposes a multiagent reinforcement learning algorithm that combines the deep deterministic policy gradient (DDPG) method with a parameter sharing framework to solve the EVs' coordinated active and reactive power control problem for both demand-side response and voltage regulation. The proposed algorithm further enhances learning stability and scalability while preserving privacy through location marginal prices (LMPs). Simulation results on a modified IEEE 15-bus network validate its effectiveness in providing system charging and voltage regulation services. The proposed LMP-PSDDPG algorithm achieves 38%, 16%, and 25% speedup, and 1.58, 0.69, and 0.27 times higher reward than the DDPG, TD3, and LMP-DDPG benchmarks, respectively.
KW - Active and reactive power control
KW - active distribution networks (ADNs)
KW - electric vehicles (EVs)
KW - location marginal prices (LMPs)
KW - multiagent reinforcement learning (MARL)
UR - http://www.scopus.com/inward/record.url?scp=85129417226&partnerID=8YFLogxK
U2 - 10.1109/TII.2022.3169975
DO - 10.1109/TII.2022.3169975
M3 - Article
SN - 1551-3203
VL - 19
SP - 1611
EP - 1622
JO - IEEE Transactions on Industrial Informatics
JF - IEEE Transactions on Industrial Informatics
IS - 2
M1 - 09762523
ER -