TY - JOUR
T1 - Secure energy management of multi-energy microgrid
T2 - A physical-informed safe reinforcement learning approach
AU - Wang, Yi
AU - Qiu, Dawei
AU - Sun, Mingyang
AU - Strbac, Goran
AU - Gao, Zhiwei
N1 - Funding information: This work was supported by two UK EPSRC projects: ‘Integrated Development of Low-Carbon Energy Systems (IDLES): A Whole-System Paradigm for Creating a National Strategy’ (project code: EP/R045518/1) and the UK-China project ‘Technology Transformation to Support Flexible and Resilient Local Energy Systems’ (project code: EP/T021780/1); one Horizon Europe project: ‘Reliability, Resilience and Defense technology for the griD’ (Grant agreement ID: 101075714); and the National Natural Science Foundation of China under Grants 62103371, 52161135201, U20A20159, and 62061130220.
PY - 2023/4/1
Y1 - 2023/4/1
N2 - The large-scale integration of distributed energy resources into the energy industry enables a fast transition to a decarbonized future but raises potential challenges of insecure and unreliable operation. Multi-energy microgrids (MEMGs), as localized small-scale multi-energy systems, can effectively integrate a variety of energy components across multiple energy sectors, and have recently been recognized as a valid solution for improving operational security and reliability. As a result, a large body of research has investigated MEMG energy management problems, covering both model-based optimization and model-free learning approaches. Compared to optimization approaches, reinforcement learning is widely deployed in MEMG energy management owing to its ability to handle highly dynamic and stochastic processes without requiring any knowledge of the system model. However, conventional model-free reinforcement learning methods still struggle to capture the physical constraints of the MEMG model, which may compromise its secure operation. To address this research challenge, this paper proposes a novel safe reinforcement learning method that learns a dynamic security assessment rule to abstract a physical-informed safety layer on top of a conventional model-free reinforcement learning energy management policy, respecting all physical constraints by mathematically solving an action correction formulation. In this setting, secure energy management of the MEMG is guaranteed during both training and testing. Extensive case studies on two integrated systems (i.e., a small 6-bus power and 7-node gas network, and a large 33-bus power and 20-node gas network) verify the superior performance of the proposed physical-informed reinforcement learning method in achieving cost-effective MEMG energy management while respecting all physical constraints, compared to conventional reinforcement learning and optimization approaches.
AB - The large-scale integration of distributed energy resources into the energy industry enables a fast transition to a decarbonized future but raises potential challenges of insecure and unreliable operation. Multi-energy microgrids (MEMGs), as localized small-scale multi-energy systems, can effectively integrate a variety of energy components across multiple energy sectors, and have recently been recognized as a valid solution for improving operational security and reliability. As a result, a large body of research has investigated MEMG energy management problems, covering both model-based optimization and model-free learning approaches. Compared to optimization approaches, reinforcement learning is widely deployed in MEMG energy management owing to its ability to handle highly dynamic and stochastic processes without requiring any knowledge of the system model. However, conventional model-free reinforcement learning methods still struggle to capture the physical constraints of the MEMG model, which may compromise its secure operation. To address this research challenge, this paper proposes a novel safe reinforcement learning method that learns a dynamic security assessment rule to abstract a physical-informed safety layer on top of a conventional model-free reinforcement learning energy management policy, respecting all physical constraints by mathematically solving an action correction formulation. In this setting, secure energy management of the MEMG is guaranteed during both training and testing. Extensive case studies on two integrated systems (i.e., a small 6-bus power and 7-node gas network, and a large 33-bus power and 20-node gas network) verify the superior performance of the proposed physical-informed reinforcement learning method in achieving cost-effective MEMG energy management while respecting all physical constraints, compared to conventional reinforcement learning and optimization approaches.
KW - Dynamic security assessment
KW - Energy management
KW - Multi-energy microgrid
KW - Physical-informed safety layer
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85146969967&partnerID=8YFLogxK
U2 - 10.1016/j.apenergy.2023.120759
DO - 10.1016/j.apenergy.2023.120759
M3 - Article
AN - SCOPUS:85146969967
SN - 0306-2619
VL - 335
JO - Applied Energy
JF - Applied Energy
M1 - 120759
ER -