Listing 1 - 2 of 2

Dissertation
Economic control of Multi-Energy Systems using deep Reinforcement Learning
Authors: --- --- ---
Year: 2020 Publisher: Leuven: KU Leuven, Faculteit Ingenieurswetenschappen


Abstract

The future power system architecture is characterized by a higher share of intermittent energy sources, a diversity of distributed generation, and active consumers. Together with developments in ICT, a new generation of energy systems is emerging in the form of smart grids, virtual power plants, and Multi-Energy Systems (MES). Realizing their inherent benefits, such as enabling the penetration of CO2-free producers, providing load flexibility, and increasing reliability, requires intelligent operation. In particular, MES coordinate multiple energy vectors in an integrated manner, linked through polygeneration plants and other energy conversion technologies. Optimal control of their operation can substantially improve their technical, economic, and environmental performance. Several control methods have been applied to this task in recent years, ranging from simple rule-based strategies to more complex Model Predictive Controllers (MPCs). Driven by recent developments in machine learning, data-driven methodologies have proved to outperform classical control methods in speeding up the solution of optimization problems and in adapting to changes in the system's dynamics. In this research, the economic control performance of reinforcement learning (RL) algorithms for MES is assessed. A white-box model developed in Modelica is used to configure a city energy network comprising a cogeneration power plant, renewable sources, and a storage system; this model then serves as an environment in which various control strategies can be tested. The chosen algorithms include proximal policy optimization, representing the state of the art in RL, a mixed-integer linear program embedded into an MPC, and a rule-based controller. Results show that the implemented RL agents achieve above 95% optimality, where the minimum cost is taken to be the solution found by running the mixed-integer linear program over the study period. The trained agents learn the system dynamics from experience, achieving up to 4% higher overall efficiency in the cogeneration unit compared to the MPC. Furthermore, these methods show significant room for improvement, as their training hyperparameters were not fully optimized. Practical implementation difficulties remain, however, including the enforcement of system constraints within the agent and uncertain robustness.
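
The abstract reports controller performance as "optimality" relative to the cost found by the mixed-integer linear program over the study period. The exact definition is not given in this record, so the sketch below assumes a simple cost-ratio metric; the function name, controller labels, and cost figures are illustrative, not taken from the thesis.

    def optimality(controller_cost: float, milp_cost: float) -> float:
        """Optimality of a controller, assumed here to be the ratio of the
        MILP benchmark cost (taken as the minimum) to the controller's cost."""
        return milp_cost / controller_cost

    # Illustrative costs over a study period (currency units are arbitrary).
    milp_cost = 10_000.0   # benchmark minimum cost from the MILP
    ppo_cost = 10_400.0    # hypothetical cost of the trained PPO agent
    rule_cost = 12_500.0   # hypothetical cost of the rule-based controller

    print(f"PPO optimality:        {optimality(ppo_cost, milp_cost):.1%}")   # ~96.2%
    print(f"Rule-based optimality: {optimality(rule_cost, milp_cost):.1%}")  # ~80.0%

Under this assumed definition, an agent whose operating cost is within about 5% of the MILP benchmark scores above 95% optimality, which matches the level reported in the abstract.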



Book
The Human Cost of Food


