Introducing reinforcement learning to the energy system design process
Design optimization of distributed energy systems has attracted a wider group of researchers due to the capability of these systems to integrate non-dispatchable renewable energy technologies such as solar PV and wind. White-box models, using linear and mixed-integer linear programming techniques, are often used in their design. However, the increasing complexity of energy flows (especially due to cyber-physical interactions) and uncertainties challenge the application of white-box models. This is where data-driven methodologies become effective: they demonstrate higher flexibility in adapting to different environments, which enables their use for energy planning at regional and national scales. This study introduces a data-driven approach based on reinforcement learning to design distributed energy systems. Two neural network architectures are used in this work: a fully connected neural network and a convolutional neural network (CNN). The novel approach is benchmarked against a grey-box model based on fuzzy logic. The grey-box model performed better when optimizing simplified energy systems, but it failed to handle complex energy flows within the energy system. Reinforcement learning based on the fully connected architecture outperformed the grey-box model, improving the objective function values by 60%. Reinforcement learning based on the CNN improved the objective function values further (by up to 20% compared to the fully connected architecture). The results reveal that data-driven models are capable of conducting design optimization of complex energy systems, opening a new pathway for designing distributed energy systems.
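To make the idea of reinforcement-learning-based design optimization concrete, the following is a heavily simplified, single-step (bandit) sketch: a Gaussian policy over two design variables (PV capacity and battery capacity) is improved with a REINFORCE-style gradient against a toy dispatch-and-cost objective. The demand and solar profiles, unit costs, penalty weight, and learning rate are all illustrative assumptions for this sketch, not values or methods from the study (which uses fully connected and convolutional networks on a richer problem).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hourly demand (kW) and per-kW solar output profiles (assumed, not from the paper)
hours = 24
t = np.linspace(0.0, 2.0 * np.pi, hours)
demand = 50.0 + 20.0 * np.sin(t)                      # peaks mid-morning in this toy profile
solar = np.clip(np.sin(t - np.pi / 2.0), 0.0, None)   # daylight bell between hours 6 and 18

def reward(pv_kw, batt_kwh):
    """Negative cost: assumed capital cost plus a penalty on unmet demand."""
    gen = pv_kw * solar
    soc = 0.0      # battery state of charge (kWh)
    unmet = 0.0    # energy demand not served (kWh)
    for g, d in zip(gen, demand):
        surplus = g - d
        if surplus >= 0.0:
            soc = min(batt_kwh, soc + surplus)        # charge, capped at capacity
        else:
            discharge = min(soc, -surplus)            # discharge to cover the deficit
            soc -= discharge
            unmet += -surplus - discharge
    capex = 800.0 * pv_kw + 300.0 * batt_kwh          # assumed unit costs ($/kW, $/kWh)
    return -(capex + 1e4 * unmet) / 1e5               # scaled negative cost

# REINFORCE on a Gaussian policy over the two design variables
mean = np.array([100.0, 100.0])   # initial design guess: [PV kW, battery kWh]
std = np.array([20.0, 20.0])      # fixed exploration noise
lr = 25.0
for _ in range(300):
    acts = rng.normal(mean, std, size=(32, 2)).clip(min=0.0)
    rs = np.array([reward(*a) for a in acts])
    adv = (rs - rs.mean()) / (rs.std() + 1e-8)        # normalized advantage (baseline)
    grad = ((acts - mean) / std**2 * adv[:, None]).mean(axis=0)
    mean = mean + lr * grad                           # gradient ascent on expected reward

design = np.maximum(mean, 0.0)
```

The study's full approach replaces this hand-rolled Gaussian policy with fully connected and convolutional networks and a far more detailed system model; the sketch only illustrates the reward-driven search that distinguishes RL-based design from white-box mathematical programming.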
Year: 2020; Volume: 262; Article number: 114580