
Abstract

Occupant behavior, defined as the presence and energy-related actions of occupants, is today recognized as a key driver of building energy use. Closing the gap between what building energy systems provide and what occupants actually need requires a deeper understanding and consideration of the human factor in building operation. However, occupant behavior is a highly stochastic and complex phenomenon, driven by a wide variety of factors and unique to each building. It therefore cannot be addressed with the analytical approaches traditionally used to describe the physics-based aspects of buildings. In conventional control systems, referred to in this study as Expert-based controls, domain experts distill their knowledge into a set of rules and heuristics (rule-based controls) or optimization models (model predictive controls) and program it into the controller. Because such controllers rely on hard-coded expert knowledge, they cannot go beyond it, and they cannot deal with situations the experts did not foresee. Given that occupant behavior varies unexpectedly over time and is unique to each building, which has prevented experts from modeling it globally, Expert-based controls have low potential for integrating occupant behavior into building controls. An alternative approach is to program a human-like learning mechanism and develop a controller capable of continuously learning and adapting its control policy by itself through interacting with the environment and learning from experience, referred to in this study as Learning-based controls. Reinforcement Learning, a Machine Learning algorithm inspired by neuroscience, can be used to develop such a learning-based controller.
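The learning-from-interaction mechanism described above can be illustrated with a minimal tabular Q-learning sketch. This is not the dissertation's method, only a toy example under assumed conditions: a controller observes a binary occupancy state, chooses whether to heat, and learns a policy purely from trial-and-error reward feedback (comfort reward, energy penalty). All names, the reward shape, and the hyperparameters are illustrative assumptions.

```python
# Toy tabular Q-learning: learn when to heat a room from occupancy feedback.
# Illustrative sketch only; states, rewards, and parameters are assumptions.
import random

random.seed(0)

N_STATES = 2   # 0 = room vacant, 1 = room occupied
N_ACTIONS = 2  # 0 = heating off, 1 = heating on
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def reward(state, action):
    # Comfort reward for heating an occupied room; energy penalty for
    # heating a vacant one; mild discomfort penalty for a cold occupied room.
    if action == 1:
        return 1.0 if state == 1 else -1.0
    return -0.5 if state == 1 else 0.0

for _ in range(5000):
    s = random.randint(0, 1)                  # observed occupancy
    if random.random() < EPSILON:             # explore occasionally
        a = random.randint(0, 1)
    else:                                     # otherwise exploit current estimate
        a = max(range(N_ACTIONS), key=lambda x: q[s][x])
    r = reward(s, a)
    s_next = random.randint(0, 1)             # occupancy evolves stochastically
    # Q-learning update: nudge the estimate toward reward + discounted future value.
    q[s][a] += ALPHA * (r + GAMMA * max(q[s_next]) - q[s][a])

# Greedy policy learned from experience alone: heat only when occupied.
policy = [max(range(N_ACTIONS), key=lambda x: q[s][x]) for s in range(N_STATES)]
```

No model of the building physics is supplied; the policy emerges from reward feedback alone, which is what makes the approach attractive when occupant behavior is too stochastic and building-specific to model analytically.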
Given this learning ability, such controllers can learn an optimal control policy from scratch, without prior knowledge or a detailed system model, and can continuously adapt to stochastic variations in the environment to ensure optimal operation. These aspects make Reinforcement Learning a promising approach for integrating occupant behavior into building controls. The main question this study addresses is: how can a controller be developed that perceives and adapts to occupant behavior to minimize energy use without compromising user needs? In this context, the methodological framework of this dissertation contributes new knowledge by developing three occupant-centric control frameworks:

- DeepHot: focused on hot water production in residential buildings;
- DeepSolar: focused on solar-assisted space heating and hot water production in residential buildings;
- DeepValve: focused on space heating in offices.

In developing these frameworks, special attention is paid to:

1. Transferability: to be easily transferred to many buildings;
2. Data efficiency: to quickly learn optimal control when implemented in a new building;
3. Safety: to impose minimal risk of violating occupant comfort or health;
4. Minimal use of sensors and actuators: to reduce the initial cost and risk of failure and facilitate field implementations.

DeepHot and DeepSolar are evaluated using real-world weather data and hot water use behavior measured in Swiss residential houses. DeepValve is first evaluated using real-world occupancy data collected in other studies and then implemented experimentally in an environmental chamber. Comparison of these frameworks with common practi
