
Author:

Wang, D. | Zheng, W. | Wang, Z. | Wang, Y. | Pang, X. | Wang, W.

Indexed by:

EI; Scopus; SCIE

Abstract:

Advanced controls could enhance buildings’ energy efficiency and operational flexibility while guaranteeing indoor comfort. The control performance of reinforcement learning (RL) and model predictive control (MPC) has been widely studied in the literature. However, in existing studies, RL and MPC are tested in separate environments, making it challenging to directly compare their performance. In this paper, RL and MPC controllers are implemented and compared with traditional rule-based controls in an open-source virtual environment to control the heat pump system of a residential house. The RL controllers were developed with three widely used algorithms: Deep Deterministic Policy Gradient (DDPG), Dueling Deep Q Networks (DDQN), and Soft Actor Critic (SAC), and the MPC controller was developed using a reduced-order thermal resistance-capacitance network model. The Building Optimization Testing (BOPTEST) framework is employed as a standardized virtual building simulator to conduct this study. The BOPTEST Hydronic Heat Pump test case is selected for the assessment and benchmarking of the control performance, data efficiency, implementation effort, and computational demands of the RL and MPC controllers. The comparison results revealed that, among the RL controllers, only the DDPG algorithm outperforms the baseline controller in both the typical and peak heating scenarios. The MPC controller is superior to the RL and baseline controllers in both scenarios because it can take the best possible action based on the current system state, even with a model that deviates from reality to a certain degree. The findings of this study shed light on the selection between two promising advanced building control candidates: MPC and RL. © 2023 Elsevier Ltd
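
A minimal, illustrative control loop against the BOPTEST Hydronic Heat Pump test case is sketched below. It is not the paper's RL or MPC implementation: the local service URL, the simple proportional heating rule that stands in for the learned/optimized policies, and the signal names oveHeaPumY_u / reaTZon_y are assumptions that should be checked against the deployment's /inputs and /measurements endpoints.

# Minimal sketch of a control loop against a locally running BOPTEST instance
# (e.g. the bestest_hydronic_heat_pump test case served via Docker); URL and
# signal names below are assumptions, not taken from the paper.
import requests

URL = "http://127.0.0.1:5000"   # assumed local BOPTEST endpoint
STEP = 900                      # control time step in seconds (15 min)

def payload(response):
    """Unwrap the 'payload' field used by newer BOPTEST API versions."""
    body = response.json()
    return body.get("payload", body) if isinstance(body, dict) else body

# Set the control step and initialize the test case (two-week warm-up assumed).
requests.put(f"{URL}/step", json={"step": STEP})
requests.put(f"{URL}/initialize", json={"start_time": 0, "warmup_period": 14 * 24 * 3600})

y = None
for _ in range(96):  # simulate one day at 15-minute steps
    if y is None:
        u = 0.5  # neutral action before the first measurement arrives
    else:
        t_zone = y["reaTZon_y"] - 273.15               # zone temperature, degC (illustrative name)
        u = min(max((21.0 - t_zone) / 2.0, 0.0), 1.0)  # proportional rule on a 21 degC setpoint
    # Overwrite the heat pump modulation signal and advance the simulation one step.
    y = payload(requests.post(f"{URL}/advance",
                              json={"oveHeaPumY_u": u, "oveHeaPumY_activate": 1}))

# Retrieve the core KPIs (thermal discomfort, energy use, cost, emissions).
print(payload(requests.get(f"{URL}/kpi")))

In an actual benchmark, the proportional rule above would be replaced by the DDPG/DDQN/SAC policy or by the MPC optimization solved at each step, while the /advance and /kpi calls stay the same.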

Keyword:

Reinforcement learning; Building controls; Model predictive control; BOPTEST

Author Community:

  • [ 1 ] [Wang D.]Department of Civil and Environmental Engineering, The Hong Kong University of Science and Technology, Hong Kong
  • [ 2 ] [Wang D.]Faculty of Information Technology, Beijing University of Technology, Beijing, China
  • [ 3 ] [Zheng W.]Department of Civil and Environmental Engineering, The Hong Kong University of Science and Technology, Hong Kong
  • [ 4 ] [Zheng W.]HKUST Shenzhen-Hong Kong Collaborative Innovation Research Institute, Shenzhen, Futian, China
  • [ 5 ] [Wang Z.]Department of Civil and Environmental Engineering, The Hong Kong University of Science and Technology, Hong Kong
  • [ 6 ] [Wang Z.]HKUST Shenzhen-Hong Kong Collaborative Innovation Research Institute, Shenzhen, Futian, China
  • [ 7 ] [Wang Y.]School of Environmental Science and Engineering, Tianjin University, Tianjin, 300350, China
  • [ 8 ] [Pang X.]National Engineering Research Center for Building Technology, Beijing, China
  • [ 9 ] [Pang X.]China Academy of Building Research, Beijing, China
  • [ 10 ] [Wang W.]Beijing Key Laboratory of Green Built Environment and Energy Efficient Technology, Beijing University of Technology, Beijing, China

Reprint Author's Address:

Email:


Related Keywords:

Source :

Applied Thermal Engineering

ISSN: 1359-4311

Year: 2023

Volume: 228

Impact Factor: 6.400 (JCR@2022)

ESI Discipline: ENGINEERING;

ESI HC Threshold: 19

Cited Count:

WoS CC Cited Count: 0

SCOPUS Cited Count: 57

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:

30 Days PV: 19

Affiliated Colleges:
