Abstract:
Deep Q-Network (DQN) takes the entire game interface as input, uses a neural network to output Q-values, and maps them to actions. However, different regions of the game interface often contribute unevenly to the Q-value, and sometimes only a few regions are closely related to the agent's actions. Hence, we propose a deep reinforcement learning model based on a multi-experience-pool local-state parallel Q-Network (MEPLSPQ-Network), which takes advantage of multiple parallel Q-networks to predict Q-values collaboratively. In this model, the input of each Q-network is a non-overlapping sub-region of the original game interface, so each Q-network separately learns the characteristics of its own sub-region. Experimental results indicate that the MEPLSPQ-Network outperforms DQN in three different game scenes. © 2019 Association for Computing Machinery.
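
To make the parallel local-state idea concrete, the following is a minimal sketch of splitting a frame into non-overlapping sub-regions and giving each its own Q-network. The paper publishes no implementation, so everything here is an assumption: the PyTorch setup, the 2x2 grid split, the network shapes, and the choice to combine per-region Q-values by summation; the class and parameter names (SubRegionQNet, ParallelLocalQNet, grid) are hypothetical.

    # Illustrative sketch only, not the authors' code. The grid split, layer
    # sizes, and summation of per-region Q-values are all assumptions.
    import torch
    import torch.nn as nn

    class SubRegionQNet(nn.Module):
        """One Q-network that sees a single non-overlapping sub-region."""
        def __init__(self, in_channels: int, n_actions: int):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=3, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(32, n_actions)

        def forward(self, region: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(region))  # (batch, n_actions)

    class ParallelLocalQNet(nn.Module):
        """Splits the frame into a grid of non-overlapping sub-regions; each
        sub-region gets its own Q-network, and the per-region Q-values are
        combined (summed here, as an assumption) into one joint prediction."""
        def __init__(self, in_channels: int, n_actions: int, grid: int = 2):
            super().__init__()
            self.grid = grid
            self.nets = nn.ModuleList(
                SubRegionQNet(in_channels, n_actions) for _ in range(grid * grid)
            )

        def forward(self, frame: torch.Tensor) -> torch.Tensor:
            b, c, h, w = frame.shape
            gh, gw = h // self.grid, w // self.grid
            q = 0
            for i in range(self.grid):
                for j in range(self.grid):
                    region = frame[:, :, i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
                    q = q + self.nets[i * self.grid + j](region)
            return q  # combined Q-values, one per action

    # Usage: a batch of 4 grayscale 84x84 frames, 6 actions.
    net = ParallelLocalQNet(in_channels=1, n_actions=6, grid=2)
    print(net(torch.zeros(4, 1, 84, 84)).shape)  # torch.Size([4, 6])

The multi-experience-pool component of the model (separate replay buffers) is independent of this forward pass and is not shown.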
Year: 2019
Page: 98-102
Language: English