Abstract:
As an important part of machine learning, deep learning has been intensively used in various fields relevant to data science. Despite its popularity in practice, it is still challenging to compute the optimal parameters of a deep neural network, a problem that has been shown to be NP-hard. We devote the present paper to an analysis of deep neural networks via nonatomic congestion games, and expect that this can inspire the computation of optimal parameters of deep neural networks. We consider a deep neural network with linear activation functions of the form x + b for biases b that need not be zero. We show under mild conditions that learning the weights and the biases is equivalent to computing the social optimum flow of a nonatomic congestion game. When the deep neural network is used for classification, the learning is even equivalent to computing the equilibrium flow. These results generalize a recent seminal work by [18], who showed similar results for deep neural networks with linear activation functions and zero biases. © 2021, Springer Nature Switzerland AG.
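To illustrate the network class described in the abstract, a minimal sketch of its notation follows; the symbols W_i, b_i, and the depth L are illustrative assumptions and are not taken from the paper. With the activation after the i-th linear layer acting as x -> x + b_i, a depth-L network computes the affine map

\[
  f(x) \;=\; W_L\bigl(\cdots W_2\bigl(W_1 x + b_1\bigr) + b_2 \cdots\bigr) + b_L ,
\]

so learning the weights W_i and biases b_i amounts to optimizing over affine maps, which the paper relates to social optimum and equilibrium flows of a nonatomic congestion game.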
ISSN: 0302-9743
Year: 2021
Volume: 13153 LNCS
Page: 369-379
Language: English
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0