Abstract:
As many countries have promulgated laws protecting user data privacy, how to use user data legally has become a pressing topic. Collaborative learning, also known as federated learning, lets multiple participants build a common, robust, and secure machine learning model, addressing critical data-sharing issues such as privacy, security, and access. Unfortunately, existing research shows that collaborative learning is not as secure as claimed, and gradient leakage remains a key problem. To address this problem, a collaborative learning scheme based on chained secure multi-party computation was recently proposed. However, two security issues in that scheme remain unsolved. First, if semi-honest users collude, the gradients of honest users also leak. Second, if one user fails, the correctness of the aggregation result cannot be guaranteed. In this paper, we propose a privacy-preserving and verifiable chained collaborative learning scheme that solves these problems. First, we design a gradient encryption method that prevents gradient leakage. Second, we construct a verification method based on homomorphic hashing, which ensures the correctness of users' aggregation results and can also trace users who aggregate incorrectly. Compared with other solutions, our scheme is more efficient. © 2021 IEEE.
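The verification idea mentioned in the abstract can be illustrated with a minimal homomorphic-hash sketch. This is a hypothetical construction, not the paper's exact scheme: the hash H(v) = ∏ g_i^{v_i} mod p satisfies H(u + v) = H(u) · H(v) mod p, so an aggregated gradient can be checked against per-user hashes without revealing the individual gradients. The modulus, bases, and vectors below are illustrative assumptions.

```python
# Hypothetical homomorphic hash for verifying gradient aggregation
# (an illustrative sketch, not the paper's construction).

P = 2_147_483_647            # public prime modulus (toy size for illustration)
GENERATORS = [3, 5, 7, 11]   # public per-coordinate bases, assumed random

def hom_hash(vec):
    """Multiplicative hash of an integer (quantized) gradient vector."""
    h = 1
    for g, v in zip(GENERATORS, vec):
        h = (h * pow(g, v, P)) % P
    return h

# Two users' quantized gradients and their aggregate.
g1 = [2, 0, 1, 3]
g2 = [1, 4, 2, 0]
agg = [a + b for a, b in zip(g1, g2)]

# Homomorphic property: the hash of the sum equals the product of the
# hashes, so the aggregation result is verifiable from per-user hashes.
assert hom_hash(agg) == (hom_hash(g1) * hom_hash(g2)) % P
```

A verifier holding only `hom_hash(g1)` and `hom_hash(g2)` can thus detect a tampered aggregate, and a mismatch at one link of the chain identifies the user who aggregated incorrectly.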
Year: 2021
Page: 428-433
Language: English
ESI Highly Cited Papers on the List: 0