Abstract:
As many countries have promulgated laws to protect users' data privacy, how to use users' data legally has become a hot topic. With the emergence of federated learning (FL) (also known as collaborative learning), multiple participants can create a common, robust, and secure machine learning model while addressing key issues in data sharing, such as privacy, security, and accessibility. Unfortunately, existing research shows that FL is not as secure as claimed: gradient leakage and the correctness of aggregation results remain key problems. Recently, some scholars have tried to address these security problems in FL with cryptographic and verification techniques. However, some issues in these schemes remain unsolved. First, some solutions cannot guarantee the correctness of the aggregation results. Second, existing state-of-the-art FL schemes incur high computational and communication overhead. In this article, we propose SVFLC, a secure and verifiable FL scheme with chain aggregation, to solve these problems. We first design a privacy-preserving method that can solve the problem of gradient leakage and defend against collusion attacks by semi-honest users. Then, we design a verification method based on a homomorphic hash function, which ensures the correctness of the weighted aggregation results. Besides, SVFLC can also track users who introduce calculation errors during the aggregation process. Additionally, extensive experimental results on real-world data sets demonstrate that SVFLC is efficient compared with other solutions. © 2014 IEEE.
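The abstract mentions verification of weighted aggregation via a homomorphic hash function. The following is a minimal, illustrative sketch of that general idea, not the paper's actual SVFLC construction: the modulus P, the per-coordinate bases GENS, the dimension DIM, and the quantization of gradients to non-negative integers are all assumptions introduced only for this example.

# Illustrative sketch (assumption: not the SVFLC protocol itself) of verifying a
# weighted aggregate with a linearly homomorphic hash H(x) = prod_i g_i^{x_i} mod P.
import random

P = 2**127 - 1                                        # toy prime modulus (assumption)
DIM = 4                                               # toy gradient dimension (assumption)
GENS = [random.randrange(2, P) for _ in range(DIM)]   # public per-coordinate bases (assumption)

def hh(vec):
    """Homomorphic hash: H(x) = prod_i g_i^{x_i} mod P."""
    h = 1
    for g, x in zip(GENS, vec):
        h = (h * pow(g, x, P)) % P
    return h

def weighted_sum(vectors, weights):
    """Weighted aggregation of quantized (non-negative integer) gradients."""
    return [sum(w * v[i] for v, w in zip(vectors, weights)) for i in range(DIM)]

# Each user publishes the hash of its quantized gradient.
users = [[random.randrange(0, 1000) for _ in range(DIM)] for _ in range(3)]
weights = [3, 1, 2]                                   # e.g., proportional to local data sizes
hashes = [hh(v) for v in users]

# The aggregator returns the claimed weighted aggregate.
aggregate = weighted_sum(users, weights)

# Verification: H(sum_k w_k * x_k) must equal prod_k H(x_k)^{w_k} mod P,
# so anyone holding the published hashes can check the aggregate's correctness
# without seeing the individual gradients.
expected = 1
for h, w in zip(hashes, weights):
    expected = (expected * pow(h, w, P)) % P
assert hh(aggregate) == expected

Under these assumptions, a tampered or mis-weighted aggregate changes the hash of the claimed result and the check fails, which is the kind of correctness guarantee the abstract attributes to the homomorphic-hash-based verification.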
Source: IEEE Internet of Things Journal
ISSN: 2327-4662
Year: 2024
Issue: 8
Volume: 11
Page: 13125-13136
Impact Factor: 10.600 (JCR@2022)
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 2
ESI Highly Cited Papers on the List: 0