Abstract:
In the federated learning scenario, private data are kept local and only gradients are shared to train the global model. Because the gradients are computed from the private training data, features of that data are encoded into them. Prior work has demonstrated that private training data can be reconstructed from gradients. However, only small batches of images can be recovered, and the reconstruction quality, especially for large batch sizes, is unsatisfactory. To improve the reconstruction quality for large batches of images, a generative gradient inversion attack based on a regularization term, called fDLG, is designed. First, a regularization term that suppresses drastic variation within image regions is proposed, based on the observation that changes between neighboring image pixels are gradual; it encourages the synthesized dummy image to be piece-wise smooth. Second, generative adversarial networks are trained to improve the quality of the attack, with the global model used as the discriminator. Simulations show that large batches of images (128 images on CIFAR100, 256 images on MNIST) can be faithfully reconstructed at high resolution, and even large images from ImageNet can be reconstructed. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.
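The smoothness regularization described in the abstract can be illustrated with a minimal sketch: a DLG-style gradient-matching loop in PyTorch with a total-variation-like penalty on the dummy images. This is not the authors' fDLG implementation; the model interface, image shape (3×32×32), optimizer, loop length, and the weight tv_weight are illustrative assumptions, and the paper's GAN-based generator is not reproduced here.

```python
# Illustrative sketch only (not the paper's fDLG code): gradient inversion
# with a smoothness (total-variation-style) regularizer on the dummy images.
import torch
import torch.nn.functional as F

def total_variation(x):
    # Penalize differences between neighboring pixels so the dummy images
    # stay piece-wise smooth (no drastic changes within image regions).
    dh = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
    dw = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()
    return dh + dw

def invert_gradients(model, true_grads, batch_size, num_classes,
                     steps=300, lr=0.1, tv_weight=1e-2):
    # Optimize dummy images and soft labels so the gradients they induce
    # on the global model match the gradients observed from a client.
    dummy_x = torch.randn(batch_size, 3, 32, 32, requires_grad=True)
    dummy_y = torch.randn(batch_size, num_classes, requires_grad=True)
    opt = torch.optim.Adam([dummy_x, dummy_y], lr=lr)
    params = tuple(model.parameters())
    for _ in range(steps):
        opt.zero_grad()
        pred = model(dummy_x)
        loss = F.cross_entropy(pred, dummy_y.softmax(dim=-1))
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # Gradient-matching term plus the smoothness regularizer.
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
        objective = match + tv_weight * total_variation(dummy_x)
        objective.backward()
        opt.step()
    return dummy_x.detach(), dummy_y.detach()
```

Penalizing differences between adjacent pixels is one common way to realize a piece-wise-smooth prior; the exact form of the paper's regularization term may differ.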
Source: Neural Computing and Applications
ISSN: 0941-0643
Year: 2024
Issue: 24
Volume: 36
Page: 14661-14672
Impact Factor: 6.000 (JCR@2022)
ESI Highly Cited Papers on the List: 0