Abstract:
Federated Learning (FL) continues to make significant advances, enabling model sharing while preserving privacy. However, existing methods are of limited utility in Internet of Things (IoT) scenarios: they either depend heavily on high-quality labeled data or perform well only under idealized conditions that rarely hold in practice. A natural question is therefore how to leverage unlabeled data across multiple clients to optimize the shared model. To address this shortcoming, we propose Federated Contrastive Learning (FedCL), an efficient federated learning method for unsupervised image classification. FedCL proceeds in three steps: distributed federated pretraining of local models with contrastive learning, supervised fine-tuning on a server with a small amount of labeled data, and distillation with unlabeled examples on each client to refine and transfer personalized knowledge. Extensive experiments show that our method outperforms all baseline methods by large margins, reaching 69.32% top-1 accuracy on CIFAR-10, 85.75% on SVHN, and 74.64% on Mini-ImageNet using only 1% of the labels.
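The three steps in the abstract can be sketched as loss functions and an aggregation rule. This is a minimal NumPy illustration, not the paper's implementation: the NT-Xent contrastive loss stands in for the (unspecified) contrastive objective of step 1, simple weight averaging for the server-side aggregation, and a KL-based distillation loss for step 3; all function names and parameters here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nt_xent_loss(z1, z2, tau=0.5):
    """Step 1 (assumed form): contrastive NT-Xent loss between two
    augmented views z1, z2 of the same local unlabeled batch."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # Positive pair for row i is its other view at index i+n (mod 2n).
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logp = np.log(softmax(sim, axis=1))
    return float(-np.mean(logp[np.arange(2 * n), targets]))

def fed_avg(client_weights):
    """Server aggregation of locally pretrained weights (plain averaging)."""
    return np.mean(np.stack(client_weights), axis=0)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Step 3 (assumed form): KL divergence between temperature-softened
    teacher and student predictions on unlabeled client data."""
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    return float(np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=1)))
```

In this sketch each client would minimize `nt_xent_loss` locally, the server would combine rounds with `fed_avg` and fine-tune on its few labels, and each client would then minimize `distill_loss` against the fine-tuned server model as teacher.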
Source:
COLLABORATIVE COMPUTING: NETWORKING, APPLICATIONS AND WORKSHARING, COLLABORATECOM 2022, PT I
ISSN: 1867-8211
Year: 2022
Volume: 460
Page: 115-134
Cited Count:
WoS CC Cited Count: 3
SCOPUS Cited Count: 3
ESI Highly Cited Papers on the List: 0