Abstract:
Personalized Federated Learning (PFL) aims to train machine learning models on decentralized, heterogeneous data while preserving user privacy. This survey examines the core communication challenges in PFL and evaluates optimization strategies that address key issues, including data heterogeneity, high communication costs, model drift, privacy vulnerabilities, and device variability. We provide a comprehensive analysis of key communication optimization techniques (Model Compression, Differential Privacy, Client Selection, Asynchronous Updates, Gradient Compression, and Model Caching), assessing their efficiency and effectiveness under diverse PFL conditions. Our study quantitatively compares these methods, identifies their limitations, and proposes enhanced strategies to improve communication efficiency, reduce latency, and maintain model accuracy. The survey delivers actionable insights for optimizing PFL communication, enhancing both model performance and privacy safeguards. Overall, this work serves as a valuable resource for researchers and practitioners, offering practical guidance on leveraging advanced communication techniques to improve PFL and highlighting promising directions for future research.
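As a concrete illustration of one technique named in the abstract, the sketch below shows gradient compression via top-k sparsification, a standard way to cut per-round upload cost in federated learning: the client transmits only the k largest-magnitude gradient entries and their indices instead of the dense vector. This is a minimal sketch under assumed conventions (NumPy arrays, hypothetical function names `topk_sparsify`/`topk_densify`), not the paper's own implementation.

```python
import numpy as np

def topk_sparsify(grad: np.ndarray, k: int):
    """Keep the k largest-magnitude entries of a flattened gradient.

    The client sends (indices, values) instead of the dense gradient,
    reducing upload size from O(d) to O(k). Illustrative sketch only.
    """
    flat = grad.ravel()
    # Indices of the k entries with largest absolute value.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def topk_densify(idx: np.ndarray, values: np.ndarray, shape) -> np.ndarray:
    """Server-side reconstruction: scatter received values into zeros."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = values
    return flat.reshape(shape)

# Usage: compress a 10,000-parameter gradient to its top 1% of entries.
rng = np.random.default_rng(0)
g = rng.normal(size=(100, 100))
idx, vals = topk_sparsify(g, k=100)
g_hat = topk_densify(idx, vals, g.shape)
print(f"kept {vals.size / g.size:.1%} of entries, "
      f"relative error {np.linalg.norm(g - g_hat) / np.linalg.norm(g):.3f}")
```

In practice this is typically paired with error feedback (accumulating the discarded residual locally for the next round) so that sparsification does not bias convergence.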
Source: INFORMATION FUSION
ISSN: 1566-2535
Year: 2025
Volume: 117
Impact Factor: 18.600 (JCR@2022)