Federated Learning (FL) is a popular privacy-preserving machine learning paradigm that enables collaborative training among distributed devices coordinated by a central server, without gathering the devices' local data. Recent studies
indicate that device-to-device (D2D) communication technology has the potential to reduce reliance on the central server and enhance the scalability of FL. However, complex heterogeneities in wireless D2D network environments degrade learning efficiency and hamper the global convergence of D2D-assisted FL. To address this problem, we propose FedAHC, an Asynchronous Hierarchical Clustered FL method
based on Graph Convolutional Networks (GCN). To mitigate the impact of computational and communication heterogeneity on D2D-assisted FL, we cluster the FL devices within the D2D network, formulate the clustering as a graph problem, and design an unsupervised learning strategy powered by GCN to obtain effective cluster assignments that respect D2D link connectivity. Meanwhile, a global optimizer state is introduced into FedAHC to reduce the model drift caused by heterogeneous data across devices. We theoretically prove the convergence of the method by deriving an upper bound on the global loss function. We conduct extensive experiments with various network scenarios and datasets to demonstrate the performance of FedAHC. Compared with key baselines, FedAHC converges to a higher model accuracy while achieving up to 80% improvement in time efficiency and up to 62% reduction in communication costs.
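To make the GCN-based clustering idea concrete, the sketch below shows one way an unsupervised GCN can assign D2D devices to clusters that follow link connectivity. This is a minimal illustration, not the FedAHC implementation: the two-layer GCN, the device features, and the relaxed normalized-cut loss are all assumptions introduced here for exposition.

```python
# Hedged sketch: unsupervised GCN clustering of FL devices over a D2D
# connectivity graph. The architecture, features, and loss are illustrative
# assumptions, not the authors' FedAHC design.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj):
    """Symmetrically normalize A + I, as in standard GCNs."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt


class ClusterGCN(nn.Module):
    """Two-layer GCN mapping device features to soft cluster assignments."""
    def __init__(self, in_dim, hidden_dim, num_clusters):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim, bias=False)
        self.w2 = nn.Linear(hidden_dim, num_clusters, bias=False)

    def forward(self, x, a_norm):
        h = F.relu(a_norm @ self.w1(x))
        return F.softmax(a_norm @ self.w2(h), dim=1)  # N x K assignment matrix


def mincut_loss(s, adj):
    """Relaxed normalized-cut objective plus an orthogonality term that
    discourages collapsing all devices into a single cluster."""
    deg = torch.diag(adj.sum(dim=1))
    cut = torch.trace(s.t() @ adj @ s) / torch.trace(s.t() @ deg @ s)
    ss = s.t() @ s
    k = s.size(1)
    ortho = torch.norm(ss / torch.norm(ss) - torch.eye(k) / k ** 0.5)
    return -cut + ortho


# Toy scenario: 20 devices; features could encode compute speed, bandwidth, etc.
torch.manual_seed(0)
adj = (torch.rand(20, 20) > 0.7).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)  # symmetric D2D links
feats = torch.rand(20, 4)
a_norm = normalize_adj(adj)

model = ClusterGCN(in_dim=4, hidden_dim=16, num_clusters=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = mincut_loss(model(feats, a_norm), adj)
    loss.backward()
    opt.step()

clusters = model(feats, a_norm).argmax(dim=1)  # hard cluster label per device
print(clusters.tolist())
```

In this sketch the hard labels taken from the softmax output would serve as the cluster assignments handed to the hierarchical FL procedure; the loss keeps devices with dense D2D connectivity in the same cluster while the orthogonality term balances cluster sizes.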
Funding
Intelligent and Sustainable Aerial-Terrestrial IoT Networks