Heterogeneous computation and communication resources across mobile devices drastically degrade the performance of Federated Learning (FL), and clustered FL is recognized as an effective solution to this issue. Traditional clustered FL methods rely on a cluster head for intra-cluster model aggregation; however, such a cluster head that can directly communicate with all other devices may not exist in practical Device-to-Device (D2D) networks. Moreover, most existing methods assume static network conditions and thus cannot adapt to the dynamic topologies and resources of D2D networks. To address these challenges, we propose a Transferable Graph Neural Network (GNN)-based Clustered FL method, which formulates FL clustering in dynamic D2D networks as a graph problem and develops a transferable GNN model trained in an unsupervised manner to adaptively solve this problem. Furthermore, to alleviate the impact of data heterogeneity and accelerate FL, we design a D2D connectivity-aware dynamic programming algorithm driven by Mutual Information for selecting participating devices within each cluster. We also provide a convergence bound on the global loss through theoretical analysis. Finally, we conduct extensive experiments with various network and data settings, and the results demonstrate that our method improves FL time efficiency by 24%-78% and reduces communication cost by 30%-88% compared to key baselines.