dc.contributor.author: Mills, J
dc.contributor.author: Hu, J
dc.contributor.author: Min, G
dc.contributor.author: Jin, R
dc.contributor.author: Zheng, S
dc.contributor.author: Wang, J
dc.date.accessioned: 2022-10-04T10:38:06Z
dc.date.issued: 2022-10-10
dc.date.updated: 2022-10-03T21:58:15Z
dc.description.abstract: Federated Learning (FL) is a recent development in distributed machine learning that collaboratively trains models without training data leaving client devices, preserving data privacy. In real-world FL, the training set is distributed over clients in a highly non-Independent and Identically Distributed (non-IID) fashion, harming model convergence speed and final performance. To address this challenge, we propose a novel, generalised approach for incorporating adaptive optimisation into FL with the Federated Global Biased Optimiser (FedGBO) algorithm. FedGBO accelerates FL by employing a set of global biased optimiser values during training, reducing ‘client-drift’ from non-IID data whilst benefiting from adaptive optimisation. We show that in FedGBO, updates to the global model can be reformulated as centralised training using biased gradients and optimiser updates, and apply this framework to prove FedGBO’s convergence on nonconvex objectives when using the momentum-SGD (SGDm) optimiser. We also conduct extensive experiments using 4 FL benchmark datasets (CIFAR100, Sent140, FEMNIST, Shakespeare) and 3 popular optimisers (SGDm, RMSProp, Adam) to compare FedGBO against six state-of-the-art FL algorithms. The results demonstrate that FedGBO displays superior or competitive performance across the datasets whilst having low data-upload and computational costs, and provide practical insights into the trade-offs associated with different adaptive-FL algorithms and optimisers. [en_GB]
dc.description.sponsorship: Engineering and Physical Sciences Research Council (EPSRC) [en_GB]
dc.description.sponsorship: Royal Society [en_GB]
dc.description.sponsorship: European Union Horizon 2020 [en_GB]
dc.identifier.citation: Published online 10 October 2022 [en_GB]
dc.identifier.doi: 10.1109/TC.2022.3212631
dc.identifier.grantnumber: IEC/NSFC/211460 [en_GB]
dc.identifier.grantnumber: 101008297 [en_GB]
dc.identifier.uri: http://hdl.handle.net/10871/131081
dc.identifier: ORCID: 0000-0001-5406-8420 (Hu, Jia)
dc.language.iso: en [en_GB]
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_GB]
dc.rights: © 2022 IEEE
dc.subject: Federated Learning [en_GB]
dc.subject: Edge Computing [en_GB]
dc.subject: Communication Efficiency [en_GB]
dc.subject: Optimisation [en_GB]
dc.title: Accelerating Federated Learning with a Global Biased Optimiser [en_GB]
dc.type: Article [en_GB]
dc.date.available: 2022-10-04T10:38:06Z
dc.identifier.issn: 1557-9956
dc.description: This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record. [en_GB]
dc.identifier.journal: IEEE Transactions on Computers [en_GB]
dc.rights.uri: http://www.rioxx.net/licenses/all-rights-reserved [en_GB]
dcterms.dateAccepted: 2022-09-23
dcterms.dateSubmitted: 2022-10-02
rioxxterms.version: AM [en_GB]
rioxxterms.licenseref.startdate: 2022-09-23
rioxxterms.type: Journal Article/Review [en_GB]
refterms.dateFCD: 2022-10-03T21:58:19Z
refterms.versionFCD: AM
refterms.dateFOA: 2022-10-20T12:55:02Z
refterms.panel: B [en_GB]
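
The abstract above centres on one algorithmic idea: the server maintains a single set of global, biased optimiser values (for momentum-SGD, one momentum buffer) that every client uses during local training, reducing client-drift on non-IID data. Below is a minimal runnable sketch of that idea in Python/NumPy, based only on the abstract's description; the function names, the simple averaging of model and momentum deltas, and the toy per-client quadratic objectives are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def client_update(x_global, m_global, grad_fn, steps=5, lr=0.1, beta=0.9):
    """Run local SGDm steps starting from the global model and the shared
    global momentum buffer; return the model and momentum deltas."""
    x, m = x_global.copy(), m_global.copy()
    for _ in range(steps):
        g = grad_fn(x)      # stochastic gradient on this client's data
        m = beta * m + g    # momentum update (local working copy)
        x = x - lr * m      # SGDm step
    return x - x_global, m - m_global

def server_round(x, m, client_grad_fns):
    """Average the clients' model and optimiser deltas and apply them to
    the global model and the global (biased) momentum buffer."""
    dxs, dms = zip(*(client_update(x, m, f) for f in client_grad_fns))
    return x + np.mean(dxs, axis=0), m + np.mean(dms, axis=0)

if __name__ == "__main__":
    # Toy non-IID setting: each client minimises 0.5 * ||x - t||^2 with its
    # own target t, so local optima disagree and purely local training drifts.
    dim, rng = 10, np.random.default_rng(0)
    targets = [rng.normal(size=dim) for _ in range(4)]
    grad_fns = [lambda x, t=t: x - t for t in targets]
    x, m = np.zeros(dim), np.zeros(dim)
    for _ in range(50):
        x, m = server_round(x, m, grad_fns)
    print("distance to mean target:", np.linalg.norm(x - np.mean(targets, axis=0)))

The point the sketch illustrates is that clients keep no private optimiser state between rounds: each round, all clients start from the same global momentum buffer, which is what the abstract calls the global biased optimiser values.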

