
dc.contributor.author: Mills, J
dc.contributor.author: Hu, J
dc.contributor.author: Min, G
dc.date.accessioned: 2021-08-12T09:17:49Z
dc.date.issued: 2021-07-21
dc.description.abstract: Federated Learning (FL) is an emerging approach for collaboratively training Deep Neural Networks (DNNs) on mobile devices, without private user data leaving the devices. Previous works have shown that non-Independent and Identically Distributed (non-IID) user data harms the convergence speed of the FL algorithms. Furthermore, most existing work on FL measures global-model accuracy, but in many cases, such as user content-recommendation, improving individual User model Accuracy (UA) is the real objective. To address these issues, we propose a Multi-Task FL (MTFL) algorithm that introduces non-federated Batch-Normalization (BN) layers into the federated DNN. MTFL benefits UA and convergence speed by allowing users to train models personalised to their own data. MTFL is compatible with popular iterative FL optimisation algorithms such as Federated Averaging (FedAvg), and we show empirically that a distributed form of Adam optimisation (FedAvg-Adam) benefits convergence speed even further when used as the optimisation strategy within MTFL. Experiments using MNIST and CIFAR10 demonstrate that MTFL is able to significantly reduce the number of rounds required to reach a target UA, by up to 5× when using existing FL optimisation strategies, and with a further 3× improvement when using FedAvg-Adam. We compare MTFL to competing personalised FL algorithms, showing that it is able to achieve the best UA for MNIST and CIFAR10 in all considered scenarios. Finally, we evaluate MTFL with FedAvg-Adam on an edge-computing testbed, showing that its convergence and UA benefits outweigh its overhead. [en_GB]
dc.description.sponsorship: Engineering and Physical Sciences Research Council (EPSRC) [en_GB]
dc.description.sponsorship: European Union Horizon 2020 [en_GB]
dc.identifier.citation: Published online 21 July 2021 [en_GB]
dc.identifier.doi: 10.1109/TPDS.2021.3098467
dc.identifier.grantnumber: 101008297 [en_GB]
dc.identifier.uri: http://hdl.handle.net/10871/126748
dc.language.iso: en [en_GB]
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_GB]
dc.rights: © 2021 IEEE [en_GB]
dc.subject: Federated Learning [en_GB]
dc.subject: Multi-Task Learning [en_GB]
dc.subject: Deep Learning [en_GB]
dc.subject: Edge Computing [en_GB]
dc.subject: Adaptive Optimization [en_GB]
dc.title: Multi-Task Federated Learning for Personalised Deep Neural Networks in Edge Computing [en_GB]
dc.type: Article [en_GB]
dc.date.available: 2021-08-12T09:17:49Z
dc.identifier.issn: 1045-9219
dc.description: This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record. [en_GB]
dc.identifier.journal: IEEE Transactions on Parallel and Distributed Systems [en_GB]
dc.rights.uri: http://www.rioxx.net/licenses/all-rights-reserved [en_GB]
exeter.funder: ::Engineering and Physical Sciences Research Council (EPSRC) [en_GB]
rioxxterms.version: AM [en_GB]
rioxxterms.licenseref.startdate: 2021-07-21
rioxxterms.type: Journal Article/Review [en_GB]
refterms.dateFCD: 2021-08-12T09:15:51Z
refterms.versionFCD: AM
refterms.dateFOA: 2021-08-12T09:18:01Z
refterms.panel: B [en_GB]
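
The abstract above describes MTFL's core mechanism: Batch-Normalization (BN) layer parameters stay private to each device while all other layers are federated and averaged. The following is a minimal illustrative sketch of that idea in Python, not the authors' implementation; the plain-dict model representation, equal client weighting, and the convention that BN parameters carry "bn" in their names are all assumptions made for this example.

    import numpy as np

    def is_bn_param(name):
        # Assumption for this sketch: BN parameters have "bn" in their names.
        return "bn" in name

    def aggregate(client_models):
        # FedAvg-style averaging that skips BN parameters, so each client
        # keeps a personalised copy of its BN layers (the MTFL idea).
        # Assumes equal weighting, i.e. equal-sized client datasets.
        shared = [n for n in client_models[0] if not is_bn_param(n)]
        return {n: np.mean([m[n] for m in client_models], axis=0)
                for n in shared}

    def update_client(model, global_params):
        # Overwrite shared layers with the global average; BN parameters
        # are untouched, adapting the model to the client's local data.
        model.update(global_params)
        return model

    # Toy usage: two clients with one federated dense layer and one
    # private BN layer (hypothetical parameter names).
    clients = [
        {"dense.w": np.ones(3) * i, "bn.gamma": np.ones(3) * (10 + i)}
        for i in (1, 2)
    ]
    global_params = aggregate(clients)   # averages only "dense.w"
    clients = [update_client(m, global_params) for m in clients]
    # Each client now shares dense.w but keeps its own bn.gamma.

In a full MTFL round (per the abstract), clients would also train locally between aggregations, optionally with an adaptive optimiser such as the paper's FedAvg-Adam; the sketch shows only the selective-aggregation step.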

