Fast Server Learning Rate Tuning for Coded Federated Dropout

Giacomo Verardo, Daniel Barreira, Marco Chiesa, Dejan Kostic, Gerald Q. Maguire

Research output: Chapter in Book/Report/Conference proceeding · Conference contribution


Abstract

In Federated Learning (FL), clients with low computational power train a common machine learning model by exchanging parameter updates instead of transmitting potentially private data. Federated Dropout (FD) is a technique that improves the communication efficiency of an FL session by selecting a subset of model parameters to be updated in each training round. However, compared to standard FL, FD yields considerably lower accuracy and takes longer to converge. In this chapter, we leverage coding theory to enhance FD by allowing a different sub-model to be used at each client. We also show that by carefully tuning the server learning rate hyper-parameter, we can achieve higher training speed while reaching up to the same final accuracy as the no-dropout case. Evaluations on the EMNIST dataset show that our mechanism achieves 99.6% of the final accuracy of the no-dropout case while requiring 2.43× less bandwidth to reach this level of accuracy.
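To make the mechanism described in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of one Federated Dropout-style round in which each client receives its own random sub-model mask and the server applies the aggregated update scaled by a tunable server learning rate. All names (client_update, keep_prob, eta_s) and the specific masking/aggregation choices are illustrative assumptions.

```python
# Illustrative sketch only: per-client sub-model masks plus a server
# learning rate applied at aggregation time. Not the chapter's actual code.
import numpy as np

rng = np.random.default_rng(0)

def client_update(sub_weights):
    """Placeholder for local training on a client's sub-model."""
    return sub_weights + 0.01 * rng.standard_normal(sub_weights.shape)

def federated_dropout_round(global_w, num_clients=4, keep_prob=0.5, eta_s=1.0):
    agg = np.zeros_like(global_w)
    counts = np.zeros_like(global_w)
    for _ in range(num_clients):
        # Each client trains a (possibly different) random sub-model.
        mask = rng.random(global_w.shape) < keep_prob
        sub_w = np.where(mask, global_w, 0.0)
        updated = client_update(sub_w)
        # Accumulate the client's delta only on the parameters it trained.
        agg += np.where(mask, updated - sub_w, 0.0)
        counts += mask
    # Average deltas per parameter, then apply the server learning rate eta_s.
    delta = np.divide(agg, counts, out=np.zeros_like(agg), where=counts > 0)
    return global_w + eta_s * delta

w = rng.standard_normal(10)
w_next = federated_dropout_round(w, eta_s=1.5)
print(w_next)
```

In this sketch, eta_s > 1 accelerates convergence when aggregated deltas are conservative, which is the kind of server-side tuning knob the chapter studies; the masking and averaging details here are assumptions for illustration.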
Original language: English (US)
Title of host publication: Trustworthy Federated Learning
Publisher: Springer International Publishing
Pages: 84-99
Number of pages: 16
ISBN (Print): 9783031289958
DOIs
State: Published - Mar 29 2023
Externally published: Yes
