Towards Mitigating Device Heterogeneity in Federated Learning via Adaptive Model Quantization

Ahmed M. Abdelmoniem, Marco Canini

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Federated learning (FL) is increasingly becoming the norm for training models over distributed and private datasets. Major service providers rely on FL to improve services such as text auto-completion, virtual keyboards, and item recommendations. Nonetheless, training models with FL in practice requires a significant amount of time (days or even weeks) because FL tasks execute in highly heterogeneous environments where devices have widely varying yet limited computing capabilities and network connectivity conditions. In this paper, we focus on mitigating the extent of device heterogeneity, a major contributor to long training times in FL. We propose AQFL, a simple and practical approach that leverages adaptive model quantization to homogenize the computing resources of the clients. We evaluate AQFL on five common FL benchmarks. The results show that, in heterogeneous settings, AQFL achieves nearly the same model quality and fairness as training in homogeneous settings.
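
To make the core idea concrete, the sketch below illustrates how adaptive per-client quantization might equalize round times: slower clients receive a lower-precision model so that every client finishes a training round within a shared deadline. This is not the authors' implementation; the Client fields, the linear bit-width cost model, and the pick_bitwidth helper are all illustrative assumptions.

    # Hypothetical sketch of adaptive per-client quantization (not the authors' code).
    # Idea: pick a lower quantization bit-width for slower clients so that all
    # clients finish a training round in roughly the same wall-clock time.
    from dataclasses import dataclass

    @dataclass
    class Client:
        name: str
        flops_per_sec: float  # assumed measure of the device's compute capability

    BITWIDTHS = [32, 16, 8, 4]  # candidate precisions, ordered highest to lowest

    def estimated_round_time(client: Client, model_flops: float, bits: int) -> float:
        # Assumption: per-round cost scales roughly linearly with bit-width.
        return model_flops * (bits / 32.0) / client.flops_per_sec

    def pick_bitwidth(client: Client, model_flops: float, deadline_s: float) -> int:
        # Choose the highest precision whose estimated round time meets the deadline.
        for bits in BITWIDTHS:
            if estimated_round_time(client, model_flops, bits) <= deadline_s:
                return bits
        return BITWIDTHS[-1]  # fall back to the cheapest precision

    clients = [Client("phone_a", 2e9), Client("phone_b", 5e8), Client("tablet_c", 1e9)]
    MODEL_FLOPS = 1e10  # assumed per-round compute of the global model

    for c in clients:
        bits = pick_bitwidth(c, MODEL_FLOPS, deadline_s=8.0)
        print(f"{c.name}: train a {bits}-bit quantized model this round")

Under these assumed numbers, the fast device trains at full 32-bit precision while the slowest one drops to 8 bits, so all three meet the 8-second round deadline.
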
Original language: English (US)
Title of host publication: Proceedings of the 1st Workshop on Machine Learning and Systems
Publisher: ACM
ISBN (Print): 9781450382984
State: Published - Apr 26, 2021
