TY - GEN
T1 - Towards Mitigating Device Heterogeneity in Federated Learning via Adaptive Model Quantization
AU - Abdelmoniem, Ahmed M.
AU - Canini, Marco
N1 - KAUST Repository Item: Exported on 2021-05-04
PY - 2021/4/26
Y1 - 2021/4/26
N2 - Federated learning (FL) is increasingly becoming the norm for training models over distributed and private datasets. Major service providers rely on FL to improve services such as text auto-completion, virtual keyboards, and item recommendations. Nonetheless, training models with FL in practice requires a significant amount of time (days or even weeks) because FL tasks execute in highly heterogeneous environments where devices have widely varying yet limited computing capabilities and network connectivity conditions.
In this paper, we focus on mitigating device heterogeneity, a major contributor to long training times in FL. We propose AQFL, a simple and practical approach that leverages adaptive model quantization to homogenize the computing resources of the clients. We evaluate AQFL on five common FL benchmarks. The results show that, in heterogeneous settings, AQFL achieves nearly the same model quality and fairness as training in homogeneous settings.
AB - Federated learning (FL) is increasingly becoming the norm for training models over distributed and private datasets. Major service providers rely on FL to improve services such as text auto-completion, virtual keyboards, and item recommendations. Nonetheless, training models with FL in practice requires a significant amount of time (days or even weeks) because FL tasks execute in highly heterogeneous environments where devices have widely varying yet limited computing capabilities and network connectivity conditions.
In this paper, we focus on mitigating device heterogeneity, a major contributor to long training times in FL. We propose AQFL, a simple and practical approach that leverages adaptive model quantization to homogenize the computing resources of the clients. We evaluate AQFL on five common FL benchmarks. The results show that, in heterogeneous settings, AQFL achieves nearly the same model quality and fairness as training in homogeneous settings.
UR - http://hdl.handle.net/10754/669058
UR - https://dl.acm.org/doi/10.1145/3437984.3458839
U2 - 10.1145/3437984.3458839
DO - 10.1145/3437984.3458839
M3 - Conference contribution
SN - 9781450382984
BT - Proceedings of the 1st Workshop on Machine Learning and Systems
PB - ACM
ER -