DeepReduce: A Sparse-tensor Communication Framework for Federated Deep Learning

Hang Xu, Kelly Kostopoulou, Aritra Dutta, Xin Li, Alexandros Ntoulas, Panos Kalnis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

23 Scopus citations

Abstract

Sparse tensors appear frequently in federated deep learning, either as a direct artifact of the deep neural network's gradients or as a result of an explicit sparsification process. Existing communication primitives are agnostic to the challenges of deep learning; consequently, they impose unnecessary communication overhead. This paper introduces DeepReduce, a versatile framework for the compressed communication of sparse tensors, tailored to federated deep learning. DeepReduce decomposes sparse tensors into two sets, values and indices, and allows both independent and combined compression of these sets. We support a variety of standard compressors, such as Deflate for values and Run-Length Encoding for indices. We also propose two novel compression schemes that achieve superior results: a curve fitting-based scheme for values and a Bloom filter-based scheme for indices. DeepReduce is orthogonal to existing gradient sparsifiers and can be applied in conjunction with them, transparently to the end-user, to significantly lower the communication overhead. As a proof of concept, we implement our approach in TensorFlow and PyTorch. Our experiments with real models demonstrate that DeepReduce transmits 3.2× less data than existing sparsifiers, without affecting accuracy.
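The core mechanism described in the abstract, splitting a sparse gradient into a value set and an index set that are compressed independently, can be illustrated with a short sketch. This is not the authors' DeepReduce implementation: the helper names, the occupancy-bitmap run-length encoding of indices, and the use of zlib as a Deflate compressor for values are illustrative assumptions.

```python
# Minimal sketch of value/index decomposition with independent compression.
# NOT the DeepReduce API; names and encodings here are illustrative assumptions.
import zlib          # zlib implements Deflate, used here for the value set
import numpy as np

def decompose(tensor: np.ndarray):
    """Split a flat sparse tensor into its non-zero values and their indices."""
    flat = tensor.ravel()
    indices = np.flatnonzero(flat)
    return flat[indices], indices

def compress_values(values: np.ndarray) -> bytes:
    """Compress the value set with a standard compressor (Deflate)."""
    return zlib.compress(values.astype(np.float32).tobytes())

def compress_indices(indices: np.ndarray, size: int) -> list:
    """Run-length encode the 0/1 occupancy bitmap of the index set."""
    bitmap = np.zeros(size, dtype=np.uint8)
    bitmap[indices] = 1
    runs, start = [], 0
    for i in range(1, size + 1):
        if i == size or bitmap[i] != bitmap[start]:
            runs.append((int(bitmap[start]), i - start))
            start = i
    return runs

# Example: a gradient sparsified to 1% density.
grad = np.zeros(10_000, dtype=np.float32)
grad[np.random.choice(10_000, 100, replace=False)] = np.random.randn(100)
vals, idx = decompose(grad)
value_payload = compress_values(vals)             # transmitted value set
index_payload = compress_indices(idx, grad.size)  # transmitted index set
```

Because the two sets are separated, each can be paired with whatever compressor suits its statistics, which is how the framework accommodates both the standard compressors and the curve fitting-based and Bloom filter-based schemes mentioned above.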
Original language: English (US)
Title of host publication: 35th Conference on Neural Information Processing Systems, NeurIPS 2021
Publisher: Neural Information Processing Systems Foundation
Pages: 21150-21163
Number of pages: 14
ISBN (Print): 9781713845393
State: Published - Jan 1 2021
