TY - GEN
T1 - DeepReduce: A Sparse-tensor Communication Framework for Federated Deep Learning
AU - Xu, Hang
AU - Kostopoulou, Kelly
AU - Dutta, Aritra
AU - Li, Xin
AU - Ntoulas, Alexandros
AU - Kalnis, Panos
N1 - Acknowledgements: Kelly Kostopoulou was supported by the KAUST Visiting Student Research Program. The computing infrastructure was provided by the KAUST Supercomputing Lab (KSL).
PY - 2021/01/01
Y1 - 2021/01/01
AB - Sparse tensors appear frequently in federated deep learning, either as a direct artifact of the deep neural network's gradients or as a result of an explicit sparsification process. Existing communication primitives are agnostic to the challenges of deep learning; consequently, they impose unnecessary communication overhead. This paper introduces DeepReduce, a versatile framework for the compressed communication of sparse tensors, tailored to federated deep learning. DeepReduce decomposes sparse tensors into two sets, values and indices, and allows both independent and combined compression of these sets. We support a variety of standard compressors, such as Deflate for values and Run-Length Encoding for indices. We also propose two novel compression schemes that achieve superior results: a curve-fitting-based scheme for values and a Bloom-filter-based scheme for indices. DeepReduce is orthogonal to existing gradient sparsifiers and can be applied in conjunction with them, transparently to the end user, to significantly lower the communication overhead. As a proof of concept, we implement our approach on TensorFlow and PyTorch. Our experiments with real models demonstrate that DeepReduce transmits 3.2× less data than existing sparsifiers, without affecting accuracy.
UR - http://hdl.handle.net/10754/667496
UR - https://proceedings.neurips.cc/paper/2021/file/b0ab42fcb7133122b38521d13da7120b-Paper.pdf
UR - http://www.scopus.com/inward/record.url?scp=85132359973&partnerID=8YFLogxK
M3 - Conference contribution
SN - 9781713845393
SP - 21150
EP - 21163
BT - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
PB - Neural Information Processing Systems Foundation
ER -