Huffman Coding Based Encoding Techniques for Fast Distributed Deep Learning

Rishikesh R. Gajjala, Shashwat Banchhor, Ahmed M. Abdelmoniem, Aritra Dutta, Marco Canini, Panos Kalnis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Distributed stochastic algorithms equipped with gradient compression techniques, such as codebook quantization, are increasingly popular and considered state-of-the-art for training large deep neural network (DNN) models. However, communicating the quantized gradients over a network requires efficient encoding techniques. For this, practitioners generally use Elias-based encoding techniques without considering their computational overhead or data volume. In this paper, we propose several Huffman-coding-based lossless encoding techniques that exploit different characteristics of the quantized gradients during distributed DNN training. We then show their effectiveness on five DNN models across three datasets and compare them with classic state-of-the-art Elias-based encoding techniques. Our results show that the proposed Huffman-based encoders (i.e., RLH, SH, and SHS) reduce the encoded data volume by up to 5.1×, 4.32×, and 3.8×, respectively, compared to the Elias-based encoders.
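To illustrate the general idea behind Huffman-coding the output of codebook quantization, the following is a minimal sketch, not the paper's RLH/SH/SHS encoders: a gradient vector is mapped to nearest-codebook indices, and the (typically skewed) index stream is then Huffman-encoded. The helper names (`codebook_quantize`, `huffman_code`) and the example codebook are illustrative assumptions, not from the paper.

```python
# Minimal sketch: codebook quantization of a gradient, then Huffman coding of
# the resulting symbol stream. Illustrative only; not the paper's encoders.
import heapq
from collections import Counter

import numpy as np


def codebook_quantize(grad, codebook):
    """Map each gradient entry to the index of its nearest codebook value."""
    return np.argmin(np.abs(grad[:, None] - codebook[None, :]), axis=1)


def huffman_code(symbols):
    """Build a Huffman code {symbol: bitstring} from symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: only one distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: [frequency, tie-breaker, {symbol: partial codeword}]
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, [f1 + f2, counter, merged])
        counter += 1
    return heap[0][2]


# Example: gradients concentrated around zero quantize to a skewed index
# distribution, which Huffman coding compresses well versus a fixed-width code.
rng = np.random.default_rng(0)
grad = rng.laplace(scale=0.01, size=10_000).astype(np.float32)
codebook = np.array([-0.05, -0.01, 0.0, 0.01, 0.05], dtype=np.float32)
indices = codebook_quantize(grad, codebook).tolist()
code = huffman_code(indices)
encoded_bits = sum(len(code[s]) for s in indices)
print(f"Huffman: {encoded_bits} bits vs fixed-width: {len(indices) * 3} bits")
```

In a distributed training setting, each worker would transmit the Huffman codebook (or a canonical description of it) together with the encoded bitstream; the decoder rebuilds the indices and looks up the codebook values.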
Original language: English (US)
Title of host publication: Proceedings of the 1st Workshop on Distributed Machine Learning
Publisher: ACM
ISBN (Print): 9781450381826
State: Published - Dec 2020
