Kimad: Adaptive Gradient Compression with Bandwidth Awareness

Jihao Xin, Ivan Ilin, Shunkang Zhang, Marco Canini, Peter Richtárik

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

In distributed training, communication often emerges as a bottleneck. In response, we introduce Kimad, a solution that offers adaptive gradient compression. By consistently monitoring bandwidth, Kimad refines compression ratios to match specific neural network layer requirements. Our exhaustive tests and proofs confirm Kimad's outstanding performance, establishing it as a benchmark in adaptive compression for distributed deep learning.
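
The abstract only sketches the mechanism. As a rough illustration of what bandwidth-aware adaptive compression can look like in practice, the fragment below picks a per-step top-k sparsification ratio from measured bandwidth and compresses a gradient accordingly. This is a minimal sketch under assumed names and parameters (choose_ratio, topk_compress, target_comm_time_s), not Kimad's actual algorithm or API; the paper's method and analysis are what the publication documents.

    # Minimal sketch (not Kimad's implementation): derive a compression ratio
    # from the current bandwidth budget, then apply top-k sparsification.
    # All function names and parameters here are illustrative assumptions.
    import torch

    def choose_ratio(bandwidth_bytes_per_s, target_comm_time_s, grad_bytes):
        # Fraction of gradient entries that fits in the communication budget.
        budget = bandwidth_bytes_per_s * target_comm_time_s
        return min(1.0, max(budget / grad_bytes, 1e-3))

    def topk_compress(grad, ratio):
        # Keep only the k largest-magnitude entries of the flattened gradient.
        k = max(1, int(grad.numel() * ratio))
        flat = grad.flatten()
        _, idx = torch.topk(flat.abs(), k)
        return idx, flat[idx]  # indices + values are what would be transmitted

    # Example: a 10 MB fp32 gradient, 100 MB/s measured bandwidth, 20 ms budget.
    grad = torch.randn(2_500_000)
    ratio = choose_ratio(100e6, 0.02, grad.numel() * 4)
    idx, vals = topk_compress(grad, ratio)

In a layer-wise variant of this idea, the ratio would be chosen per layer rather than once per step, which is the direction the abstract points to.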

Original language: English (US)
Title of host publication: DistributedML 2023 - Proceedings of the 4th International Workshop on Distributed Machine Learning
Publisher: Association for Computing Machinery, Inc
Pages: 35-48
Number of pages: 14
ISBN (Electronic): 9798400704475
DOIs
State: Published - Dec 8 2023
Event: 4th International Workshop on Distributed Machine Learning, DistributedML 2023 - Paris, France
Duration: Dec 8 2023 → …

Publication series

Name: DistributedML 2023 - Proceedings of the 4th International Workshop on Distributed Machine Learning

Conference

Conference: 4th International Workshop on Distributed Machine Learning, DistributedML 2023
Country/Territory: France
City: Paris
Period: 12/8/23 → …

Keywords

  • distributed training
  • gradient compression

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Computer Science Applications
  • Hardware and Architecture
