Scaling Distributed Machine Learning with In-Network Aggregation

Amedeo Sapio, Marco Canini, Chen-Yu Ho, Jacob Nelson, Panos Kalnis, Changhoon Kim, Arvind Krishnamurthy, Masoud Moshref, Dan R. K. Ports, Peter Richtarik

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Training machine learning models in parallel is an increasingly important workload. We accelerate distributed parallel training by designing a communication primitive that uses a programmable switch dataplane to execute a key step of the training process. Our approach, SwitchML, reduces the volume of exchanged data by aggregating the model updates from multiple workers in the network. We co-design the switch processing with the end-host protocols and ML frameworks to provide an efficient solution that speeds up training by up to 5.5× for a number of real-world benchmark models.
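To make the abstract's core idea concrete, the following is a minimal, illustrative sketch (not the SwitchML implementation) of in-network aggregation: rather than each worker exchanging its full model update with every peer, a network element sums the updates element-wise and returns a single aggregated result. The chunk size, fixed-point scale, and all function names below are hypothetical choices for illustration only.

```python
import numpy as np

CHUNK_ELEMS = 64   # hypothetical per-packet payload size, in elements
SCALE = 1 << 16    # fixed-point scaling, since switch ALUs add integers

def to_fixed_point(update: np.ndarray) -> np.ndarray:
    """Quantize a float32 update to int32 so it can be summed in-network."""
    return np.round(update * SCALE).astype(np.int32)

def from_fixed_point(agg: np.ndarray, num_workers: int) -> np.ndarray:
    """Dequantize and average the aggregated integer sum."""
    return agg.astype(np.float32) / (SCALE * num_workers)

def switch_aggregate(chunks_from_workers: list) -> np.ndarray:
    """Element-wise integer sum: the operation performed in the network."""
    return np.sum(np.stack(chunks_from_workers), axis=0, dtype=np.int32)

def in_network_allreduce(updates: list) -> np.ndarray:
    """Aggregate full updates chunk by chunk, mimicking streamed packets."""
    quantized = [to_fixed_point(u) for u in updates]
    result = np.empty_like(quantized[0])
    for start in range(0, quantized[0].size, CHUNK_ELEMS):
        end = start + CHUNK_ELEMS
        result[start:end] = switch_aggregate([q[start:end] for q in quantized])
    return from_fixed_point(result, len(updates))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    worker_updates = [rng.standard_normal(256).astype(np.float32) for _ in range(4)]
    aggregated = in_network_allreduce(worker_updates)
    # Each worker receives one averaged update instead of one update per peer.
    print(np.allclose(aggregated, np.mean(worker_updates, axis=0), atol=1e-4))
```

The sketch highlights why aggregation reduces exchanged data: every worker sends its update once and receives a single aggregated result, instead of exchanging updates pairwise with all other workers.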
Original language: English (US)
Title of host publication: 18th USENIX Symposium on Networked Systems Design and Implementation
Pages: 785-808
Number of pages: 24
State: Published - 2021
