TY - CONF
T1 - Scaling Distributed Machine Learning with In-Network Aggregation
AU - Sapio, Amedeo
AU - Canini, Marco
AU - Ho, Chen-Yu
AU - Nelson, Jacob
AU - Kalnis, Panos
AU - Kim, Changhoon
AU - Krishnamurthy, Arvind
AU - Moshref, Masoud
AU - Ports, Dan R. K.
AU - Richtarik, Peter
PY - 2021
Y1 - 2021
AB - Training machine learning models in parallel is an increasingly important workload. We accelerate distributed parallel training by designing a communication primitive that uses a programmable switch dataplane to execute a key step of the training process. Our approach, SwitchML, reduces the volume of exchanged data by aggregating the model updates from multiple workers in the network. We co-design the switch processing with the end-host protocols and ML frameworks to provide an efficient solution that speeds up training by up to 5.5× for a number of real-world benchmark models.
UR - http://hdl.handle.net/10754/631179
UR - https://www.usenix.org/conference/nsdi21/presentation/sapio
M3 - Conference contribution
SP - 785
EP - 808
BT - 18th USENIX Symposium on Networked Systems Design and Implementation
ER -