Asynchronous decentralized parallel stochastic gradient descent
Abstract
Most commonly used distributed machine learning systems are either synchronous or centralized asynchronous. Synchronous algorithms like AllReduce-SGD perform poorly in a heterogeneous environment, while asynchronous algorithms using a parameter server suffer from 1) a communication bottleneck at the parameter servers when there are many workers, and 2) significantly worse convergence when the traffic to the parameter server is congested. Can we design an algorithm that is robust in a heterogeneous environment, while being communication efficient and maintaining the best-possible convergence rate? In this paper, we propose an asynchronous decentralized parallel stochastic gradient descent algorithm (AD-PSGD) that satisfies all the above requirements. Our theoretical analysis shows that AD-PSGD converges at the same optimal O(1/K) rate as SGD and achieves linear speedup with respect to the number of workers. Empirically, AD-PSGD outperforms the best of decentralized parallel SGD (D-PSGD), asynchronous parallel SGD (A-PSGD), and standard data-parallel SGD (AllReduce-SGD), often by orders of magnitude in a heterogeneous environment. When training ResNet-50 on ImageNet with up to 128 GPUs, AD-PSGD converges (w.r.t. epochs) similarly to AllReduce-SGD, but each epoch can be up to 4-8x faster than its synchronous counterparts in a network-sharing HPC environment.
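To make the decentralized asynchronous update pattern concrete, below is a minimal, single-process sketch (in Python, using a toy least-squares objective and a ring topology, all of which are illustrative assumptions not taken from the paper). Each worker holds a local model copy, averages it with one randomly chosen neighbor, and applies a local stochastic gradient step; asynchrony is only emulated here by activating workers in random order, whereas the actual AD-PSGD implementation overlaps communication and computation across processes.

```python
# Hypothetical simulation of a decentralized asynchronous SGD worker loop.
# Not the authors' implementation: topology, objective, and scheduling are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_workers, dim, lr, steps = 8, 10, 0.05, 2000

# Toy least-squares data, split across workers (hypothetical).
A = rng.normal(size=(n_workers, 64, dim))
b = rng.normal(size=(n_workers, 64))
x = [np.zeros(dim) for _ in range(n_workers)]   # one local model copy per worker

def stochastic_grad(w, xw):
    """Minibatch gradient of 0.5*||A_w xw - b_w||^2 on worker w's local data."""
    idx = rng.choice(64, size=8, replace=False)
    Aw, bw = A[w][idx], b[w][idx]
    return Aw.T @ (Aw @ xw - bw) / len(idx)

for t in range(steps):
    w = rng.integers(n_workers)                  # worker that "fires" (emulated asynchrony)
    nbr = (w + rng.choice([-1, 1])) % n_workers  # random neighbor on a ring topology
    g = stochastic_grad(w, x[w])                 # gradient computed on the local copy
    avg = 0.5 * (x[w] + x[nbr])                  # pairwise model averaging (gossip step)
    x[w] = avg - lr * g                          # active worker applies its gradient
    x[nbr] = avg                                 # neighbor adopts the averaged model

# The local copies should stay close to each other as averaging mixes information.
print("max spread across workers:",
      max(np.linalg.norm(x[i] - x[0]) for i in range(n_workers)))
```

In this sketch the pairwise averaging plays the role that a central parameter server or a global AllReduce would otherwise play: no worker waits for all others, and each communication involves only one neighbor.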