Publication
INFORMS 2021
Talk
Asynchronous decentralized accelerated stochastic gradient descent
Abstract
In this talk, we introduce an asynchronous decentralized accelerated stochastic gradient descent algorithm for decentralized stochastic optimization. Since communication and synchronization costs are the major bottlenecks, we aim to reduce these costs via randomization techniques. Our main contribution is a class of accelerated randomized decentralized algorithms for solving general convex composite problems. We establish O(1/ε) (resp., O(1/√ε)) communication complexity and O(1/ε²) (resp., O(1/ε)) sampling complexity for solving general convex (resp., strongly convex) problems. It is worth mentioning that the complexity of our proposed algorithm depends only sublinearly on the Lipschitz constant when a smooth component is present in the objective function.
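To make the decentralized setting concrete, below is a minimal Python sketch of a plain synchronous decentralized SGD step with gossip averaging over a communication graph. This is only a toy baseline for intuition, not the asynchronous accelerated randomized scheme described in the talk; the names (decentralized_sgd_step, mix_matrix, local_grad, W) and the least-squares example are hypothetical.

```python
# A minimal sketch of a *synchronous* decentralized SGD step with gossip
# averaging, for illustration only -- not the asynchronous accelerated
# randomized algorithm from the talk. All names here are hypothetical.
import numpy as np

def decentralized_sgd_step(params, mix_matrix, local_grad, step_size):
    """One round: each worker averages its neighbors' iterates (gossip),
    then takes a local stochastic gradient step.

    params:      (n_workers, dim) array, one parameter vector per worker
    mix_matrix:  (n_workers, n_workers) doubly stochastic gossip matrix
                 encoding the communication graph
    local_grad:  callable(i, x) -> stochastic gradient at worker i
    """
    # Consensus step: each worker communicates only with graph neighbors.
    mixed = mix_matrix @ params
    # Local stochastic gradient step at every worker.
    grads = np.stack([local_grad(i, mixed[i]) for i in range(params.shape[0])])
    return mixed - step_size * grads


# Toy usage: 4 workers on a ring, a least-squares objective split across workers.
rng = np.random.default_rng(0)
n_workers, dim, n_samples = 4, 3, 10
A = rng.normal(size=(n_workers, n_samples, dim))
b = rng.normal(size=(n_workers, n_samples))
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])  # doubly stochastic ring mixing matrix

def grad(i, x):
    j = rng.integers(n_samples)           # one random sample -> stochastic gradient
    return A[i, j] * (A[i, j] @ x - b[i, j])

x = np.zeros((n_workers, dim))
for _ in range(500):
    x = decentralized_sgd_step(x, W, grad, step_size=0.05)
```

In this baseline, every round costs one gossip communication per worker per iteration; the randomization and acceleration techniques discussed in the talk are aimed at reducing exactly this per-iteration communication and synchronization burden.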