ACSSC 2020
Conference paper
Distributed Prediction-Correction ADMM for Time-Varying Convex Optimization
Abstract
This paper introduces a dual-regularized ADMM approach to distributed, time-varying optimization. The proposed algorithm is designed in a prediction-correction framework, in which the computing nodes predict the future local costs based on past observations and exploit this information to solve the time-varying problem more effectively. To guarantee linear convergence of the algorithm, a regularization is applied to the dual variable, yielding a dual-regularized ADMM. We analyze the convergence properties of the time-varying algorithm, as well as the regularization error of the dual-regularized ADMM. Numerical results show that in time-varying settings, despite the regularization error, the dual-regularized ADMM can outperform inexact gradient-based methods, as well as exact dual decomposition techniques, in terms of asymptotic error and consensus constraint violation.
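The abstract does not give the update rules, so the following is only a minimal, hypothetical Python sketch of the prediction-correction pattern it describes, on a toy time-varying consensus problem with quadratic local costs. Everything here is an assumption for illustration: the local costs f_i(x; t) = 0.5(x - a_i(t))^2, the linear extrapolation used as the prediction step, the parameter names (rho, eps, P, C), the helper functions admm_step and target, and the damped dual update, which stands in for the paper's dual regularization without claiming to match its exact form.

```python
# Hypothetical sketch of prediction-correction ADMM with a regularized
# dual update, for the time-varying consensus problem
#   min_x  sum_i f_i(x; t),   f_i(x; t) = 0.5 * (x - a_i(t))**2,
# written in splitting form:  min sum_i f_i(x_i; t)  s.t.  x_i = z.
# Nodes extrapolate the drifting targets a_i(t) from past observations
# (prediction) and refine once the true cost is revealed (correction).
# All update rules and parameters below are illustrative assumptions.
import numpy as np

n = 5            # number of nodes
rho = 1.0        # ADMM penalty parameter (assumed)
eps = 0.05       # dual regularization weight (assumed)
P, C = 2, 2      # prediction / correction steps per sampling time

def admm_step(a, x, z, y):
    """One ADMM sweep with a damped (regularized) dual update."""
    # x-update: argmin_x 0.5*(x - a_i)^2 + y_i*(x - z) + rho/2*(x - z)^2
    x = (a - y + rho * z) / (1.0 + rho)
    # z-update: average of x_i + y_i / rho enforces consensus
    z = np.mean(x + y / rho)
    # dual ascent; the (1 - eps) factor damps the multipliers,
    # a simple stand-in for regularizing the dual
    y = (1.0 - eps) * y + rho * (x - z)
    return x, z, y

def target(t):
    """Time-varying local minimizers a_i(t) (synthetic example)."""
    return np.sin(0.1 * t + np.arange(n))

x, z, y = np.zeros(n), 0.0, np.zeros(n)
a_prev, a_curr = target(-1), target(0)
for t in range(1, 50):
    # prediction: extrapolate the next costs from past observations
    a_pred = 2.0 * a_curr - a_prev          # linear extrapolation
    for _ in range(P):
        x, z, y = admm_step(a_pred, x, z, y)
    # correction: the new cost is observed, refine the iterate
    a_prev, a_curr = a_curr, target(t)
    for _ in range(C):
        x, z, y = admm_step(a_curr, x, z, y)
    opt = np.mean(a_curr)                   # exact time-t consensus optimum
    print(f"t={t:2d}  tracking error={abs(z - opt):.4f}")
```

In this toy setting the tracking error stays bounded rather than vanishing, which mirrors the asymptotic-error behavior the abstract refers to; the damping factor trades a small steady-state bias for faster, more stable multiplier dynamics.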