On the global and linear convergence of the generalized alternating direction method of multipliers (ADMM)

W. Deng and W. Yin

Submitted for publication

Overview

A large number of optimization problems can be written in the form

\min_{x,y}\; f(x)+g(y), \quad \mbox{s.t.}~Ax+By = b,

where f and g are extended-value convex functions. For problems of this form, a very effective approach is the alternating direction method of multipliers (ADM or ADMM), which solves a sequence of subproblems involving f and g one at a time. However, its practical effectiveness has not been matched by a provably fast rate of convergence; only sublinear rates such as O(1/k) and O(1/k^2) have been established in the literature.
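
For concreteness, the classic ADMM iteration for this problem takes the following form (one common unscaled variant with penalty parameter \beta > 0; sign conventions for the multiplier update vary across references):

\begin{align*}
x^{k+1} &= \operatorname*{arg\,min}_{x}\; f(x) - \langle \lambda^k, Ax \rangle + \tfrac{\beta}{2}\,\| Ax + By^k - b \|^2, \\
y^{k+1} &= \operatorname*{arg\,min}_{y}\; g(y) - \langle \lambda^k, By \rangle + \tfrac{\beta}{2}\,\| Ax^{k+1} + By - b \|^2, \\
\lambda^{k+1} &= \lambda^k - \beta\,( Ax^{k+1} + By^{k+1} - b ).
\end{align*}

Each subproblem involves only one of f and g, which is what makes the method attractive when the two functions are individually simple.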

In many common problems, one of the two objective functions is strictly convex and has a Lipschitz continuous gradient. This paper shows that, under this assumption together with certain rank assumptions on A and B, global linear convergence can be guaranteed. The result applies to the generalized ADMM, which allows the subproblems to be solved inexactly, and hence faster, in certain controlled ways. The derived rate of convergence also provides theoretical guidance for optimizing the ADMM parameters. In addition, the paper makes meaningful extensions to the existing global convergence theory of the generalized ADMM.
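
To make the linear rate concrete, below is a small numerical sketch (not from the paper; the instance, parameters, and variable names are all illustrative) that runs scaled-form ADMM on a toy problem in which f is strictly convex with Lipschitz continuous gradient, and records the primal residual:

import numpy as np

# Illustrative toy instance (not from the paper):
# f(x) = 0.5 * ||C x - d||^2 is strictly convex with Lipschitz gradient
# (C has full column rank), g(y) = tau * ||y||_1, and the constraint is
# x - y = 0, i.e., A = I and B = -I in the template above.
rng = np.random.default_rng(0)
m, n = 40, 10
C = rng.standard_normal((m, n))   # full column rank with probability 1
d = rng.standard_normal(m)
tau, beta = 0.1, 1.0              # l1 weight and ADMM penalty parameter

M = C.T @ C + beta * np.eye(n)    # normal-equations matrix of the x-step

def soft(v, t):
    # Soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n); y = np.zeros(n); u = np.zeros(n)  # u: scaled multiplier
res = []
for k in range(51):
    x = np.linalg.solve(M, C.T @ d + beta * (y - u))  # x-subproblem
    y = soft(x + u, tau / beta)                       # y-subproblem
    u = u + x - y                                     # scaled dual update
    res.append(np.linalg.norm(x - y))                 # primal residual

for k in (0, 10, 20, 30, 40, 50):
    print(f"iter {k:2d}: primal residual = {res[k]:.3e}")

On instances like this one, the residual typically shrinks by a roughly constant factor per iteration, which is the linear (geometric) behavior the paper's theory predicts.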

Citation

W. Deng and W. Yin, On the global and linear convergence of the generalized alternating direction method of multipliers, Rice CAAM technical report 12-14, 2012.
