On the global and linear convergence of the generalized alternating direction method of multipliers (ADMM)
Submitted for publication
Overview
A large number of optimization problems can be reformulated into the form

    minimize_{x,y}  f(x) + g(y)   subject to  Ax + By = b,

where f and g are extended-value convex functions. On problems of this form,
a very effective approach is the alternating direction method of
multipliers (ADM or ADMM), which solves a sequence of subproblems involving
f and g one at a time. However, its effectiveness has not been
matched by a provably fast rate of convergence;
only sublinear rates such as O(1/k) and O(1/k^2) were established in the
literature.
In many common problems, one of the two objective functions is strictly convex and
has a Lipschitz continuous gradient. This paper shows that, in this case, global linear
convergence can be guaranteed under certain rank
assumptions on the matrices A and B. The result applies to the generalized ADMM, which allows the subproblems to be solved
approximately, and hence more quickly, in certain manners. The derived rate of convergence also provides some theoretical guidance for tuning the ADMM
parameters. In addition, this paper makes meaningful extensions to the existing global convergence theory of the generalized
ADMM.
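The alternating-subproblem structure described above can be sketched on a toy instance; the specific objective, penalty parameter rho, and closed-form updates below are illustrative choices for exposition, not taken from the paper. With f(x) = (1/2)(x - 3)^2, g(y) = (1/2)(y - 1)^2, A = 1, B = -1, and b = 0, both functions are strongly convex with Lipschitz gradients, which is the kind of setting in which linear convergence can be expected.

```python
def admm(rho=1.0, iters=50):
    """Scaled-form ADMM on: min (1/2)(x-3)^2 + (1/2)(y-1)^2  s.t. x - y = 0.

    The constraint forces x = y, so the minimizer is x* = y* = 2.
    u is the scaled dual variable for the residual x - y.
    """
    x = y = u = 0.0
    for _ in range(iters):
        # x-update: argmin_x f(x) + (rho/2)(x - y + u)^2, in closed form
        x = (3.0 + rho * (y - u)) / (1.0 + rho)
        # y-update: argmin_y g(y) + (rho/2)(x - y + u)^2, in closed form
        y = (1.0 + rho * (x + u)) / (1.0 + rho)
        # dual ascent step on the constraint residual
        u = u + (x - y)
    return x, y

x, y = admm()
print(x, y)  # both iterates approach 2.0 at a linear rate
```

Each pass touches f and g one at a time through a small subproblem (here solvable in closed form), which is exactly why ADMM is attractive when the two objective terms are easy to handle separately.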
Citation
W. Deng and W. Yin, On the global and linear convergence of the generalized alternating direction method of multipliers, Rice CAAM technical report 12-14, 2012.