function [X,out] = lbreg_mtx_bbls(A,AT,b,alpha,opts)
% lbreg_mtx_bbls: linearized Bregman iteration with Barzilai-Borwein (BB) stepsize and nonmonotone line search
%
% minimize |X|_* + 1/(2*alpha) |X|_F^2
% subject to A(X) = b
%
% input:
%   A: linear matrix operator, as a function handle
%   AT: the adjoint operator of A, as a function handle
%   b: constraint vector
%   alpha: smoothing parameter, typical value: 1 to 10 times the estimated spectral norm of X, norm(X,2)
%   opts.
%     stepsize: dual ascent default stepsize, see below
%     tol: stop iteration once norm(A(X)-b) < tol*norm(b), default: 1e-4
%     maxit: max number of iterations, default: 500
%     maxT: max running time in seconds, default: 1000
%     X_ref: if provided, norm(X^k - X_ref,'fro') is computed and saved in out.hist_err, default: []
%
% output:
%   X: solution matrix
%   out.
%     iter: number of iterations
%     hist_obj: objective value at each iteration
%     hist_res: |A(X)-b|_2 at each iteration
%     hist_err: if opts.X_ref is provided, contains norm(X^k - X_ref,'fro'); otherwise, set to []
%
% Algorithm:
%   Linearized Bregman is the dual gradient ascent iteration.
%   The dual problem is: maximize b'y - alpha/2 |shrink(A'y,1)|_F^2,
%   where shrink(z,1) = z - proj_[-1,1](z) = sign(z).*max(abs(z)-1,0), applied to the singular values of A'y.
%   Let y be the dual variable. The iteration is
%     y^{k+1} = y^k + stepsize (b - alpha A(shrink(A'y^k,1)))
%   The primal variable X is recovered as X^k = alpha shrink(A'y^k,1).
%
% How to set alpha:
%   There exists alpha0 such that any alpha >= alpha0 gives a solution of: minimize |X|_* subject to A(X) = b.
%   alpha0 depends on the data, but a typical value is 1 to 10 times the estimated spectral norm of the solution, norm(solution_X,2).
%
% How to set opts.stepsize:
%   Too large a stepsize causes divergence; too small a stepsize causes slow convergence.
%   A conservative value is 1.99/alpha/norm(A)^2, which guarantees convergence.
%   If norm(A)^2 is expensive to compute, one can compute norm(A*A') or norm(A'*A) (choose the easier one!),
%   or use the method in arXiv:1104.2076.
%
% Stepsize:
%   From iteration 2 on, the dual ascent stepsize is determined by the Barzilai-Borwein method and a nonmonotone line search.
%
% How the algorithm is stopped: see "% stopping" below.
%
% More information can be found at
%   http://www.caam.rice.edu/~optimization/linearized_bregman
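As a self-contained illustration of the dual-ascent scheme described above (not part of this MATLAB file), here is a minimal Python/NumPy sketch using a fixed stepsize and an entrywise sampling operator as A; the helper `svt` plays the role of the singular value soft-thresholding subfunction below, and the problem sizes are arbitrary toy choices:

```python
import numpy as np

def svt(Z, mu):
    """Singular value soft-thresholding: shrink the singular values of Z by mu."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - mu, 0.0)) @ Vt

# toy problem: recover a low-rank X from sampled entries, A(X) = X[mask]
rng = np.random.default_rng(0)
n = 8
X_true = np.outer(rng.standard_normal(n), rng.standard_normal(n))  # rank-1 target
mask = rng.random((n, n)) < 0.6          # entrywise sampling operator, norm(A) = 1
b = X_true[mask]

alpha = 5.0 * np.linalg.norm(X_true, 2)  # ~1-10x the spectral norm of the solution
stepsize = 1.99 / alpha                  # conservative 1.99/alpha/norm(A)^2
y = np.zeros(b.size)
X = np.zeros((n, n))
for _ in range(3000):
    ATy = np.zeros((n, n))
    ATy[mask] = y                        # adjoint: A'(y)
    X = alpha * svt(ATy, 1.0)            # primal from dual: X = alpha*shrink(A'y,1)
    y = y + stepsize * (b - X[mask])     # dual gradient ascent: y += t*(b - A(X))
```

With the conservative stepsize the residual `b - X[mask]` decays steadily; the full MATLAB routine accelerates this plain iteration with BB stepsizes and a line search.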
Parameters and defaults
if isfield(opts,'stepsize'), stepsize0 = opts.stepsize; else stepsize0 = 2/alpha; end % the default assumes norm(A)=1
if isfield(opts,'tol'),      tol   = opts.tol;   else tol   = 1e-4; end
if isfield(opts,'maxit'),    maxit = opts.maxit; else maxit = 500;  end
if isfield(opts,'maxT'),     maxT  = opts.maxT;  else maxT  = 1e3;  end
if isfield(opts,'X_ref'),    X_ref = opts.X_ref; else X_ref = []; out.hist_err = []; end
Data preprocessing and initialization
m = length(b);
y = zeros(m,1);
ATy = AT(y);       % keeps A'y, which is frequently used in the algorithm
res = b;           % residual (b - A(X))
norm_b = norm(b);
% parameters for the nonmonotone line search
lsObj = 0;
lsQ = 1;
Main iterations
start_time = tic;
for k = 1:maxit
    % --- Barzilai-Borwein (BB) stepsize ---
    if k > 1
        y_diff = y - yp;
        stepsize = (y_diff.'*y_diff) / (y_diff.'*(resp-res));
        if isinf(stepsize) || isnan(stepsize); stepsize = stepsize0; end % in case resp-res = 0, use the default stepsize
    else
        stepsize = stepsize0 + 1/max(abs(reshape(AT(b),[],1))); % the 2nd part is the stepsize that makes X nonzero
    end
    % --- y-update ---
    yp = y; ATyp = ATy;       % save prior y and A'y for the BB stepsize and line search
    y = y + stepsize * res;   % res = (b - A(X)) = dual objective gradient
    ATy = AT(y);
    ATy_diff = ATy - ATyp;    % saved to avoid recomputation during backtracking
    % --- X-update ---
    X = alpha * singular_value_softthresholding(ATy,1);
    obj = b.'*y - norm(X,'fro')^2/alpha/2; % dual objective
    % --- nonmonotone line search ---
    % initialization
    lsRho = 1;                             % discount factor
    lsConst = 1e-3*stepsize*(norm(res)^2); % discounted expected amount of ascent
    % backtracking
    while obj < lsObj + lsRho * lsConst    % while not enough ascent
        lsRho = lsRho * 0.5;               % reduce the discount factor
        y = yp + (lsRho*stepsize) * res;   % apply the reduced stepsize
        % compute the updated dual objective
        ATy = ATyp + lsRho * ATy_diff;
        X = alpha * singular_value_softthresholding(ATy,1);
        obj = b.'*y - norm(X,'fro')^2/alpha/2;
    end
    % update the averaged objective lsObj for the next line search
    lsQp = lsQ;
    lsQ = 0.85*lsQp + 1;
    lsObj = (0.85*lsQp*lsObj + obj)/lsQ;
    % --- update other quantities ---
    AX = A(X);
    resp = res;     % save the prior res for the next BB stepsize
    res = b - AX;   % used in the next BB stepsize and y-update
    % --- diagnostics, reporting, stopping checks ---
    % reporting
    out.hist_obj(k) = obj;        % dual objective
    out.hist_res(k) = norm(res);  % residual size |b - A(X)|_2
    out.hist_stp(k) = stepsize;
    if ~isempty(X_ref); out.hist_err(k) = norm(X - X_ref,'fro'); end
    % stopping
    if toc(start_time) > maxT; break; end           % running time check
    if k > 1 && norm(res) < tol*norm_b; break; end  % relative primal residual check
end
out.iter = k;
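For readers unfamiliar with the two stepsize devices used in the loop, here is a minimal Python/NumPy sketch (the function names are illustrative, not part of this file): the BB formula t = (s's)/(s'(g_prev - g_new)) with a fallback when the denominator vanishes, and the Zhang-Hager-style weighted average of past objectives that serves as the nonmonotone reference value:

```python
import numpy as np

def bb_stepsize(y_diff, g_prev, g_new, fallback):
    # Barzilai-Borwein: t = (s's) / (s'(g_prev - g_new)); as in the loop above,
    # fall back to the default stepsize when the result is Inf/NaN (g_prev = g_new)
    t = (y_diff @ y_diff) / (y_diff @ (g_prev - g_new))
    return fallback if not np.isfinite(t) else t

def nonmonotone_reference(ls_obj, ls_q, obj, eta=0.85):
    # weighted average of past objectives: the next line search only requires
    # sufficient ascent relative to this reference, not to the last objective
    ls_q_new = eta * ls_q + 1.0
    ls_obj_new = (eta * ls_q * ls_obj + obj) / ls_q_new
    return ls_obj_new, ls_q_new
```

Accepting steps that merely beat the averaged objective (rather than the previous one) lets the occasionally over-long BB steps through while still guarding against divergence.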
end % end of main function
Subfunction for singular value soft-thresholding
function res = singular_value_softthresholding(X, mu)
[U,S,V] = svd(X,'econ');
s = max(0, diag(S) - mu);  % singular values are nonnegative, so no abs is needed
res = U*diag(s)*V';
end