## Contents

```
function [x,out] = lbreg_bbls(A,b,alpha,opts)
% lbreg_bbls: linearized Bregman iteration with Barzilai-Borwein (BB) stepsize and nonmonotone line search
%   minimize |x|_1 + 1/(2*alpha) |x|_2^2
%   subject to Ax = b
%
% input:
%       A: constraint matrix
%       b: constraint vector
%       alpha: smoothing parameter, typical value: 1 to 10 times estimated norm(x,inf)
%       opts.
%           stepsize: dual ascent default stepsize, see below
%           tol: stop iteration once norm(Ax-b)<tol*norm(b), default: 1e-4
%           maxit: max number of iterations, default: 500
%           maxT:  max running time in seconds, default: 1000
%           x_ref: if provided, norm(x^k - x_ref) is computed and saved in out.hist_err, default: []
%
% output:
%       x: solution vector
%       out.
%           iter: number of iterations
%           hist_obj: objective value at each iteration
%           hist_res: |Ax-b|_2 at each iteration
%           hist_err: if opts.x_ref is provided, contains norm(x^k - x_ref); otherwise, will be set to []
%
% Algorithm:
%   Linearized Bregman is gradient ascent applied to the dual problem.
%   The dual problem is: maximize b'y - alpha/2 |shrink(A'y,1)|^2,  where shrink(z,1) = z - proj_[-1,1](z) = sign(z).*max(abs(z)-1,0)
%   Let y be the dual variable. The iteration is
%     y^{k+1} = y^k + stepsize (b - alpha A shrink(A'y^k,1))
%   The primal variable x is recovered as x^k = alpha shrink(A'y^k,1)
%
% How to set alpha:
%   There exists alpha0 so that any alpha >= alpha0 gives the solution to minimize |x|_1 subject to Ax = b.
%   The value of alpha0 depends on the data, but a typical choice for alpha is 1 to 10 times an estimate of norm(solution_x,inf)
%
% How to set opts.stepsize:
%   This value is the default stepsize of dual gradient ascent.
%   A very conservative value is 2/alpha/norm(A)^2, which guarantees convergence.
%   If norm(A)^2 is expensive to compute, one can estimate norm(A*A') or norm(A'*A) (choose the easier one!),
%   or use the method in arXiv:1104.2076.
%   If opts.stepsize is not set, then the code uses 2/alpha/normest(A*A',1e-1)
%
% Stepsize:
%   From iteration 2 onward, the dual ascent stepsize is determined by the Barzilai-Borwein method and a nonmonotone line search
%
% How is the algorithm stopped: see "% stopping" below
%
% http://www.caam.rice.edu/~optimization/linearized_bregman
```
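The dual-ascent view described above can be sketched in a few lines of Python/NumPy. This is a minimal illustration with a fixed, conservative stepsize (not the BB/line-search variant implemented below); the names `shrink` and `lbreg_fixed` are illustrative, not part of the MATLAB package.

```python
import numpy as np

def shrink(z):
    """Soft-thresholding: shrink(z,1) = sign(z) .* max(|z|-1, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0)

def lbreg_fixed(A, b, alpha, stepsize, maxit=5000, tol=1e-6):
    """Plain linearized Bregman: gradient ascent on the dual with a fixed stepsize."""
    y = np.zeros(A.shape[0])
    for _ in range(maxit):
        x = alpha * shrink(A.T @ y)   # recover primal variable from dual
        res = b - A @ x               # residual = dual objective gradient
        if np.linalg.norm(res) <= tol * np.linalg.norm(b):
            break
        y = y + stepsize * res        # dual ascent step
    return alpha * shrink(A.T @ y)
```

A safe stepsize here is the conservative bound mentioned below, e.g. `1/(alpha * np.linalg.norm(A, 2)**2)`; the BB/line-search code in this file exists precisely because such a fixed stepsize can be slow.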

## Parameters and defaults

```
if isfield(opts,'stepsize'),  stepsize0 = opts.stepsize;  else stepsize0 = 2/alpha/normest(A*A',1e-1); end
if isfield(opts,'tol'),    tol = opts.tol;     else tol = 1e-4;   end
if isfield(opts,'maxit'),  maxit = opts.maxit; else maxit = 500;  end
if isfield(opts,'maxT'),   maxT = opts.maxT;   else maxT = 1e3;   end
if isfield(opts,'x_ref'),  x_ref = opts.x_ref; else x_ref = []; out.hist_err = []; end
```
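For readers porting this code, the `isfield`/`else` pattern above is just an overlay of user options on defaults. A dict-based Python sketch (names are illustrative; the `stepsize` default is omitted because it depends on `A` and `alpha` and is computed separately):

```python
def resolve_opts(opts, defaults):
    """Overlay user-supplied options on defaults, mirroring the isfield/else pattern."""
    merged = dict(defaults)
    merged.update(opts or {})
    return merged

defaults = {'tol': 1e-4, 'maxit': 500, 'maxT': 1e3, 'x_ref': None}
```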

## Data preprocessing and initialization

```
[m,n] = size(A);

y = zeros(m,1);
ATy = zeros(n,1); % keeps A'y, which is frequently used in the algorithm
res = b; % residual (b - Ax)
norm_b = norm(b);

shrink = @(z) sign(z).*max(0,abs(z)-1);

% parameters for nonmonotone line search
lsObj = 0;
lsQ = 1;
```
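As a quick sanity check of the shrink operator defined above, the same soft-thresholding in NumPy (illustrative only): entries with magnitude at most 1 map to zero, and larger entries move toward zero by 1.

```python
import numpy as np

# shrink(z,1) = z - proj_[-1,1](z) = sign(z) .* max(|z| - 1, 0)
shrink = lambda z: np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0)
```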

## Main iterations

```
start_time = tic;

for k = 1:maxit

    % --- Barzilai-Borwein (BB) stepsize ---
    if k > 1
        y_diff = y - yp;
        stepsize = y_diff.'*y_diff / (y_diff.'*(resp-res));
        if isinf(stepsize); stepsize = stepsize0; end % in case resp - res = 0, fall back to the default stepsize
    else
        stepsize = stepsize0 + 1/max(abs(b.'*A)); % the 2nd term is large enough to make x nonzero after the first step
    end

    % --- y-update ---
    yp = y; ATyp = ATy; % save pre-update y and A'y for the next BB stepsize and line search
    y = y + stepsize * res; % res = (b - Ax) = dual objective gradient
    ATy = (y.'*A).';
    ATy_diff = ATy - ATyp; % saved to avoid recomputation during backtracking

    % --- x-update ---
    x = alpha * shrink(ATy);
    obj = b.'*y - norm(x)^2/alpha/2; % dual objective

    % --- nonmonotone line search ---
    % initialization
    lsRho = 1; % discount factor
    lsConst = 1e-3*stepsize*(norm(res)^2); % discounted expected amount of ascent
    % backtracking
    while obj < lsObj + lsRho * lsConst % while (not enough ascent)
        lsRho = lsRho * 0.5; % reduce discount factor
        y = yp + (lsRho*stepsize) * res; % apply reduced stepsize
        % recompute the dual objective
        ATy = ATyp + lsRho * ATy_diff;
        x = alpha * shrink(ATy);
        obj = b.'*y - norm(x)^2/alpha/2;
    end
    % update averaged objective lsObj for the next line search
    lsQp = lsQ; lsQ = 0.85*lsQp + 1;
    lsObj = (0.85*lsQp*lsObj + obj)/lsQ;

    % --- update other quantities ---
    Ax = A*x;
    resp = res; % save pre-update res for the next BB stepsize
    res = b - Ax; % res is used in the next BB stepsize and y-update

    % --- diagnostics, reporting, stopping checks ---
    % reporting
    out.hist_obj(k) = obj; % dual objective
    out.hist_res(k) = norm(res); % residual size |b - Ax|_2
    out.hist_stp(k) = stepsize;
    if ~isempty(x_ref); out.hist_err(k) = norm(x - x_ref); end

    % stopping
    if toc(start_time) > maxT; break; end % running time check
    if k > 1 && norm(res) < tol*norm_b; break; end % relative primal residual check
end

out.iter = k;
```
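The two stepsize ingredients of the loop — the BB formula and the Zhang-Hager averaged objective used as the nonmonotone reference value — can be isolated in a short Python sketch. Names are illustrative; the nonpositive/non-finite guard in `bb_stepsize` is an extra safeguard beyond the `isinf` check in the MATLAB code.

```python
import numpy as np

def bb_stepsize(y, yp, res, resp, fallback):
    """BB1 stepsize for dual ascent: t = s's / (s'(resp - res)), s = y - yp.

    resp - res is the change in the negated dual gradient; fall back to the
    default stepsize when the denominator is nonpositive or not finite.
    """
    s = y - yp
    denom = s @ (resp - res)
    if denom <= 0 or not np.isfinite(denom):
        return fallback
    return (s @ s) / denom

def zhang_hager_update(lsObj, lsQ, obj, eta=0.85):
    """Averaged reference value for the nonmonotone ascent test:
    Q+ = eta*Q + 1,  C+ = (eta*Q*C + obj) / Q+."""
    lsQ_new = eta * lsQ + 1.0
    lsObj_new = (eta * lsQ * lsObj + obj) / lsQ_new
    return lsObj_new, lsQ_new
```

With `eta = 0.85`, the averaged objective weights recent iterates more heavily than a plain running mean, which is what lets occasional non-ascent steps through while still forcing long-run ascent.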