## Contents

```
function [X,out] = lbreg_mtx_fixedstep(A,AT,b,alpha,opts)
% lbreg_mtx_fixedstep: linearized Bregman iteration with fixed stepsize
%   minimize |X|_* + 1/(2*alpha) |X|_F^2
%   subject to A(X) = b
%
% input:
%       A: linear matrix operator, as a function handle
%       AT: the adjoint operator of A, as a function handle
%       b: constraint vector
%       alpha: smoothing parameter, typical value: 1 to 10 times estimated norm(X,2), the spectral norm of X
%       opts.
%           stepsize: dual ascent stepsize, see below
%           tol: stop iteration once norm(A(X)-b) < tol*norm(b), default: 1e-4
%           maxit: max number of iterations, default: 500
%           maxT:  max running time in second, default: 1000
%           X_ref: if provided, norm(X^k - X_ref,'fro') is computed and saved in out.hist_err, default: []
%
% output:
%       X: solution matrix
%       out.
%           iter: number of iterations
%           hist_obj: objective value at each iteration
%           hist_res: |A(X)-b|_2 at each iteration
%           hist_err: if opts.X_ref is provided, contains norm(X^k - X_ref,'fro'); otherwise, will be set to []
%
% Algorithm:
%   Linearized Bregman is the dual gradient ascent iteration.
%   The dual problem is: maximize b'y - alpha/2 |shrink(AT(y),1)|_F^2, where shrink(Z,1) is
%   singular value soft-thresholding: for the SVD Z = U*S*V', shrink(Z,1) = U*max(S-1,0)*V'
%   Let y be the dual variable. The iteration is
%     y^{k+1} = y^k + stepsize * (b - alpha * A(shrink(AT(y^k),1)))
%   Primal variable X is obtained as X^k = alpha * shrink(AT(y^k),1)
%
% How to set alpha:
%   There exists alpha0 so that any alpha >= alpha0 gives the solution to minimize |X|_* subject to A(X) = b.
%   The right alpha depends on the data, but a typical value is 1 to 10 times an estimate of norm(solution_X,2), the spectral norm of the solution X
%
% How to set opts.stepsize:
%   Too large will cause divergence; too small will cause slow convergence.
%   A conservative value is 1.99/alpha/norm(A)^2, which guarantees convergence.
%   If norm(A)^2 is expensive to compute, one can compute norm(A*A') or norm(A'*A) (choose the easier one!),
%   or use the method in arXiv:1104.2076.
%
% How is the algorithm stopped: see "% stopping" below
%
% http://www.caam.rice.edu/~optimization/linearized_bregman
```
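The stepsize rule above needs norm(A)^2, which may be unavailable when A is only given as a function handle. A few power iterations on the composed map X -> AT(A(X)) estimate it cheaply. The sketch below is an illustrative Python/NumPy translation; the dense matrix `M`, the shapes, and the `alpha` value are hypothetical stand-ins, not part of the package:

```python
import numpy as np

# Hypothetical stand-ins: A maps a 5x6 matrix to R^20 via a dense matrix M.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 30))
A  = lambda X: M @ X.ravel()
AT = lambda y: (M.T @ y).reshape(5, 6)

# Power iteration on X -> AT(A(X)): its largest eigenvalue is norm(A)^2,
# i.e. the largest eigenvalue of A'A.
X = rng.standard_normal((5, 6))
for _ in range(200):
    X = AT(A(X))
    X /= np.linalg.norm(X)
normA_sq = np.sum(X * AT(A(X)))     # Rayleigh quotient at convergence

alpha = 5.0                          # placeholder value, for illustration only
stepsize = 1.99 / alpha / normA_sq   # the conservative rule from the header
```

The same idea works verbatim in MATLAB with function handles; only the `randn`/`norm` calls change.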

## Parameters and defaults

```
if isfield(opts,'stepsize'),  stepsize = opts.stepsize;  else stepsize = 1.99/alpha; end % the default one assumes norm(A)=1
if isfield(opts,'tol'),    tol = opts.tol;     else tol = 1e-4;   end
if isfield(opts,'maxit'),  maxit = opts.maxit; else maxit = 500;  end
if isfield(opts,'maxT'),   maxT = opts.maxT;   else maxT = 1e3;   end
if isfield(opts,'X_ref'),  X_ref = opts.X_ref; else X_ref = []; out.hist_err = [];  end
```
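Since the solver trusts that AT really is the adjoint of A, it is worth verifying `<A(X), y> = <X, AT(y)>` on random inputs before calling it. A minimal check, sketched in Python with a hypothetical A/AT pair (in MATLAB the same test is two lines with `randn` and `sum(sum(...))`):

```python
import numpy as np

# Hypothetical A/AT pair on 3x4 matrices, built from a dense matrix M.
rng = np.random.default_rng(1)
M = rng.standard_normal((15, 12))
A  = lambda X: M @ X.ravel()
AT = lambda y: (M.T @ y).reshape(3, 4)

# If AT is truly the adjoint of A, then <A(X), y> equals the Frobenius
# inner product <X, AT(y)> for all X and y.
X = rng.standard_normal((3, 4))
y = rng.standard_normal(15)
lhs = A(X) @ y
rhs = np.sum(X * AT(y))
print(abs(lhs - rhs))   # ~ machine precision when the pair is consistent
```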

## Data preprocessing and initialization

```
m = length(b);

y = zeros(m,1);
res = b; % residual (b - A(X))
norm_b = norm(b);
```

## Main iterations

```
start_time = tic;

for k = 1:maxit

% --- y-update ---
y = y + stepsize * res; % for k=1, (stepsize + 1/norm(AT(b),2)) can be used instead: X stays 0 until the top singular value of AT(y) exceeds 1

% --- X-update ---
X = alpha * singular_value_softthresholding(AT(y),1);
AX = A(X);
res = b - AX; % res will be used in next y-update

% --- diagnostics, reporting, stopping checks ---
% reporting
out.hist_obj(k) = b.'*y - norm(X,'fro')^2/alpha/2; % dual objective
out.hist_res(k) = norm(res); % residual size |b - A(X)|_2
if ~isempty(X_ref); out.hist_err(k) = norm(X - X_ref,'fro'); end

% stopping
if toc(start_time) > maxT; break; end; % running time check
if k > 1
% dual objective check
obj_diff = out.hist_obj(k) - out.hist_obj(k-1);
if obj_diff < -1e-10; error('dual objective decreases; opts.stepsize is too large!'); end
% primal residual check
if norm(res) < tol*norm_b; break; end
end
end

out.iter = k;
```
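The loop above can be exercised end to end on a tiny problem. The following Python/NumPy sketch is an illustrative translation of the MATLAB code, with a hypothetical Gaussian measurement operator `M` and a rank-1 target; it runs the y-update, X-update, and residual check exactly as written:

```python
import numpy as np

def svt(Z, mu):
    """Singular value soft-thresholding (the shrink step)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - mu, 0.0)) @ Vt

rng = np.random.default_rng(0)
n, m = 6, 30                                      # 6x6 unknown, 30 measurements
M = rng.standard_normal((m, n * n)) / np.sqrt(m)  # Gaussian measurement operator
A  = lambda X: M @ X.ravel()
AT = lambda y: (M.T @ y).reshape(n, n)

X_true = np.outer(rng.standard_normal(n), rng.standard_normal(n))  # rank-1 target
b = A(X_true)

alpha = 10.0 * np.linalg.norm(X_true, 2)             # 10x the spectral norm, per the header
stepsize = 1.99 / (alpha * np.linalg.norm(M, 2)**2)  # conservative stepsize

y, res = np.zeros(m), b.copy()
for k in range(3000):
    y = y + stepsize * res             # y-update: dual gradient ascent
    X = alpha * svt(AT(y), 1.0)        # X-update: scaled singular value shrinkage
    res = b - A(X)
    if np.linalg.norm(res) < 1e-4 * np.linalg.norm(b):
        break
```

Note the initial stagnation: X stays identically zero until the top singular value of AT(y) grows past 1, which is why a larger first step ("kicking") is sometimes used.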
```
end % end of main function
```

## Subfunction for singular value soft-thresholding

```
function res = singular_value_softthresholding(X, mu)

[U,S,V] = svd(X,'econ');   % economy-size SVD
s = max(0, diag(S) - mu);  % soft-threshold the (nonnegative) singular values
res = U*diag(s)*V';        % rebuild the matrix with the shrunken spectrum

end
```
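As a sanity check on the subfunction, the same operation in NumPy (an illustrative translation, not part of the package) applied to a nonnegative diagonal matrix simply shrinks the diagonal, since the diagonal entries are the singular values:

```python
import numpy as np

def singular_value_softthresholding(X, mu):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # economy-size SVD
    return (U * np.maximum(s - mu, 0.0)) @ Vt         # shrink the spectrum

# Thresholding diag(3, 0.5) by mu = 1 subtracts 1 from each singular
# value and floors at 0, giving diag(2, 0).
Y = singular_value_softthresholding(np.diag([3.0, 0.5]), 1.0)
print(np.round(Y, 6))
```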