Problems marked (554) are required for CAAM 554 students; similarly, those marked (454) are required for CAAM 454 students.
function [x,nrmF,iter] = myLBroyden(F,x0,tol,maxiter,L,prt,varargin)
%
% input:  F        = a function from R^n to R^n
%         x0       = initial guess in R^n
%         tol      = tolerance > 0
%         maxiter  = maximum iteration number
%         L        = number of pairs of saved update vectors
%         prt      = switch for screen printout (1 on, 0 off)
%         varargin = parameters required by F
%
% output: x    = solution (root of F)
%         nrmF = a vector containing norm of {F(x_k) | k = 1:iter}
%         iter = iteration number used

The algorithm keeps only L pairs of update vectors, and restarts the inverse update from the identity once the quota of L pairs is reached.
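A minimal sketch of the restart logic, maintaining the inverse approximation H implicitly through stored rank-one pairs (the helpers applyH and applyHt and the storage arrays U, W are my own hypothetical scaffolding, not a required interface):

```matlab
s   = x - xold;                     % step
y   = Fx - Fxold;                   % change in F
Hy  = applyH(U, W, y);              % hypothetical helper: H*y via stored pairs
Hts = applyHt(U, W, s);             % hypothetical helper: H'*s
U   = [U, (s - Hy)/(s'*Hy)];        % inverse ("good") Broyden rank-one pair,
W   = [W, Hts];                     % so that H+ = H + U(:,end)*W(:,end)'
if size(U,2) >= L                   % quota of L pairs reached:
    U = []; W = [];                 % restart the inverse update from identity
end
```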
function F = myHeqn(x,c,N)

where c is a model parameter and N is the number of grid points.
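One common midpoint-rule discretization of the Chandrasekhar H-equation is sketched below (this is the standard form found in textbook treatments; verify it against the definition given in class):

```matlab
function F = myHeqn(x,c,N)
% Discretized H-equation (midpoint rule), one common form:
%   F_i(x) = x_i - 1/(1 - (c/(2N)) * sum_j mu_i*x_j/(mu_i + mu_j)),
% with midpoint nodes mu_i = (i - 1/2)/N.
mu = ((1:N)' - 0.5)/N;                     % midpoint quadrature nodes
M  = mu*ones(1,N);                         % M(i,j) = mu_i
F  = x - 1./(1 - (c/(2*N))*((M./(M + M'))*x));
end
```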
function [x,nrmF,iter] = myBroyden(F,x0,tol,maxiter,prt,varargin)
%
% input:  F        = a function from R^n to R^n
%         x0       = initial guess in R^n
%         tol      = tolerance > 0
%         maxiter  = maximum iteration number
%         prt      = switch for screen printout (1 on, 0 off)
%         varargin = parameters required by F
%
% output: x    = solution (root of F)
%         nrmF = a vector containing norm of {F(x_k) | k = 1:iter}
%         iter = iteration number used

Stop when the norm of F(x) is less than the tolerance tol.
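The core loop might look like the following sketch (my own skeleton, applying the inverse Broyden update via the Sherman-Morrison formula; not the required interface):

```matlab
n  = length(x0);  x = x0;
H  = eye(n);                               % inverse Jacobian approximation
Fx = feval(F, x, varargin{:});
for iter = 1:maxiter
    s = -H*Fx;  x = x + s;                 % quasi-Newton step
    Fnew = feval(F, x, varargin{:});
    y = Fnew - Fx;  Fx = Fnew;
    nrmF(iter) = norm(Fx);
    if nrmF(iter) < tol, break; end
    H = H + ((s - H*y)*(s'*H))/(s'*(H*y)); % inverse Broyden update
end
```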
function [x,iter,hist] = myBFGS(func,x0,tol,maxit,varargin)
%
% Quasi-Newton method using inverse BFGS update
% (during backtracking, make sure you check y'*s > 0)
%
% Usage:
%   [x,iter,hist] = myBFGS(func,x0,tol,maxit,varargin);
%
% INPUT:
%   func  - matlab function for f and g evaluations
%   x0    - starting point (column vector)
%   tol   - stopping tolerance for gradient norm
%   maxit - maximum number of iterations
%
% OUTPUT:
%   x         - computed approximate solution
%   iter      - number of iterations taken
%   hist(1,:) - vector of function values at all iterations
%   hist(2,:) - vector of gradient norms at all iterations

Use a multiple of the identity matrix as the initial inverse Hessian approximation.
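The inverse BFGS update itself is standard and can be sketched as follows (s, y, g follow the usual quasi-Newton notation; H is the current inverse Hessian approximation):

```matlab
s = xnew - x;  y = gnew - g;     % require y'*s > 0 (check during backtracking)
rho = 1/(y'*s);
V = eye(n) - rho*(s*y');
H = V*H*V' + rho*(s*s');         % H+ = (I - rho*s*y') H (I - rho*y*s') + rho*s*s'
p = -H*gnew;                     % next quasi-Newton search direction
```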
X = argmin { 0.25 ||XX^T - A||_F^2 / ||A||_F^2 : X is n by k }.

Note that your BFGS solver admits only vector-valued variables.
x_{k+1} = x_k + (1 - eta_k)*p_k, for some eta_k in [0, eta_bar], 0 < eta_bar < 1,

where p_k is the Newton direction at x_k.
[x,iter] = MyGN(func,x,tol,maxiter,prt,varargin);

where func is a function name that returns [r,J], prt is a switch for printing iteration info or not, and varargin is a variable list of input arguments to be used by function func. The stopping criterion should be norm(g) < tol, where g = J'*r is the gradient vector.
Download and unzip the set of codes in nls.zip. Download the instructor's codes ZyGN.p and ZyLM.p. Study and run the test code testNLS.m. Without input, it runs the instructor's codes (CodeRange = 3:6) by default. We included 33 test problems, as described in the paper MGH.pdf. The test problems have already been coded. (A program FdNewton.m is included that implements Newton's method for solving nonlinear systems, using a finite-difference scheme to approximate the Jacobian. You may start your programming by studying and modifying this program.)
Submit your code and printout from the test program testNLS.m for the following two runs:
testNLS(1, 1.e-4,[1 3]);   % easy run
testNLS(10,1.e-6,[1 3]);   % hard run
X = argmin { 0.25 ||XX^T - A||_F^2 : X is n by k }.

This is a nonlinear least squares problem. A Gauss-Newton method (without line search) takes the following form:
Y = X/(X'*X); Z = A*Y; X = Z - X*((Y'*Z-I)/2)

where I is the identity. Write a function to implement the above scheme
[X,iter] = myGNex(A,k,tol,maxit)

in which the stopping rule is that either the relative change (in Frobenius norm) between two consecutive iterates falls below the tolerance, or maxit is reached.
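Putting the scheme and the stopping rule together, a sketch of myGNex could read as follows (the random starting point is my own choice, not specified by the assignment):

```matlab
function [X,iter] = myGNex(A,k,tol,maxit)
n = size(A,1);  X = randn(n,k);  I = eye(k);   % assumed random start
for iter = 1:maxit
    Xold = X;
    Y = X/(X'*X);  Z = A*Y;
    X = Z - X*((Y'*Z - I)/2);                  % Gauss-Newton fixed-point step
    if norm(X - Xold,'fro')/norm(Xold,'fro') < tol, break; end
end
end
```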
function [x,iter,nf] = myCGD(func,method,x0,tol,maxit,varargin)

(where method is a dummy parameter for now) to minimize a function defined by the function handle func starting from initial guess x0. Use back-tracking line search and choose your own parameters. The outputs include an approximate solution, the iteration number taken, and the number of function evaluations taken. You should mimic the printout format as in the instructor's code.
Interface with func as follows:
[f,g] = feval(func,x,varargin{:});  % function and gradient
f = feval(func,x,varargin{:});      % function value only

The stopping criterion is either
crit := norm(g)/(1 + abs(f)) < tol;

or the maximum iteration number maxit is reached.
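A back-tracking line search satisfying the Armijo condition might be sketched as follows (the parameter values are my own choices; pick and document your own):

```matlab
alpha = 1;  c1 = 1e-4;  shrink = 0.5;      % assumed line-search parameters
while true
    f = feval(func, x + alpha*d, varargin{:});  nf = nf + 1;
    if f <= f0 + c1*alpha*(g'*d) || alpha < 1e-12, break; end
    alpha = shrink*alpha;                  % Armijo condition failed: shrink step
end
x = x + alpha*d;
```

Here f0 and g are the function value and gradient at the current x, and d is a descent direction.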
Download the test scripts testDenoise.m and testDeblur.m, the functions TVL2N.m and TVL2B.m, and the instructor's code yzCGD.p. You can run the test scripts without your codes in place.
Run the test scripts with your code (you will need image processing toolbox). Submit your code and printout including graphs, and a short report (typed) explaining what you observe in these experiments.
function [x,iter,nf] = myNewton(func,x0,tol,maxit,varargin)

Use the same stopping rule as above. The test function is the quartic f(x) = (x'*A*x)^2/4 + a*(sum(x) - n)^2/2.
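For this test function the gradient and Hessian can be worked out by hand; the following is my own derivation (assuming A symmetric), which you should verify:

```matlab
% With q = x'*A*x, r = sum(x) - n, e = ones(n,1):
%   f = q^2/4 + a*r^2/2
%   g = q*(A*x) + a*r*e
%   H = 2*(A*x)*(A*x)' + q*A + a*(e*e')
q = x'*(A*x);  r = sum(x) - n;  e = ones(n,1);
g = q*(A*x) + a*r*e;
H = 2*(A*x)*(A*x)' + q*A + a*(e*e');
```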
(b) Modify your myCGD.m code to also allow the use of nonlinear CG methods, where method = 1,2,3,4 will correspond to GD, CG-PR, CG-PR+ and CG-FR, respectively. If a search direction is non-descent, then use the steepest descent direction instead.
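The beta formulas for the three CG variants are standard (g and gold denote the current and previous gradients, d the previous search direction):

```matlab
beta_FR  = (g'*g)/(gold'*gold);            % method 4: Fletcher-Reeves
beta_PR  = (g'*(g - gold))/(gold'*gold);   % method 2: Polak-Ribiere
beta_PRp = max(beta_PR, 0);                % method 3: PR+
d = -g + beta_PRp*d;                       % e.g. PR+; choose beta by 'method'
if g'*d >= 0, d = -g; end                  % non-descent: use steepest descent
```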
Download the instructor's codes yzNewton.p and yzAquartic.p, and the test script testAquartic.m, which can be run without your code. When your codes are ready, run the test script for n = 1000.
Submit your codes and the printout including graphs. Submit a short write-up (typed) explaining your observations in these experiments, such as convergence rates, with printout and other necessary supporting evidence.
Download the test scripts testGM2.m and testGM2a.m, and the instructor's code yzGMRES2.p. You can run these scripts without your code. When your code is ready, run testGM2 for m = 100 or larger (as large as you see fit), and also run testGM2a. Submit a copy of your program, the printouts (graph included) from your runs, and a short write-up (typed) explaining what you observe in the experiment (testGM2a).
Download the test script testGM1.m, the function fdm4pde1.m for matrix generation, and the instructor's GMRES code yzGMRES1.p. You can run the test script with or without your code. After you have written your code, start running the test script for a small m (say, m = 40) to debug and improve your code.
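The heart of GMRES is the Arnoldi process; one step with modified Gram-Schmidt can be sketched as follows (V holds the orthonormal basis, H the upper Hessenberg matrix):

```matlab
w = A*V(:,j);
for i = 1:j
    H(i,j) = V(:,i)'*w;
    w = w - H(i,j)*V(:,i);                 % modified Gram-Schmidt
end
H(j+1,j) = norm(w);
V(:,j+1) = w/H(j+1,j);
% GMRES then solves min_y ||beta*e1 - H(1:j+1,1:j)*y||
% and sets x = x0 + V(:,1:j)*y.
```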
Submit your code and printout for runs with m = 100 (or slightly larger/smaller so your computer can handle it comfortably). Write a paragraph to summarize your observations.
Download the test script testGM1p.m and the instructor's code yzGMRES1p.p. You can run the script without your code. When your code is ready, run the test script for n = 1200 (but use smaller values in the debugging/testing phase). Submit a copy of your program, the printout (graph not necessary), and a short write-up (typed) explaining what you observe in this experiment.
Write a Matlab function:
Download the test script testSOR.m and the instructor's code zSOR.p. You can run testSOR(1) or testSOR(2), corresponding to two different matrices (the default is 1), with or without your code.
Stage 1 runs: After you have debugged your code, run testSOR.m with small N values (say, N = 50) and different ranges of omega within (0,2) to narrow down a good omega value (say, omega range = 0.8:0.1:1.9 and then 1.91:0.01:1.99).
Stage 2 runs: Run testSOR(1) and testSOR(2), using the estimated optimal omega values that you have found, with much larger N values (as long as your code/computer could comfortably handle). For matrix 1, try N >= 400; and for matrix 2, try N >= 800 or larger.
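For reference, one SOR sweep in component form can be sketched as follows (a plain loop, written for clarity rather than speed):

```matlab
for i = 1:n
    sigma = A(i,1:i-1)*x(1:i-1) + A(i,i+1:n)*x(i+1:n);
    x(i)  = (1 - omega)*x(i) + omega*(b(i) - sigma)/A(i,i);
end
```

Note that omega = 1 recovers Gauss-Seidel.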
Submission:
--- a copy of your program (also upload code only to OwlSpace);
--- selected screen printout (and plots) for Stage 1 runs;
--- selected screen printout for Stage 2 runs;
--- a paragraph to summarize your observations and thoughts.
Prove in detail that GS converges for any initial guess if the matrix A is symmetric positive definite.
Exercises 4 and 5 in the lecture notes on Stationary Iterative Methods.
Write two Matlab functions:
to implement the Jacobi and the Gauss-Seidel methods, respectively, where maxit is the maximum number of iterations. Use the zero vector to start. The stopping criterion is that the relative residual in 2-norm, norm(b-A*x)/norm(b), becomes less than tol. The output info is a character string indicating the status of termination: either "Converged at iteration ?" or "Maxit reached". The output rres is a vector containing the relative residual norms, norm(b-A*x)/norm(b), at every iteration, counting from one up.
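In matrix-splitting form, with A = D - Lw - Uw (D the diagonal part, Lw and Uw the negated strict lower and upper parts), the two iterations can be sketched as:

```matlab
D  = diag(diag(A));  Lw = -tril(A,-1);  Uw = -triu(A,1);
x  = zeros(size(b));  nb = norm(b);
for k = 1:maxit
    x = D \ (b + (Lw + Uw)*x);     % Jacobi; GS: x = (D - Lw) \ (b + Uw*x)
    rres(k) = norm(b - A*x)/nb;
    if rres(k) < tol, break; end
end
```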
Download the instructor's codes zJacobi.p, zGS.p and the test script test1.m. Read the test script.
Submission:
--- For method = 'Jacobi', run test1.m for N = 50, 100, 200.
--- For method = 'GS', run test1.m for N = 50, 100, 200.
--- Submit screen output and Figure 1 of every run.
--- Submit a copy of your programs (also upload code only to OwlSpace).
Prove in detail that the Gauss-Seidel method converges from any initial guess if the matrix A is column strictly diagonally dominant.
Prove that the scheme "x ← x + α*(b-Ax)" converges from any initial guess if the matrix A is symmetric positive definite and α>0 is less than 2 divided by the maximum eigenvalue of A.
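A sketch of the argument, in the notation of the statement: with e_k = x_k - x*, the scheme gives

```latex
e_{k+1} \;=\; x_{k+1} - x^{\star} \;=\; (I - \alpha A)\, e_k .
```

Since A is symmetric positive definite, the eigenvalues of I - αA are 1 - αλ_i, and 0 < α < 2/λ_max(A) gives 1 - αλ_i in (-1, 1) for every i, hence ρ(I - αA) < 1 and e_k → 0 for any e_0.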
Prove that in any matrix norm ||.||, ||A^k||^(1/k) → ρ(A) as k goes to infinity.