Problems marked with (554) will be required only of CAAM 554 students (similarly for those marked with (454)).
Y = X/(X'*X);                  % Y = X*inv(X'*X)
P = (I - 0.5*X*Y')*(A*Y - X);  % update direction (I is the n-by-n identity)
X = X + P;
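Not part of the assignment, but the update above can be sanity-checked outside MATLAB. Below is a plain-Python sketch of the same scheme for the special case k = 1 (so X'*X is a scalar), applied to an arbitrary rank-one test matrix of my choosing; the MATLAB deliverable is unchanged.

```python
# Plain-Python sketch of  Y = X/(X'*X);  P = (I - 0.5*X*Y')*(A*Y - X);  X = X + P
# for k = 1, where X is a column vector and X'*X is a scalar.
# The matrix A below is an arbitrary rank-one example, not from the assignment.

def gn_update(A, X):
    n = len(X)
    s = sum(x * x for x in X)                                  # X'*X (scalar, k = 1)
    Y = [x / s for x in X]                                     # Y = X/(X'*X)
    AY = [sum(A[i][j] * Y[j] for j in range(n)) for i in range(n)]
    w = [AY[i] - X[i] for i in range(n)]                       # A*Y - X
    c = sum(Y[i] * w[i] for i in range(n))                     # Y'*(A*Y - X)
    P = [w[i] - 0.5 * X[i] * c for i in range(n)]              # (I - 0.5*X*Y')*(A*Y - X)
    return [X[i] + P[i] for i in range(n)], P

A = [[1.0, 2.0], [2.0, 4.0]]      # rank-one: A = v*v' with v = (1, 2)
X = [3.0, 0.0]
for _ in range(20):
    X, P = gn_update(A, X)

# residual ||X*X' - A||_F should now be tiny
res = sum((X[i] * X[j] - A[i][j]) ** 2 for i in range(2) for j in range(2)) ** 0.5
```

On this rank-one example the iteration converges quadratically, which is a quick way to convince yourself the update is typed in correctly.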
--- (10 pts) Write a function to implement the above scheme
[X,iter] = myGN4real(A,k,tol,maxit)
that solves the NLS problem. Optimize your code as much as you can. Stop the iterations when relchg = norm(P,'fro')/norm(X,'fro') is less than tol. Mimic the on-screen output format of the instructor's code.
function [x,iter,hist] = myBFGS(func,x0,tol,maxit,varargin)
%
% Quasi-Newton method using inverse BFGS update
% (if you do backtracking, make sure you check y'*s > 0)
%
% Usage:
%   [x,iter,hist] = myBFGS(func,x0,tol,maxit,varargin);
%
% INPUT:
%   func  - matlab function for f and g evaluations
%   x0    - starting point (column vector)
%   tol   - stopping tolerance for gradient norm
%   maxit - maximum number of iterations
%
% OUTPUT:
%   x         - computed approximate solution
%   iter      - number of iterations taken
%   hist(1,:) - vector of function values at all iterations
%   hist(2,:) - vector of gradient norms at all iterations

--- Use a multiple of the identity matrix as the initial inverse Hessian approximation.
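For orientation (not a substitute for the MATLAB code you must submit), here is a minimal inverse-BFGS loop in plain Python. It follows the header above: H starts as a multiple of the identity, a backtracking line search is used, and the update is skipped when y'*s <= 0. The quadratic test function is an arbitrary example of mine, not from the assignment.

```python
# Minimal inverse-BFGS sketch; the assignment deliverable is the MATLAB myBFGS.

def func(x):                                        # f and g of a toy quadratic
    f = (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
    g = [2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)]
    return f, g

def bfgs(func, x0, tol=1e-8, maxit=100):
    n = len(x0)
    x = list(x0)
    H = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    f, g = func(x)
    for it in range(1, maxit + 1):
        gnorm = sum(gi * gi for gi in g) ** 0.5
        if gnorm < tol:
            break
        p = [-sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]  # p = -H*g
        t, fx = 1.0, f                               # backtracking (Armijo)
        while True:
            xn = [x[i] + t * p[i] for i in range(n)]
            fn, gn = func(xn)
            if fn <= fx + 1e-4 * t * sum(g[i] * p[i] for i in range(n)) or t < 1e-12:
                break
            t *= 0.5
        s = [xn[i] - x[i] for i in range(n)]
        y = [gn[i] - g[i] for i in range(n)]
        ys = sum(y[i] * s[i] for i in range(n))
        if ys > 1e-12:                               # only update when y'*s > 0
            rho = 1.0 / ys
            Hy = [sum(H[i][j] * y[j] for j in range(n)) for i in range(n)]
            yHy = sum(y[i] * Hy[i] for i in range(n))
            for i in range(n):   # H <- (I - rho*s*y')*H*(I - rho*y*s') + rho*s*s'
                for j in range(n):
                    H[i][j] += (-rho * (s[i] * Hy[j] + Hy[i] * s[j])
                                + rho * rho * yHy * s[i] * s[j] + rho * s[i] * s[j])
        x, f, g = xn, fn, gn
    return x, it

x, it = bfgs(func, [0.0, 0.0])   # minimizer is (1, -2)
```

The expanded update used in the inner double loop is algebraically identical to the standard product form of the inverse BFGS formula.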
X = argmin { 0.25*||X*X' - A||_F^2 / ||A||_F^2 : X is n-by-k }
--- Write a function [f,g] = mylork(x,n,k,A,nrmA2)
where the small x is the vectorized big X
and nrmA2 = norm(A,'fro')^2.

G = X*(X'*X) - A*X;
X = X - a * G / (X'*X);
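A quick way to debug mylork is to check that the gradient formula G = X*(X'*X) - A*X matches finite differences of f. The plain-Python check below does this for the k = 1 case on an arbitrary small symmetric matrix of my choosing (the actual function must be in MATLAB).

```python
# Finite-difference check that g = (X*(X'*X) - A*X)/||A||_F^2 is the gradient of
# f(X) = 0.25*||X*X' - A||_F^2 / ||A||_F^2, here for k = 1 and an arbitrary A.

A = [[2.0, 1.0], [1.0, 3.0]]
nrmA2 = sum(a * a for row in A for a in row)          # ||A||_F^2

def f(x):
    n = len(x)
    return 0.25 * sum((x[i] * x[j] - A[i][j]) ** 2
                      for i in range(n) for j in range(n)) / nrmA2

def g(x):
    n = len(x)
    s = sum(xi * xi for xi in x)                      # X'*X (scalar, k = 1)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return [(x[i] * s - Ax[i]) / nrmA2 for i in range(n)]

x, h = [0.7, -0.4], 1e-6
grad, fds = g(x), []
for i in range(2):
    xp = list(x); xp[i] += h
    xm = list(x); xm[i] -= h
    fds.append((f(xp) - f(xm)) / (2 * h))             # central difference
# fds[i] and grad[i] should agree to roughly 1e-8
```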
--- Write a function to implement the above scheme
[X,iter] = myGNes(A,k,tol,maxit)
that solves the NLS problem (you can assume norm(A,'fro') = 1).
x_{k+1} = x_k + (1 - η_k)*p_k,  for some η_k in [0, η], 0 < η < 1, where p_k is the Newton direction at x_k.
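To see what the damping factor does, here is a one-dimensional plain-Python illustration of the step x_{k+1} = x_k + (1 - η_k)*p_k with a fixed η_k = η; the cubic equation is an arbitrary example of mine.

```python
# Damped Newton step x_{k+1} = x_k + (1 - eta)*p_k in 1-D,
# applied to the arbitrary test equation x^3 = 2.

def damped_newton(x0, eta=0.1, maxit=200, tol=1e-10):
    x = x0
    for k in range(maxit):
        r = x ** 3 - 2.0            # residual of x^3 = 2
        if abs(r) < tol:
            break
        p = -r / (3.0 * x * x)      # Newton direction
        x = x + (1.0 - eta) * p     # damped step
    return x

root = damped_newton(1.5)           # converges to 2^(1/3), approx. 1.2599
```

With η bounded away from 1, the damping costs only a constant factor per step but can globalize the iteration.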
[x,iter] = MyGN(func,x,tol,maxiter,prt,varargin);
where func is a function name that returns [r,J],
prt is a switch for printing iteration info or not,
and varargin is a variable list of input arguments
to be used by function func. The stopping criterion
should be norm(g) < tol, where g = J'*r is the
gradient vector.
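The overall shape of such a Gauss-Newton loop, with the stopping rule norm(g) < tol for g = J'*r, can be sketched as below. This is plain Python on an arbitrary 2-by-2 residual of my choosing (with a square nonsingular Jacobian, Gauss-Newton reduces to Newton's method); your MyGN must of course be the MATLAB interface described above.

```python
# Gauss-Newton sketch with stopping rule ||g|| < tol, g = J'*r.
# The residual below is an arbitrary test problem; the normal equations
# (J'*J)*p = -g are solved directly since they are only 2x2.

def func(x):                                  # returns [r, J]
    r = [x[0] + x[1] - 3.0, x[0] * x[1] - 2.0]
    J = [[1.0, 1.0], [x[1], x[0]]]
    return r, J

def my_gn(func, x, tol=1e-10, maxit=50):
    for it in range(1, maxit + 1):
        r, J = func(x)
        g = [J[0][i] * r[0] + J[1][i] * r[1] for i in range(2)]   # g = J'*r
        if (g[0] ** 2 + g[1] ** 2) ** 0.5 < tol:
            break
        a = J[0][0] ** 2 + J[1][0] ** 2       # entries of J'*J
        b = J[0][0] * J[0][1] + J[1][0] * J[1][1]
        c = J[0][1] ** 2 + J[1][1] ** 2
        det = a * c - b * b
        p = [(-c * g[0] + b * g[1]) / det,    # p = -(J'*J)\g by Cramer's rule
             (b * g[0] - a * g[1]) / det]
        x = [x[0] + p[0], x[1] + p[1]]
    return x, it

x, it = my_gn(func, [2.5, 0.5])               # converges to the root (2, 1)
```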
Download and unzip the set of codes in nls.zip. Download the instructor's codes ZyGN.p and ZyLM.p. Study and run the test code testNLS.m. Without input arguments, it runs the instructor's codes (CodeRange = 3:6) by default. We included 33 test problems, as described in the paper MGH.pdf. The test problems have already been coded. (A program FdNewton.m is included that implements Newton's method for solving nonlinear systems, using a finite-difference scheme to approximate the Jacobian. You may start your programming by studying and modifying this program.)
Submit your code and printout from the test program testNLS.m for the following two runs:
testNLS(1, 1.e-4,[1 3]); (easy run)
testNLS(10,1.e-6,[1 3]); (hard run)
function [x,iter,nf] = myCGD(func,method,x0,tol,maxit,varargin)
(where method is a dummy parameter for now) to minimize a function defined by the function handle func, starting from the initial guess x0. Use a backtracking line search and choose your own parameters. The outputs are an approximate solution, the number of iterations taken, and the number of function evaluations taken. You should mimic the printout format of the instructor's code.
You interface func by
[f,g] = feval(func,x,varargin{:}); % function and gradient
f = feval(func,x,varargin{:}); % function value only
The stopping criterion is either
crit := norm(g)/(1 + abs(f)) < tol;
or the maximum iteration number maxit is reached.
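The control flow (backtracking plus the crit stopping test) can be sketched as below, in plain Python on an arbitrary quadratic of my choosing; the line-search constants 1e-4 and 0.5 are typical choices, not prescribed by the assignment.

```python
# Steepest descent with backtracking and the stopping test
# crit = norm(g)/(1 + abs(f)) < tol.  The quadratic is an arbitrary example.

def func(x):
    f = 0.5 * (x[0] ** 2 + 25.0 * x[1] ** 2)
    g = [x[0], 25.0 * x[1]]
    return f, g

def my_gd(func, x0, tol=1e-8, maxit=5000):
    x, nf = list(x0), 0
    for it in range(1, maxit + 1):
        f, g = func(x); nf += 1
        gnorm = (g[0] ** 2 + g[1] ** 2) ** 0.5
        if gnorm / (1.0 + abs(f)) < tol:       # crit < tol
            break
        t = 1.0
        while True:                            # backtracking (Armijo)
            xn = [x[0] - t * g[0], x[1] - t * g[1]]
            fn, _ = func(xn); nf += 1          # function value only
            if fn <= f - 1e-4 * t * gnorm ** 2 or t < 1e-12:
                break
            t *= 0.5
        x = xn
    return x, it, nf

x, it, nf = my_gd(func, [1.0, 1.0])
```

Note the two kinds of func calls mirror the two interface forms above: the loop head needs [f,g], while the line search only needs f.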
Download the test scripts testDenoise.m and testDeblur.m, the functions TVL2N.m and TVL2B.m, and the instructor's code yzCGD.p. You can run the test scripts without your codes in place.
Run the test scripts with your code (you will need the Image Processing Toolbox). Submit your code and printout, including graphs, and a short report (typed) explaining what you observe in these experiments.
function [x,iter,nf] = myNewton(func,x0,tol,maxit,varargin). Use the same stopping rule as above. The test function is the quartic f(x) = (x'*A*x)^2/4 + a*(sum(x) - n)^2/2.
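For symmetric A, the gradient of this quartic works out to g = (x'*A*x)*(A*x) + a*(sum(x) - n)*ones(n,1), which is worth verifying numerically before feeding it to Newton's method. Below, a plain-Python finite-difference check on an arbitrary 2-by-2 example of mine.

```python
# Finite-difference check of the gradient of
# f(x) = (x'*A*x)^2/4 + a*(sum(x) - n)^2/2, namely
# g = (x'*A*x)*(A*x) + a*(sum(x) - n)*ones, for symmetric A (arbitrary example).

A = [[2.0, 0.5], [0.5, 1.0]]
a, n = 3.0, 2

def f(x):
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    q = sum(x[i] * Ax[i] for i in range(n))                 # x'*A*x
    return q * q / 4.0 + a * (sum(x) - n) ** 2 / 2.0

def g(x):
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    q = sum(x[i] * Ax[i] for i in range(n))
    return [q * Ax[i] + a * (sum(x) - n) for i in range(n)]

x, h = [0.3, -0.8], 1e-6
grad, fds = g(x), []
for i in range(n):
    xp = list(x); xp[i] += h
    xm = list(x); xm[i] -= h
    fds.append((f(xp) - f(xm)) / (2 * h))
# fds[i] and grad[i] should agree to roughly 1e-8
```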
(b) Modify your myCGD.m code to also allow the use of nonlinear CG methods, where method = 1,2,3,4 will correspond to GD, CG-PR, CG-PR+ and CG-FR, respectively. If a search direction is non-descent, then use the steepest descent direction instead.
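The beta formulas behind methods 2-4, plus the non-descent fallback, can be written compactly; here is a plain-Python version for reference (the modification itself goes into your MATLAB myCGD.m).

```python
# Nonlinear-CG beta formulas (g = current gradient, gold = previous gradient).

def cg_beta(method, g, gold):
    gg = sum(x * x for x in g)
    gogo = sum(x * x for x in gold)
    gdiff = sum(g[i] * (g[i] - gold[i]) for i in range(len(g)))
    if method == 2:                 # CG-PR (Polak-Ribiere)
        return gdiff / gogo
    if method == 3:                 # CG-PR+: max(0, beta_PR)
        return max(0.0, gdiff / gogo)
    if method == 4:                 # CG-FR (Fletcher-Reeves)
        return gg / gogo
    return 0.0                      # method == 1: GD (no CG memory)

def cg_direction(method, g, gold, pold):
    beta = cg_beta(method, g, gold)
    p = [-g[i] + beta * pold[i] for i in range(len(g))]
    if sum(g[i] * p[i] for i in range(len(g))) >= 0.0:   # non-descent:
        p = [-gi for gi in g]                            # fall back to -g
    return p
```

Note that PR+ is exactly PR with negative betas clipped to zero, which is what makes the descent safeguard trigger less often for it.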
Download the instructor's codes yzNewton.p and yzAquartic.p, and the test script testAquartic.m, which can be run without your code. When your codes are ready, run the test script for n = 1000.
Submit your codes and printout including graphs, and a short write-up (typed) explaining what you observe in these experiments, such as convergence rates, supported by the printout and any other necessary evidence.
Download the test script testGM1.m, the function fdm4pde1.m for matrix generation, and the instructor's GMRES code yzGMRES1.p. You can run the test script with or without your code. After you have written your code, start running the test script for a small m (say, m = 40) to debug and improve your code.
Submit your code and printout for runs with m = 80 or m = 100 (or even larger if you could handle it comfortably). Write a paragraph to summarize your observations.
Download the test script testGM1p.m and the instructor's code yzGMRES1p.p. You can run the script without your code. When your code is ready, run the test script for n = 500 or larger (but use smaller values in the testing phase). Submit a copy of your program, the printout (graph not necessary), and a short write-up (typed) explaining what you observe in this experiment.
Download the test scripts testGM2.m and testGM2a.m, and the instructor's code yzGMRES2.p. You can run these scripts without your code. When your code is ready, run testGM2 for m = 100 or larger (as large as you see fit), and also run testGM2a. Submit a copy of your program, the printouts (graphs included) from your runs, and a short write-up (typed) explaining what you observe in the experiment (testGM2a).
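As a structural reference for the GMRES problems, here is a plain-Python sketch of unrestarted GMRES: Arnoldi with modified Gram-Schmidt, followed by the small least-squares problem min ||beta*e1 - H*y||, which I solve via its normal equations purely for brevity (the instructor's codes may well use Givens rotations instead; that is an assumption, not something stated here). The 3-by-3 test system is arbitrary.

```python
# Unrestarted GMRES sketch: Arnoldi + small least-squares solve.

def dot(u, v): return sum(a * b for a, b in zip(u, v))
def matvec(A, v): return [dot(row, v) for row in A]

def gmres(A, b, m):
    n = len(b)
    beta = dot(b, b) ** 0.5
    Q = [[bi / beta for bi in b]]                   # Arnoldi basis of K_m(A, b)
    H = [[0.0] * m for _ in range(m + 1)]           # (m+1)-by-m Hessenberg
    k = m
    for j in range(m):
        w = matvec(A, Q[j])
        for i in range(j + 1):                      # modified Gram-Schmidt
            H[i][j] = dot(w, Q[i])
            w = [w[t] - H[i][j] * Q[i][t] for t in range(n)]
        H[j + 1][j] = dot(w, w) ** 0.5
        if H[j + 1][j] < 1e-12:                     # happy breakdown
            k = j + 1
            break
        Q.append([wt / H[j + 1][j] for wt in w])
    # solve min ||beta*e1 - H(:,1:k)*y|| via its k-by-k normal equations
    M = [[sum(H[t][i] * H[t][j] for t in range(k + 1)) for j in range(k)]
         for i in range(k)]
    rhs = [H[0][i] * beta for i in range(k)]        # H'*(beta*e1)
    for i in range(k):                              # Gaussian elimination
        for r in range(i + 1, k):
            fac = M[r][i] / M[i][i]
            for c in range(i, k):
                M[r][c] -= fac * M[i][c]
            rhs[r] -= fac * rhs[i]
    y = [0.0] * k
    for i in range(k - 1, -1, -1):                  # back substitution
        y[i] = (rhs[i] - sum(M[i][j] * y[j] for j in range(i + 1, k))) / M[i][i]
    return [sum(y[j] * Q[j][i] for j in range(k)) for i in range(n)]  # x = Q*y

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = gmres(A, b, 3)
# with m = n = 3 steps the Krylov space is all of R^3, so r = b - A*x is near zero
```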
Download the test script test_SOR.m and the instructor's code zSOR.p. You can run test_SOR(1) or test_SOR(2), corresponding to 2 different matrices with default value at 1, with or without your code.
After you have debugged your code, run "test_SOR.m" with small N values and different ranges of omega to narrow down a good omega value (for example, you might try N = 50 and omega range = 0.8:0.1:1.9, and then 1.91:0.01:1.99 for matrix 1).
Run test_SOR(1) and test_SOR(2), using the best omega values that you have estimated, with much larger N values (as large as your code/computer could comfortably deal with).
Submit a copy of your program, a set of plots for initial runs, and the screen printout for your final runs. Write a paragraph to summarize your observations and thoughts.
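For reference, an SOR sweep in plain Python (the submitted code must be MATLAB): omega = 1 reduces to Gauss-Seidel, and the omega sweep described above amounts to rerunning this with different values and comparing iteration counts. The small system here is an arbitrary diagonally dominant example, not one of the test matrices.

```python
# SOR iteration for A*x = b with relaxation parameter omega
# (omega = 1 is Gauss-Seidel).  Arbitrary small SPD test system.

def sor(A, b, omega, tol=1e-10, maxit=10000):
    n = len(b)
    x = [0.0] * n
    nb = sum(bi * bi for bi in b) ** 0.5
    for it in range(1, maxit + 1):
        for i in range(n):                          # one SOR sweep, in place
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - sigma) / A[i][i]
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol * (1.0 + nb):
            break
    return x, it

A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
x1, it1 = sor(A, b, 1.0)        # Gauss-Seidel
x2, it2 = sor(A, b, 1.1)        # slightly over-relaxed
```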
to implement the Jacobi and Gauss-Seidel methods, respectively, where maxit is the maximum number of iterations. Use the zero vector as the starting point. The stopping criterion is that the residual 2-norm, norm(b-A*x), becomes less than tol*(1 + norm(b)). The output info is a character string indicating the termination status: either "Converged at iteration ?" or "Maxit reached". The output res is a vector containing the residual norms, norm(b-A*x), for all iterations, counting from iteration 1.
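The specification above (zero starting vector, the tol*(1 + norm(b)) test, the info string, and the per-iteration res history) can be sketched as follows in plain Python for the Jacobi case; Gauss-Seidel differs only in using the already-updated entries of x inside the sweep. The test system is an arbitrary diagonally dominant example of mine.

```python
# Jacobi iteration matching the interface described above:
# zero start, stop when norm(b - A*x) < tol*(1 + norm(b)),
# info string for termination status, res = residual norm at every iteration.

def jacobi(A, b, tol=1e-10, maxit=1000):
    n = len(b)
    x = [0.0] * n
    nb = sum(bi * bi for bi in b) ** 0.5
    res, info = [], "Maxit reached"
    for it in range(1, maxit + 1):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]                     # full sweep from old x
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        rn = sum(ri * ri for ri in r) ** 0.5
        res.append(rn)                              # history, counting from 1
        if rn < tol * (1.0 + nb):
            info = "Converged at iteration %d" % it
            break
    return x, info, res

A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
x, info, res = jacobi(A, b)
```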
Download the instructor's codes zJacobi.p, zGS.p and the test script test1.m. Read the test script.
Run test1.m for N = 50, 100, 200 (or larger if so desired), and with both method = 'Jacobi' and method = 'GS'. Submit your programs and output from these runs.