Richard G. Carter
We consider a vector-valued function g(x) with n components whose Jacobian is symmetric and sparse. Such functions often arise, for instance, in numerical optimization, where g is the gradient of some objective function f(x), so that the Jacobian of g(x) is the Hessian of f(x). In many such applications one can construct extremely efficient algorithms by exploiting the sparsity structure of the problem, provided this pattern is known a priori. Unfortunately, determining such sparsity structures by hand is often difficult and error-prone. If one suspects a mistake has been made, or if g(x) is a "black box" so that the true structure is completely unknown, one often has no alternative but to compute the entire matrix by finite differences, a prohibitively expensive task for large problems.
We show that it is possible to determine symmetric sparsity patterns numerically using a relatively small number of g(x) evaluations. Numerical results are shown for n up to 100,000 in which all nonzeros in the Jacobian are correctly identified in about one-hundredth of the time required to estimate the sparsity structure by a full finite-difference calculation. When a good initial guess for the sparsity structure is available, numerical results are presented for n up to 500,000 in which all missing nonzeros are correctly located almost five thousand times faster than would be possible with a full finite-difference calculation.
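The general flavor of such techniques, probing a black-box g(x) with finite differences along chosen direction vectors and narrowing down which columns can hold nonzeros, can be illustrated with a small sketch. The column-bisection strategy below is an illustrative stand-in, not the paper's algorithm; the function names, the random probe weights (used to make accidental cancellation across a column group unlikely), and the tolerances are all assumptions.

```python
import numpy as np

def probe(g, x, cols, h, rng):
    """One extra g evaluation: finite difference along a vector supported
    on `cols`, approximating J @ v.  Random positive weights in [1, 2]
    make exact cancellation across the column group unlikely (assumption)."""
    v = np.zeros_like(x)
    v[cols] = rng.uniform(1.0, 2.0, size=len(cols))
    return (g(x + h * v) - g(x)) / h

def find_nonzeros(g, x, rows, cols, tol=1e-4, h=1e-6, rng=None):
    """Recursively bisect the column set.  If the probe entry for row i is
    numerically zero, row i has no nonzero in any column of `cols` and is
    pruned; otherwise the column set is split and both halves are searched.
    Returns the detected set of (row, col) index pairs."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = probe(g, x, cols, h, rng)
    active = [i for i in rows if abs(d[i]) > tol]
    if not active:
        return set()
    if len(cols) == 1:
        return {(i, cols[0]) for i in active}
    mid = len(cols) // 2
    return (find_nonzeros(g, x, active, cols[:mid], tol, h, rng)
            | find_nonzeros(g, x, active, cols[mid:], tol, h, rng))
```

Each row's probes cost roughly log n evaluations per detected nonzero rather than n, which conveys why a sparse pattern can be recovered far more cheaply than a full finite-difference pass; a hypothetical linear map g(x) = A x with sparse symmetric A makes an easy check, since its Jacobian is A itself.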
A gnuzipped PostScript copy of this paper (125K) is available for download.
As a courtesy to the author, please send a brief email note saying that
you have retrieved the MCS-0992 paper. If you are feeling really
energetic, you can also give a brief description of your proposed
application.