Using Simplex Gradients of Nonsmooth Functions in Direct Search Methods
A. L. Custodio, J.E. Dennis Jr., & L.N. Vicente
It has been shown recently that the efficiency of direct search methods that use opportunistic polling in positive spanning directions can be improved significantly by reordering the poll directions according to descent indicators built from simplex gradients.
The purpose of this paper is twofold. First, we analyze the properties of simplex gradients of nonsmooth functions in the context of direct search methods like the Generalized Pattern Search (GPS) and the Mesh Adaptive Direct Search (MADS), for which there exists a convergence analysis in the nonsmooth setting. Our analysis does not require continuous differentiability and can be seen as an extension of the accuracy properties of simplex gradients known for smooth functions. Second, we test the use of simplex gradients when pattern search is applied to nonsmooth functions, confirming the merit of the poll ordering strategy for such problems.
Key Words: Derivative free optimization, simplex gradients, poisedness, nonsmooth analysis, generalized pattern search methods, mesh adaptive direct search.
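To make the central object of this abstract concrete: a simplex gradient is the least-squares solution of a small linear system built from function differences at sample points. The sketch below (NumPy; the function and sample set are hypothetical) computes one; it is illustrative only, not the authors' implementation.

```python
import numpy as np

def simplex_gradient(f, y0, Y):
    """Least-squares simplex gradient of f at y0 from sample points Y.

    Y is an (m, n) array of m sample points in R^n.  Solves S g ~= delta_f,
    where row i of S is y_i - y0 and delta_f[i] = f(y_i) - f(y0).
    """
    S = Y - y0                                 # displacement matrix, one row per point
    df = np.array([f(y) for y in Y]) - f(y0)   # function differences
    g, *_ = np.linalg.lstsq(S, df, rcond=None)
    return g

# For a linear function and a well-poised sample set, the simplex gradient is exact.
f = lambda x: 3.0 * x[0] - 2.0 * x[1]
y0 = np.array([0.0, 0.0])
Y = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(simplex_gradient(f, y0, Y))   # ~ [ 3. -2.]
```

Descent indicators built from such a gradient can then be used to reorder poll directions, as the abstract describes.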
Derivatives By-Address for Fortran 77
Automatic differentiation (AD) tools use one of two strategies to access derivative values: by address or by name.
The by-address method is typically implemented by introducing a structured type for each active scalar type. For example, scalar type 'real' will have an associated structured type 'active-real'. Using this strategy, all active variable types are changed to the associated structured type. The by-name method, on the other hand, introduces an associated new variable for each active variable. For example, the derivatives associated with 'pressure' would be 'd_pressure'. Since the by-address strategy employs structured types, which Fortran 77 lacks, AD tools for Fortran 77 have not employed that strategy. In this paper, we show how to use array access to implement the by-address strategy for Fortran 77. We discuss the canonicalization issues, outline our Adifor3.0 implementation of this technique, and give a few sample performance results.
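The array-access idea can be sketched in a few lines: in place of a structured type holding {value, derivative}, keep two parallel arrays indexed by the same "address", which is expressible in Fortran 77. The following Python sketch (the pool size and operation names are hypothetical) mimics that layout with a forward-mode product rule:

```python
import numpy as np

# "Active" storage: instead of a structured type {value, deriv}, keep two
# parallel arrays indexed by the same address, as one can do in Fortran 77.
N = 10                     # hypothetical pool size
val = np.zeros(N)          # val[i]: value of active variable i
der = np.zeros(N)          # der[i]: derivative of active variable i

def set_independent(i, v):
    val[i], der[i] = v, 1.0     # seed d(x)/dx = 1

def mul(i, j, k):
    """k := i * j, propagating derivatives by the product rule."""
    der[k] = der[i] * val[j] + val[i] * der[j]
    val[k] = val[i] * val[j]

# d/dx of x*x at x = 3 is 2*x = 6
set_independent(0, 3.0)
mul(0, 0, 1)
print(val[1], der[1])    # 9.0 6.0
```

The address (array index) plays the role the structured-type field selector would play in a language with derived types.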
Optimal Scaling For Reverse-Time Migration
William W. Symes
Reverse Time Migration with Optimal Checkpointing
William W. Symes
The optimal checkpointing algorithm (Griewank and Walther, 2000) minimizes the computational complexity of the adjoint state method. Applied to reverse time migration, optimal checkpointing eliminates (or at least drastically reduces) the need for disk I/O, which is quite extensive in more straightforward implementations. This paper describes optimal checkpointing in a form which applies both to reverse time migration and to other applications of the adjoint state method, such as construction of velocity updates from prestack wave equation migration.
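The mechanics can be sketched as follows: store the forward states only at checkpoints, and during the reverse sweep recompute intermediate states from the nearest checkpoint instead of storing every forward state (or writing them to disk). The sketch below uses evenly spaced checkpoints rather than the optimal binomial schedule of Griewank and Walther, and a stand-in one-step propagator; it illustrates the memory/recomputation trade, not the paper's implementation.

```python
import math

def forward(state, step):
    """Hypothetical one-step forward propagator (stand-in for a wave solver)."""
    return state + math.sin(step)   # any deterministic update works here

def adjoint_sweep(x0, nsteps, stride):
    """Reverse sweep over the forward states, keeping only checkpoints."""
    # forward pass: keep only every `stride`-th state
    checkpoints = {0: x0}
    x = x0
    for k in range(nsteps):
        x = forward(x, k)
        if (k + 1) % stride == 0:
            checkpoints[k + 1] = x
    # reverse pass: recompute each needed state from the nearest checkpoint
    states_visited = []
    for k in range(nsteps - 1, -1, -1):
        base = (k // stride) * stride
        xr = checkpoints[base]
        for j in range(base, k):
            xr = forward(xr, j)
        states_visited.append(xr)   # state after k steps, used by the adjoint step
    return states_visited

states = adjoint_sweep(0.0, 8, stride=4)
print(len(states))   # 8
```

With stride s, storage drops from O(nsteps) to O(nsteps/s) at the cost of at most s-1 recomputations per reverse step; the binomial schedule makes this trade provably optimal.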
Characteristic Shape Sequences for Measures on Images
Rachael L. Pingel, Mark A. Abramson, Thomas J. Asaki, & J. E. Dennis, Jr.
Researchers in many fields often need to quantify the similarity between images using metrics that measure qualities of interest in a robust quantitative manner. We present here the concept of image dimension reduction through characteristic shape sequences. We formulate the problem as a nonlinear optimization program and demonstrate the solution on a test problem of extracting maximal area ellipses from two-dimensional image data. To solve the problem numerically, we augment the class of mesh adaptive direct search (MADS) algorithms with a filter, so as to allow infeasible starting points and to achieve better local solutions. Results here show that the MADS filter algorithm is successful in the test problem of finding good characteristic ellipse solutions from simple but noisy images.
The Total Variation Regularized L1 Model for Multiscale Decomposition
Wotao Yin, Donald Goldfarb, & Stanley Osher
This paper studies the total variation regularization model with an L1 fidelity term (TV-L1) for decomposing an image into features of different scales. We first show that the images produced by this model can be formed from the minimizers of a sequence of decoupled geometry subproblems. Using this result we show that the TV-L1 model is able to separate image features according to their scales, where the scale is analytically defined by the G-value. A number of other properties including the geometric and morphological invariance of the TV-L1 model are also proved and their applications discussed.
When is Missing Data Recoverable?
Suppose a non-random portion of a data vector is missing. With some minimal prior knowledge about the data vector, can we recover the missing portion from the available one? In this paper, we consider a linear programming approach to this problem, present numerical evidence suggesting the effectiveness and limitations of this approach, and give deterministic conditions that guarantee a successful recovery. Our theoretical results, though related to recent results in compressive sensing, do not rely on randomization.
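One standard way to pose such a recovery as a linear program is basis-pursuit-style l1 minimization: if the data vector is sparse in a known basis, fit the available entries and synthesize the missing ones. The sketch below (NumPy/SciPy; the basis, sparsity level, and missing block are hypothetical and need not match the paper's exact formulation) uses the usual split x = u - v with u, v >= 0:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k = 40, 3
B = rng.standard_normal((n, n))          # basis in which the data are sparse
x_true = np.zeros(n); x_true[:k] = [2.0, -1.5, 1.0]
d = B @ x_true                           # full data vector

avail = np.arange(10, n)                 # entries 0..9 are missing (non-random)
A = B[avail]                             # rows of B for the available entries

# min ||x||_1  s.t.  A x = d[avail], via the standard split x = u - v, u, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=d[avail], bounds=(0, None))
x_rec = res.x[:n] - res.x[n:]
print(np.allclose(B @ x_rec, d, atol=1e-5))   # True: missing entries recovered
```

Here recovery succeeds because the available rows still determine the sparse coefficient vector; the paper's deterministic conditions characterize when this holds without appealing to randomness.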
Multigrid Methods For Elliptic Optimal Control Problems
Patricia A. Howard
Linear-quadratic optimal control problems governed by elliptic partial differential equations arise in a variety of applications. The optimality conditions for these problems lead to large scale, symmetric indefinite linear systems of equations. For many applications these systems cannot be solved using direct numerical linear algebra techniques. Consequently, it is important to have efficient iterative methods for solving these optimality systems. This thesis studies multigrid methods for the solution of optimality systems arising in elliptic linear-quadratic optimal control problems. The formulation and application of multigrid methods are discussed. Their performance, both as an iterative method and as a preconditioner for GMRES, is investigated numerically. Several smoothing strategies within multigrid methods are studied for advection dominated problems.
Key Words: Multigrid, PDE Constrained Optimization, Optimal Control, Finite Element.
On the Parametrization of Ill-posed Inverse Problems Arising from Elliptic Partial Differential Equations
Fernando Guevara Vasquez
Electric impedance tomography (EIT) consists in finding the conductivity inside a body from electrical measurements taken at its surface. This is a severely ill-posed problem: any numerical inversion scheme requires some form of regularization. We present inversion schemes that address the instability of the problem by proper sparse parametrization of the unknown conductivity.
To guide us, we consider a consistent finite difference approach to an inverse Sturm-Liouville problem arising in EIT for layered media. The method first solves a model reduction problem for the differential equation where the reduced model parameters are essentially averages of the conductivity over the cells of a grid depending on the conductivity. Fortunately the dependence is weak. Thus one can efficiently estimate conductivity averages by using the grid for a reference conductivity. This simple inversion method converges to the true solution as the number of measurements increases. We analyze the sensitivity of the reduced model parameters to small changes in the conductivity, and introduce a Newton-type iteration to improve the reconstructions of the simple inversion method. As an added bonus, our method can benefit from a priori information if available.
We generalize both methods to the 2D EIT problem by considering finite volume discretizations of size determined by the measurement precision, but where the node locations are to be determined adaptively. This discretization can be viewed as a resistor network, where the resistors are essentially averages of the conductivity over grid cells. We show that the model reduction problem of finding the smallest resistor network (of fixed topology) that can predict meaningful measurements of the Dirichlet-to-Neumann map is uniquely solvable for a broad class of measurements. We propose a simple inversion method that, as in the simple method for the inverse Sturm-Liouville problem, is based on an interpretation of the resistors as conductivity averages over grid cells, and an iterative method that improves such reconstructions by using sensitivity information on the changes in the resistors due to small changes in the conductivity. A priori information can also be incorporated into the latter method.
The Arnoldi Eigenvalue Iteration with Exact Shifts Can Fail
The restarted Arnoldi algorithm, implemented in the ARPACK software library and MATLAB's eigs command, is among the most common means of computing select eigenvalues and eigenvectors of a large, sparse matrix. To assist convergence, a starting vector is repeatedly refined via the application of automatically-constructed polynomial filters whose roots are known as 'exact shifts'. Though Sorensen proved the success of this procedure under mild hypotheses for Hermitian matrices, a convergence proof for the non-Hermitian case has remained elusive. The present note describes a class of examples for which the algorithm fails in the strongest possible sense, that is, the polynomial filter used to restart the iteration deflates the eigenspace one is attempting to compute.
Key Words: Implicitly restarted Arnoldi algorithm, Krylov-Schur algorithm, eigenvalues, exact shifts, ARPACK, eigs.
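A bare-bones version of the restart mechanism described above: build an Arnoldi factorization, take the unwanted Ritz values as exact shifts, and apply the corresponding polynomial filter to the starting vector. The sketch below (NumPy; a small diagonal test matrix, targeting the k dominant eigenvalues) applies the filter explicitly as repeated matrix-vector products rather than implicitly as in ARPACK, and is illustrative only:

```python
import numpy as np

def arnoldi(A, v, m):
    """m-step Arnoldi factorization: A V_m = V_m H_m + h v_{m+1} e_m^T."""
    n = len(v)
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def restart_with_exact_shifts(A, v, m, k, cycles):
    """Refine v by the filter p(A) whose roots are the unwanted Ritz values."""
    for _ in range(cycles):
        V, H = arnoldi(A, v, m)
        ritz = np.linalg.eigvals(H[:m, :m])
        shifts = sorted(ritz, key=abs)[: m - k]   # discard smallest, keep k dominant
        for s in shifts:
            v = A @ v - s * v                     # one linear filter factor (A - sI)
        v = np.real(v)
        v = v / np.linalg.norm(v)
    return v

# Diagonal test matrix: dominant eigenvector is e_0.
A = np.diag([10.0, 3.0, 2.0, 1.0, 0.5, 0.1])
rng = np.random.default_rng(1)
v = restart_with_exact_shifts(A, rng.standard_normal(6), m=4, k=2, cycles=6)
print(abs(v[0]))   # close to 1: starting vector aligned with the wanted eigenspace
```

On benign examples like this the filter steadily amplifies the wanted eigenspace; the note's examples are constructed so that the same filter instead deflates it.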
Dose-Volume-Based IMRT Fluence Optimization: A Fast Least-Squares Approach With Differentiability
Yin Zhang and Michael Merritt
In intensity-modulated radiation therapy (IMRT) for cancer treatment, the most commonly used metric for treatment prescriptions and evaluations is the so-called dose-volume constraint (DVC). These DVCs induce much needed flexibility but also non-convexity into the fluence optimization problem, which is an important step in IMRT treatment planning. Currently, the models of choice for fluence optimization in clinical practice are weighted least-squares models. When DVCs are directly incorporated into the objective functions of least-squares models, these objective functions become not only non-convex but also non-differentiable. Because of this non-differentiability, the routine application in commercial IMRT planning systems of software packages designed for minimizing smooth functions to these non-smooth models is problematic. In this paper, we formulate and study a new least-squares model that allows a monotone and differentiable objective function. We devise a greedy approach for approximately solving the resulting optimization problem. We report numerical results on several clinical cases showing that, compared to a widely used existing model, the new approach is capable of generating clinically relevant plans at a much faster speed, with speedups above one order of magnitude for some large-scale problems.
A Posteriori Error Estimates by Recovered Gradients in Parabolic Finite Element Equations
D. Leykekhman & L.B. Wahlbin
This paper considers a posteriori error estimates by averaged gradients in second order parabolic problems. Fully discrete schemes are treated. The theory from the elliptic case as to when such estimates are asymptotically exact, on an element, is carried over to the error on an element at a given time. The basic principle is that the time-step error needs to be smaller than the space-discretization error. Numerical illustrations are given.
Key Words: A posteriori, finite element, fully discrete, pointwise estimates, parabolic second order equation.
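The averaging idea in one dimension: the gradient of a piecewise-linear finite element function is piecewise constant, and averaging the two element values meeting at each interior node yields a recovered gradient that, for a quadratic on a uniform mesh, is exact at the nodes. A minimal NumPy sketch (uniform mesh, u(x) = x^2; illustrative only, unrelated to the paper's fully discrete parabolic setting):

```python
import numpy as np

# P1 interpolant of u(x) = x^2 on a uniform mesh of [0, 1].
nodes = np.linspace(0.0, 1.0, 11)
u = nodes**2
h = nodes[1] - nodes[0]

# elementwise (piecewise-constant) gradient of the interpolant
elem_grad = np.diff(u) / h

# recovered gradient: average the two element values meeting at each interior node
recovered = 0.5 * (elem_grad[:-1] + elem_grad[1:])

# For this quadratic on a uniform mesh, nodal averaging is exact (u' = 2x),
# while the raw element gradient is only first-order accurate.
print(np.max(np.abs(recovered - 2.0 * nodes[1:-1])))   # ~ 0 (rounding only)
```

The difference between the recovered and the raw gradient is what drives averaged-gradient a posteriori estimators; the parabolic theory asks, in addition, that the time-step error stay below this space-discretization error.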
Parallel Solution of Large-Scale Free Surface Viscoelastic Flows Via Sparse Approximate Inverse Preconditioning
Z. Castillo, X. Xie, D.C. Sorensen, M. Embree, & M. Pasquali
Though computational techniques for two-dimensional viscoelastic free surface flows are well developed, three-dimensional flows continue to present significant computational challenges. Fully coupled free surface flow models lead to nonlinear systems whose steady states can be found via Newton's method. Each Newton iteration requires the solution of a large, sparse linear system, for which memory and computational demands suggest the application of an iterative method, rather than the sparse direct methods widely used for two-dimensional simulations. The Jacobian matrix of this system is often ill-conditioned, resulting in unacceptably slow convergence of the linear solver; hence preconditioning is essential. In this paper we propose a variant sparse approximate inverse preconditioner for the Jacobian matrix that allows for the solution of problems involving more than a million degrees of freedom in challenging parameter regimes. Construction of this preconditioner requires the solution of small least squares problems that can be simply parallelized on a distributed memory machine. The performance and scalability of this preconditioner with the GMRES solver are investigated for two- and three-dimensional free surface flows on both structured and unstructured meshes in the presence and absence of viscoelasticity. The results suggest that this preconditioner is an extremely promising candidate for solving large-scale steady viscoelastic flows with free surfaces.
Key Words: Free surface, viscoelastic flow, parallel GMRES, SPAI, approximate inverse preconditioner.
Solution of Large-Scale Lyapunov Equations via the Block Modified Smith Methods
Balanced truncation is an attractive method for reducing the dimension of medium-scale dynamical systems. Research in recent years has brought approximate balanced truncation to the large-scale setting. At the heart of this technique are alternating direction implicit (ADI) methods for solving large Lyapunov and Sylvester equations. This work concerns the convergence of these methods. Our primary objective is the practical solution of very large Lyapunov equations. Uncertainty in the selection of shifts for the ADI method and its variants has prevented the widespread adoption of an otherwise promising variant, the block modified Smith method. We examine in detail the role of shift selection, the fundamental minimax problem, and the often startling influence of nonnormality. Our analysis is tied intimately to the decay rate of the singular values of the solutions to these equations. We improve upon past bounds and develop new ones, including bounds based on pseudospectra. In the end, we provide simple yet effective schemes for finding shifts that outperform those produced by conventional selection strategies. To make effective use of these shifts, we provide insights that allow the block modified Smith method to be applied to very large equations in a practical setting. First, we give an error bound in terms of the drop tolerance that substantially improves upon ad hoc choices that could fail or demand excessive memory. Next, we improve the residual calculation in the method, which can be a surprisingly expensive computation. Finally, we provide guidance on the use of complex shifts, when to update the singular value decomposition, and the number of shifts to use. We have succeeded in providing a practical implementation of the block modified Smith method. The success of the algorithm is demonstrated in numerous experiments, culminating in the rapid solution of some of the largest known Lyapunov equations attempted on a workstation.
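For orientation, the single-shift Smith iteration underlying these methods can be stated in a few lines: a negative real shift p turns the Lyapunov equation A X + X A^T + B B^T = 0 into a fixed-point (Stein) iteration driven by a Cayley transform of A. The dense sketch below (NumPy; small diagonal test data, one fixed shift, no low-rank structure) is illustrative only and far simpler than the block modified Smith method analyzed in the thesis:

```python
import numpy as np

def smith(A, B, p, iters):
    """Single-shift Smith iteration for A X + X A^T + B B^T = 0 (A stable, p < 0).

    Uses the Cayley transform A_p = (A - p I)(A + p I)^{-1}; the iteration
    converges because the spectral radius of A_p is < 1 for stable A.
    """
    n = A.shape[0]
    I = np.eye(n)
    Ainv = np.linalg.inv(A + p * I)
    Ap = (A - p * I) @ Ainv
    G = -2.0 * p * Ainv @ (B @ B.T) @ Ainv.T     # constant (driving) term
    X = np.zeros((n, n))
    for _ in range(iters):
        X = G + Ap @ X @ Ap.T
    return X

A = np.diag([-1.0, -2.0, -5.0])
B = np.ones((3, 1))
X = smith(A, B, p=-2.0, iters=60)
residual = A @ X + X @ A.T + B @ B.T
print(np.linalg.norm(residual))   # near machine precision
```

The shift selection question the thesis studies is precisely how to choose p (or a cycle of shifts) to make the spectral radius of the Cayley transforms, and hence the convergence factor, as small as possible.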
Velocity Analysis in the Presence of Uncertainty
Eric Albert Dussaud
Velocity analysis resolves relatively long scales of earth structure, on the order of 1 km. Migration produces images with length scales (wavelengths) on the order of tens of meters. In between these two scale regimes lies another, corresponding roughly to structures between 60 and 300 m in extent, in which the resolution of velocity analysis is uncertain and the energy of images is small to non-existent. This thesis aims at assessing the impact on velocity analysis of uncertainty at these intermediate length scales, using ideas on time reversal and imaging in randomly inhomogeneous media developed by Borcea and colleagues, in combination with velocity estimation methods of differential semblance type. The main result of this thesis is that the noise in seismic reflection data associated with the middle scales in velocity heterogeneity does not strongly affect the estimates of the long-scale component of velocity, if these estimates are obtained using a statistically stable formulation of differential semblance optimization. Hence the nonlinear influence of uncertainty in the middle scales does not propagate down the length scale. This is in contrast with the results of Borcea and colleagues, who have shown that prestack images are strongly affected, implying that the uncertainty in the middle scales does certainly propagate up the length scale. Random perturbations associated with the middle scales of velocity heterogeneity yield measurable phase shifts in the reflection data. However, it is known that cross-correlations of neighboring seismic traces are stable against these perturbations, under some circumstances. The main theoretical achievement of this thesis is to extend this stability result to laterally homogeneous background velocity models, and to cross-correlations containing slowly-varying weights.
Chapter four shows that differential semblance functionals, specialized to layered modeling, can be written entirely in terms of weighted cross-correlations, and therefore argues that the velocity analysis algorithm and the associated velocity estimates inherit the statistical stability property. A quantitative verification of the stability claims is provided in Chapter five.
Case Studies For a First-Order Robust Nonlinear Programming
Elaine Hale & Yin Zhang
In this paper, we conduct three case studies to assess the effectiveness of a recently proposed first-order method for robust nonlinear programming (Ref. 1). Three robust nonlinear programming problems were chosen from the literature using the criteria that results calculated using other methods must be available and the problems should be realistic, but fairly simple. Our studies show that the first-order method produced reasonable solutions when the level of uncertainty was small to moderate. In addition, we demonstrate a method for leveraging a theoretical result to eliminate constraint violations. Since the first-order method is relatively inexpensive in comparison to other robust optimization techniques, our studies indicate that under moderate uncertainty the first-order approach may be more suitable than other methods for large problems.
Key Words: Robust optimization, nonlinear programming, first-order robustness, design under uncertainty, safety constraints.
Fixed-Polynomial Approximate Spectral Transformations for Preconditioning the Eigenvalue Problem
Heidi Krista Thornquist
Arnoldi's method is often used to compute a few eigenvalues and eigenvectors of large, sparse matrices. When the eigenvalues of interest are not dominant or well-separated, this method may suffer from slow convergence. Spectral transformations are a common acceleration technique that addresses this issue by introducing a modified eigenvalue problem that is easier to solve. This modified problem accentuates the eigenvalues of interest, but requires the solution of a linear system, which is computationally expensive for large-scale problems. Furthermore, ensuring the precision of the computed eigenvalues with respect to the original eigenvalue problem requires restrictive accuracy requirements on the solution of each linear system for the modified eigenvalue problem.
This thesis shows how this expense can be reduced through a preconditioning scheme that uses a fixed-polynomial operator to approximate the spectral transformation. These operators are computed prior to being used in Arnoldi's method and eliminate the accuracy requirements on the solution of each linear system for the modified eigenvalue problem. Three different constructions for a fixed-polynomial operator are derived from some common iterative methods for non-Hermitian linear systems, and the numerical behavior of these three operators is compared. There are significant computational benefits in precomputing an approximation to a spectral transformation, which become apparent in the discussion of implementation details as well as in the development of accuracy heuristics. Numerical experiments demonstrate that this preconditioning scheme is a competitive approach for solving large-scale eigenvalue problems. The results illustrate the effectiveness of this technique using several practical eigenvalue problems from science and engineering ranging from hundreds to more than a million unknowns.
Simulating Nanoscale Functional Motions of Biomolecules
W. Wriggers, Z. Zhang, M. Shah, and D.C. Sorensen
We describe efficient dynamics simulation methods for the characterization of functional motion of biomolecules on the nanometer scale. Multivariate statistical methods are widely used to extract and enhance functional collective motions from molecular dynamics (MD) simulations. A dimension reduction in MD is often realized through a principal component analysis or a singular value decomposition (SVD) of the trajectory. Normal mode analysis is a related collective coordinate space approach, which involves the decomposition of the motion into vibration modes based on an elastic model. Using the myosin motor protein as an example we describe a hybrid technique termed amplified collective motions that enhances sampling of conformational space through a combination of normal modes with atomic level MD. Unfortunately, the forced orthogonalization of modes in collective coordinate space leads to complex dependencies that are not necessarily consistent with the symmetry of biological macromolecules and assemblies. In many biological molecules, such as HIV-1 protease, reflective or rotational symmetries are present that are broken using standard orthogonal basis functions. We present a method to compute the plane of reflective symmetry or the axis of rotational symmetry from the trajectory frames. Moreover, we develop an SVD that best approximates the given trajectory while respecting the symmetry. Finally we describe a local feature analysis (LFA) to construct a topographic representation of functional dynamics in terms of local features. The LFA representations are low-dimensional, and provide a reduced basis set for collective motions, but unlike global collective modes they are sparsely distributed and spatially localized.
This yields a more reliable assignment of essential dynamics modes across different MD time windows.
Key Words: Principal Component Analysis, Normal Modes, Singular Value Decomposition, Symmetry Constraints, Local Feature Analysis, Molecular Dynamics.
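The basic dimension-reduction step mentioned above, SVD/PCA of a trajectory matrix, looks like this on synthetic data (NumPy; the "trajectory" is a hypothetical one-mode toy, not an MD run):

```python
import numpy as np

# Toy "trajectory": 100 frames of 30 coordinates dominated by one collective
# motion plus small noise (a stand-in for an MD trajectory matrix).
rng = np.random.default_rng(0)
mode = rng.standard_normal(30)
mode /= np.linalg.norm(mode)
t = np.linspace(0, 4 * np.pi, 100)
frames = np.outer(np.sin(t), mode) + 0.01 * rng.standard_normal((100, 30))

# Dimension reduction: center the frames and take the SVD; the leading right
# singular vectors are the principal collective modes.
centered = frames - frames.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
print(abs(Vt[0] @ mode))                 # ~ 1: the planted mode is recovered
print(s[0] ** 2 / np.sum(s ** 2))        # fraction of variance it explains
```

The symmetry-respecting SVD and the local feature analysis described in the abstract refine this basic decomposition, which by itself imposes only orthogonality on the modes.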
A Sensitivity-Driven Greedy Approach to Fluence Map Optimization in Intensity-Modulated Radiation Therapy
Intensity-modulated radiation therapy (IMRT) is a state-of-the-art technique for administering radiation to cancer patients. The goal of a treatment is to maximize the radiation absorbed by the tumor and minimize that absorbed by the surrounding critical organs. Since a plan can almost never be found that both kills the tumor and completely avoids irradiating critical organs, the medical physics community has quantified the sacrifices that can be tolerated in so-called dose-volume constraints. Efficiently imposing such constraints, which are fundamentally combinatorial in nature, poses a major challenge due to the large amount of data. Also, the IMRT technology limits which dose distributions are actually deliverable. So, we seek a physically deliverable dose distribution that at the same time meets the minimum tumor dose prescription and satisfies the dose-volume constraints. We propose a new greedy algorithm and show that it converges to a local minimum of the stated formulation of the fluence map problem. Numerical comparison is made to an approach representative of the leading commercial software for IMRT planning.
We find our method produces plans of competitive quality with a notable improvement in computational performance. Our efficiency gain is most aptly attributed to a new interior-point gradient algorithm for solving the nonnegative least squares subproblem at every iteration. Convergence is proven and numerical comparisons are made to other leading methods, demonstrating this solver is well-suited for the subproblem.
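For intuition, an interior-point gradient iteration for nonnegative least squares scales the gradient componentwise so that iterates remain strictly positive. The sketch below (NumPy; a tiny hypothetical test problem) is a simplified illustration of that idea with a positivity-preserving damped step; it is not the authors' solver:

```python
import numpy as np

def ipg_nnls(A, b, iters=500):
    """Interior-point gradient sketch for min ||A x - b||^2 with x >= 0.

    The search direction scales the gradient by x / (A^T A x), so iterates
    stay strictly positive; the step length is damped to preserve positivity.
    (A simplified illustration of the interior-point gradient idea.)
    """
    x = np.ones(A.shape[1])              # strictly positive start
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        g = AtA @ x - Atb                # gradient of 0.5 ||A x - b||^2
        d = (x / (AtA @ x)) * g          # scaled (interior-point) direction
        # exact minimizer along -d, then damp to keep x > 0
        alpha = (g @ d) / (d @ (AtA @ d) + 1e-30)
        pos = d > 0
        if np.any(pos):
            alpha = min(alpha, 0.99 * np.min(x[pos] / d[pos]))
        x = x - alpha * d
    return x

A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
b = np.array([2.0, 1.0, -1.0])
x = ipg_nnls(A, b)
print(np.round(x, 4))    # the nonnegative least-squares solution
```

On this example the unconstrained least-squares solution has a negative component, and the iteration drives that component to the boundary while fitting the rest, which is exactly the behavior needed inside a fluence-map solver.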
Trust Region SQP Methods With Inexact Linear System Solves For Large-Scale Optimization
This thesis extends the design and the global convergence analysis of a class of trust-region sequential quadratic programming (SQP) algorithms for smooth nonlinear optimization to allow for an efficient integration of inexact linear system solvers.
Each iteration within an SQP method requires the solution of several linear systems, whose system matrix/operator involves the linearized constraints. Most existing implementations of SQP algorithms use direct linear algebra methods to solve these systems. For many optimization problems in science and engineering this is infeasible, because the systems are too large or the matrices associated with the linearized constraints are not formed explicitly. Instead, iterative solvers, such as preconditioned Krylov subspace methods, have to be applied for the approximate solution of the linear systems arising within the SQP algorithm. In this case, the optimization algorithm has to provide stopping tolerances for the iterative solver.
The existing literature on the treatment of inexact linear system solves in SQP algorithms is rather scarce. Most theoretical results either provide stopping tolerances for iterative solvers that cannot be easily implemented in practice, or are restricted to specific classes of optimization problems. This thesis provides concrete stopping criteria for the iterative solution of so-called augmented systems, which allow for a wider applicability of the resulting SQP algorithm and a rigorous integration of available KKT preconditioners. A key contribution is the development of an inexact conjugate gradient algorithm for the solution of quadratic subproblems with linear constraints that are subject to arbitrary nonlinear perturbations that arise from the approximate computation of projections via Krylov subspace methods.
The resulting SQP algorithm dynamically adjusts stopping tolerances for iterative linear system solves based on its current progress toward a KKT point. The stopping tolerances can be easily implemented and efficiently computed, and are sufficient to guarantee first-order global convergence of the algorithm. The performance of the algorithm is examined on optimal control problems governed by Burgers and Navier-Stokes equations.
Time-Domain Decomposition Preconditioners for the Solution of Discretized Parabolic Optimal Control Problems
Optimal control problems governed by time-dependent partial differential equations (PDEs) lead to large-scale optimization problems. While a single PDE can be solved by marching forward in time, the optimality system for time-dependent PDE constrained optimization problems introduces a strong coupling in time of the governing PDE, the so-called adjoint PDE, which has to be solved backward in time, and the gradient equation. This coupling in time introduces huge storage requirements for solution algorithms. We study a time-domain decomposition based method that addresses the problem of storage and additionally introduces parallelism into the optimization algorithm. The method reformulates the original problem as an equivalent constrained optimization problem using ideas from multiple shooting methods for PDEs. For convex linear-quadratic problems, the optimality conditions of the reformulated problem lead to a linear system in state and adjoint variables at time-domain interfaces and in the original control variables. This linear system is solved using a preconditioned Krylov subspace method. Block Gauss-Seidel preconditioners have been introduced by Heinkenschloss (2000) for a suitable permutation of the optimality system. Unfortunately, the number of Krylov subspace iterations significantly increases when these preconditioners are parallelized. This problem has motivated the study of a preconditioner based on an approximate factorization of the optimality system, used by Biros et al. (1999) in another context. The factorization-based preconditioner requires approximate state and adjoint solves as well as a preconditioner for the so-called reduced Hessian. The approximate state and adjoint solves are derived from the parareal algorithm of Maday et al. (2001). New results on the spectrum of the reduced Hessian for a class of optimal control problems show that a simple 'scaling' preconditioner for the reduced Hessian suffices for this problem class.
Our analysis of the spectrum of the optimality system preconditioned by the factorization-based preconditioner shows that we can replace the preconditioner for the reduced Hessian by a preconditioner for a less expensive Hessian-type operator. For problems of control in the initial condition, we apply this result to modify the multigrid preconditioner for the reduced Hessian introduced by Draganescu (2004) with no effects on the multigrid preconditioner performance.
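The parareal building block referenced above can be sketched on a scalar ODE: a cheap coarse propagator G predicts interface values, and a correction iteration inserts fine-propagator information, with the fine solves parallelizable across time windows. A minimal sketch (backward Euler for both propagators, hypothetical parameters; illustrative only, not the preconditioner studied here):

```python
import numpy as np

# Parareal sketch for x' = lam * x on [0, T]: a cheap coarse propagator G and
# an accurate fine propagator F; the correction iteration couples time windows
# the way the multiple-shooting reformulation couples time-domain interfaces.
lam, T, N = -1.0, 1.0, 10
dT = T / N

def G(x):                 # coarse: one backward-Euler step per window
    return x / (1.0 - lam * dT)

def F(x):                 # fine: many small backward-Euler steps per window
    m = 100
    dt = dT / m
    for _ in range(m):
        x = x / (1.0 - lam * dt)
    return x

U = [1.0]                 # initial coarse sweep over the time windows
for n in range(N):
    U.append(G(U[n]))

for k in range(5):        # parareal corrections (the F calls are parallelizable)
    Fu = [F(U[n]) for n in range(N)]
    Unew = [1.0]
    for n in range(N):
        Unew.append(G(Unew[n]) + Fu[n] - G(U[n]))
    U = Unew

print(abs(U[-1] - np.exp(lam * T)))   # small: close to the exact solution
```

After enough corrections the interface values match the fine propagator exactly; used as an approximate state/adjoint solve, a few corrections suffice inside the factorization-based preconditioner.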