**Local Error Estimates for SUPG Solutions of Advection-Dominated Elliptic Linear-Quadratic Optimal Control Problems**

M. Heinkenschloss & D. Leykekhman

*M. Heinkenschloss, Department of Computational and Applied Mathematics, Rice University*

*D. Leykekhman, Department of Mathematics, University of Connecticut*

We derive local error estimates for the discretization of optimal control problems governed by linear advection-diffusion partial differential equations (PDEs) using the streamline upwind/Petrov-Galerkin (SUPG) stabilized finite element method. We show that when the SUPG method is used to solve optimization problems governed by an advection-dominated PDE, its convergence properties are substantially different from those observed when it is applied to the solution of a single advection-dominated PDE. The reason is that the solution of the optimal control problem involves another advection-dominated PDE, the so-called adjoint equation, whose advection field is the negative of that of the governing PDE. To solve the optimal control problem, a coupled system involving both the original governing PDE and the adjoint PDE must be solved.

We show that in the presence of a boundary layer, the local error between the solution of the SUPG discretized optimal control problem and the solution of the infinite dimensional problem is of first order even if the error is computed locally in a region away from the boundary layer where the exact solution is smooth. We also prove optimal weighted error estimates. These imply optimal convergence rates for the local error in regions away from interior layers. Numerical examples are presented to illustrate some of the theoretical results.

__Key Words__: Optimal control, advection-diffusion equations, discretization, local error estimates, stabilized finite elements.

December 2008

**Analyzing Single-Molecule Manipulation Experiments**

Christopher P. Calderon, Nolan C. Harris, Ching-Hwa Kiang, & Dennis D. Cox

Single-molecule manipulation studies can provide quantitative information about the physical properties of complex biological molecules without ensemble artifacts obscuring the measurements. We demonstrate computational techniques that aim to more fully utilize the wealth of information contained in noisy experimental time series. The "noise" comes from multiple sources, e.g. inherent thermal motion and instrument measurement error. The primary focus of this article is a methodology for using time-domain methods to extract the effective molecular friction from single-molecule pulling data. We studied molecules composed of 8 tandem repeat titin I27 domains, but the modeling approaches have applicability to other single-molecule mechanical studies. The merits and challenges associated with applying such a computational approach to existing single-molecule manipulation data are also discussed.

October 2008

**Filtering Random Layering Effects in Imaging**

L. Borcea, F. Gonzalez del Cueto, G. Papanicolaou, & C. Tsogka

Objects that are buried deep in heterogeneous media produce faint echoes which are difficult to distinguish from the backscattered field. Sensor array imaging in such media cannot work unless we filter out the backscattered echoes and enhance the coherent arrivals that carry information about the objects that we wish to image. We study such filters for imaging in strongly backscattering, finely layered media. The fine layering is unknown and we model it with random processes. The filters use ideas from common seismic imaging techniques, such as normal move-out and semblance velocity estimation. These methods are based on the single scattering approximation, so it is surprising that the filters can annihilate the incoherent echoes produced by random media. The goal of the paper is to study theoretically and numerically this phenomenon.

October 2008

**A Fast TVL1-L2 Minimization Algorithm for Signal Reconstruction from Partial Fourier Data**

Junfeng Yang, Yin Zhang, & Wotao Yin

Recent compressive sensing results show that it is possible to accurately reconstruct certain compressible signals from relatively few linear measurements by solving nonsmooth convex optimization problems. In this paper, we propose a simple and fast algorithm for signal reconstruction from partial Fourier data. The algorithm minimizes the sum of three terms corresponding to total variation, $\ell_1$-norm regularization and least squares data fitting. It uses an alternating minimization scheme in which the main computation involves shrinkage and fast Fourier transforms (FFTs), or alternatively discrete cosine transforms (DCTs) when available data are in the DCT domain. We analyze the convergence properties of this algorithm, and compare its numerical performance with two recently proposed algorithms. Our numerical simulations on recovering magnetic resonance (MR) images indicate that the proposed algorithm is highly efficient, stable and robust.
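The shrinkage step at the heart of the alternating scheme has a simple closed form; a minimal sketch (our own illustration, not the authors' code):

```python
import numpy as np

def shrink(y, t):
    """Soft-thresholding (shrinkage): the closed-form minimizer of
    t*|x| + (1/2)(x - y)^2, applied componentwise."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

# entries smaller than the threshold are zeroed; the rest move toward 0
print(shrink(np.array([1.5, -0.2, 0.7, -2.0]), 0.5))
```

Per iteration, the remaining work is the FFT (or DCT) diagonalization of the quadratic subproblem, so the cost is dominated by the transforms.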

October 2008

**Optical Flow Methods for the Registration of Compressible Flow Images and Images Containing Large Voxel Displacements or Artifacts**

Edward Castillo

Three optical flow image registration (IR) methods, referred to as Combined Compressible Local Global (CCLG) optical flow, Large Displacements Optical Flow (LDOF), and Large Displacement Compressible Optical Flow (LDCOF), are introduced. The three novel methods are designed to account for difficulties raised by the 4D thoracic Computed Tomography (CT) image registration problem, which currently cannot be effectively addressed by existing methods.

The 4D CT image registration problem is more challenging than typical IR problems for three key reasons. First, voxel intensities for CT images are proportional to the density of the material imaged. Given that the density of lung tissue changes with respiration, the constant voxel intensity assumption employed by most IR methods is invalid for thoracic CT images. Second, due to the image acquisition procedure, 4D CT image sets are known to suffer from image noise, blurring, and artifacts. Finally, the large size of the image sets requires a computationally efficient and parallelizable algorithm.

The CCLG method models compressible image flow with the mass conservation equation coupled with a local-global strategy that alleviates the effects of image noise, and incorporates local image information into the voxel motion model. After a finite element discretization, the resulting large scale linear system is solved using a parallelizable, multi-grid preconditioned conjugate gradient algorithm.

The LDOF and LDCOF methods are designed for image sets containing large voxel displacements or erroneous image artifacts. Both methods incorporate unknown image information into the IR problem formulation, which results in a nonlinear least squares problem for both the pixel displacement components and the unknown image values. An alternating linear least squares algorithm is introduced for solving the LDOF and LDCOF nonlinear least squares problems efficiently.

After Chapter 1 introduces the basics of IR, the main body of the thesis is divided into two parts. Part 1 is a review of existing IR methodologies. Part 2 derives the three aforementioned new approaches, and presents the results of numerical experiments testing each of the three methods. The computational experiments are carried out on both synthetic and genuine image data. Finally, the thesis concludes in Chapter 8 with a discussion on areas of future research.

October 2008

**Extracting Kinetic and Stationary Distribution Information from Short MD Trajectories via a Collection of Surrogate Diffusion Models**

Christopher P. Calderon & Karunesh Arora

Low-dimensional stochastic models can summarize dynamical information and make long time predictions associated with observables of complex atomistic systems. Maximum likelihood based techniques for estimating low-dimensional surrogate diffusion models from relatively short time series are presented. It is found that a heterogeneous population of slowly evolving conformational degrees of freedom modulates the dynamics. This underlying heterogeneity results in a collection of estimated low-dimensional diffusion models. Numerical techniques for exploiting this finding to approximate skewed histograms associated with the simulation are presented. In addition, statistical tests are also used to assess the validity of the models and determine physically relevant sampling information, e.g. the maximum sampling frequency at which one can discretely sample from an atomistic time series and have a surrogate diffusion model pass goodness-of-fit tests. The information extracted from such analyses can possibly be used to assist umbrella sampling computations as well as help in approximating effective diffusion coefficients. The techniques are demonstrated on simulations of Adenylate Kinase.

October 2008

**A Numerical Study of Fixed-Point Continuation Applied to Compressed Sensing**

Elaine T. Hale, Wotao Yin, & Yin Zhang

Fixed-point continuation (FPC) is an operator-splitting and continuation-based approach for solving minimization problems with L_1-regularization:

min ||x||_1 + mu f(x)

We investigate the application of this algorithm to compressed sensing signal recovery, in which f(x) = (1/2)||Ax-b||_{M}^{2}, where A is m by n and m < n. In particular, we extend the original algorithm to obtain better practical results, derive appropriate choices for **M** and mu under a given measurement model, and present numerical results for a variety of compressed sensing problems. The numerical results show that the performance of our algorithm compares favorably with that of several recently proposed algorithms.
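For f(x) = (1/2)||Ax-b||^2 with M = I, the basic fixed-point iteration is a gradient step on the smooth term followed by soft shrinkage. A minimal sketch (without the continuation strategy; all parameters below are illustrative, not the paper's recommended choices):

```python
import numpy as np

def fpc(A, b, mu, tau, iters=2000):
    """Basic fixed-point iteration for min ||x||_1 + mu*(1/2)||Ax - b||^2
    (M = I, no continuation): gradient step on the smooth term, then
    componentwise soft shrinkage with threshold tau."""
    shrink = lambda y, t: np.sign(y) * np.maximum(np.abs(y) - t, 0.0)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = shrink(x - tau * mu * (A.T @ (A @ x - b)), tau)
    return x

# Illustrative sparse-recovery instance (40 measurements, length-80 signal)
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80)) / np.sqrt(40)
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
b = A @ x_true
x = fpc(A, b, mu=100.0, tau=0.003)  # tau*mu*||A^T A|| < 2 for convergence
```

The continuation strategy of the paper solves a sequence of such problems with decreasing effective thresholds, which greatly reduces the iteration counts seen in this plain loop.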

__Key Words__: L_1-regularization, fixed-point algorithm, continuation, compressed sensing, numerical experiments.

October 2008

**Using Stochastic Models Calibrated from Nanosecond Non-Equilibrium Simulations to Approximate Mesoscale Information**

Christopher P. Calderon, Lorant Janosi, & Ioan Kosztin

We demonstrate how the surrogate process approximation (SPA) method can be used to compute both the potential of mean force (PMF) along a reaction coordinate and the associated diffusion coefficient using a relatively small (10-20) set of bidirectional non-equilibrium trajectories coming from a complex system. Our method provides confidence bands which take the variability of the initial condition, continuous nature of the work paths, and thermal fluctuations into account. Maximum likelihood type methods are used to estimate a stochastic differential equation (SDE) approximating the dynamics and assist in these computations. For each observed time series, we estimate a new SDE resulting in a collection of SPA models. The physical significance of the collection of SPA models is discussed and methods for exploiting information in this population of models are suggested. Molecular dynamics (MD) simulations of potassium ion dynamics inside a gramicidin A channel are used to demonstrate the methodology, although SPA type modeling has also proven useful in analyzing single-molecule experimental time series [J. Phys. Chem. B, **113**, 118 (2009)].

October 2008

**Interface Error Analysis for Numerical Wave Propagation**

William W. Symes & Tetyana Vdovina

The numerical error associated with finite-difference simulation of wave propagation in discontinuous media consists of two components. The first component is a higher order error that leads to grid dispersion; it can be controlled by higher-order methods. The second component results from misalignment between numerical grids and material interfaces. We provide an explicit estimate of the interface misalignment error for the second order in time and space staggered finite-difference scheme applied to the acoustic wave equation. Our analysis, confirmed by numerical experiments, demonstrates that the interface error results in a first-order time shift proportional to the distance between the interface and computational grids. A two-dimensional experiment shows that the interface error cannot be suppressed by higher-order methods and indicates that our one-dimensional analysis gives a good prediction about the behavior of the numerical solution in higher dimensions.
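To make the setting concrete, here is a minimal homogeneous-medium version of the 1-D staggered scheme (unit density and bulk modulus assumed purely for illustration; this is not the paper's experimental setup, which concerns discontinuous media):

```python
import numpy as np

def staggered_step(p, v, K, rho, dt, dx):
    """One leapfrog update of the staggered scheme (second order in time
    and space) for the 1-D first-order acoustic system
        p_t = -K v_x,   v_t = -(1/rho) p_x,
    with p on integer grid points, v on half-points; ends held fixed."""
    v += -(dt / (rho * dx)) * np.diff(p)
    p[1:-1] += -(K * dt / dx) * np.diff(v)
    return p, v

# Homogeneous unit-speed medium: an initial pressure pulse splits into
# two half-amplitude pulses traveling at speed c = sqrt(K/rho) = 1
n = 400
dx = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
p = np.exp(-((x - 0.3) / 0.03) ** 2)
v = np.zeros(n)
dt = 0.5 * dx                      # CFL number 0.5, stable
for _ in range(160):               # integrate to T = 160*dt = 0.2
    p, v = staggered_step(p, v, 1.0, 1.0, dt, dx)
# the right-going pulse should now be centered near x = 0.3 + 0.2 = 0.5
```

The interface error analyzed in the paper arises when K and rho jump at a point that does not coincide with a grid point of this scheme.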

October 2008

**The Acoustic Radiation Solution**

William W. Symes & Tetyana Vdovina

The well-known radiation solution of the acoustic wave equation may also be viewed as the pressure field in the solution of the first-order system of linear acoustics, in two different ways. The first version casts the source term as a defect in the acoustic constitutive law; the second presents it as an equivalent body source. The second form requires the addition of a parasitic stationary singular pressure field.

October 2008

**Well-posedness of Initial/Boundary Value Problems for Hyperbolic Integro-differential Systems with Nonsmooth Coefficients**

Kirk D. Blazek, Christiaan Stolk, & William W. Symes

In the late 1960's, J.-L. Lions and collaborators showed that energy estimates could be used to establish existence, uniqueness, and continuous dependence on initial data for finite energy solutions of initial/boundary value problems for various linear partial differential evolution equations with nonsmooth coefficients. The second author has recently treated second order hyperbolic systems, for example linear elastodynamics, by similar methods, and extended these techniques to demonstrate continuous dependence and even differentiability (in a suitable sense) of the solution as a function of the coefficients. In the present paper, we extend Lions' results in a different direction, to first order symmetric hyperbolic integrodifferential systems (such as linear viscoelasticity) with bounded and measurable coefficients. We show that the initial value problem is well-posed in an appropriate space of finite-energy weak solutions, and that solutions of this class are continuous as functions of the coefficients and data. This result is sharp, in the sense that solutions are not in general locally uniformly continuous in coefficients and data. Solutions are, however, (Gâteaux-)differentiable as a function of the coefficients for fixed data, provided the data is smooth enough that the time derivative of the solution is itself a finite-energy weak solution. The continuity result combines with the well-known domain of influence properties for hyperbolic systems with smooth coefficients to show that viscoelasticity with bounded, measurable coefficients predicts finite wave propagation speed.

October 2008

**Quantifying DNA Melting Transitions Using Single-Molecule Force Spectroscopy**

Christopher P. Calderon, Wei-Hung Chen, Kuan-Jiuh Lin, Nolan C. Harris, & Ching-Hwa Kiang

We stretched a DNA molecule using an atomic force microscope and quantified the mechanical properties associated with **B** and

September 2008

**Quantifying Multiscale Noise Sources in Single-Molecule Time Series**

Christopher P. Calderon, Nolan C. Harris, Ching-Hwa Kiang, & Dennis D. Cox

When analyzing single-molecule data, a low-dimensional set of system observables typically serve as the observational data. We calibrate stochastic dynamical models from time series that record such observables. Numerical techniques for quantifying noise from multiple time-scales in a single trajectory, including experimental instrument and inherent thermal noise, are demonstrated. The techniques are applied to study time series coming from both simulations and experiments associated with the nonequilibrium mechanical unfolding of titin’s I27 domain. The estimated models can be used for several purposes: (1) detect dynamical signatures of "rare events" by analyzing the effective diffusion and force as a function of the monitored observable; (2) quantify the influence that conformational degrees of freedom, which are typically difficult to directly monitor experimentally, have on the dynamics of the monitored observable; (3) quantitatively compare the inherent thermal noise to other noise sources, e.g. instrument noise, variation induced by conformational heterogeneity, etc.; (4) simulate random quantities associated with repeated experiments; (5) apply pathwise, i.e. trajectory-wise, hypothesis tests to assess the goodness-of-fit of the models and even detect conformational transitions in noisy signals. These items are all illustrated with several examples.

September 2008

**Numerical Methods for Modeling Atomistic Trajectories with Diffusion SDEs**

Christopher P. Calderon, Josue G. Martinez, Raymond J. Carroll, & Danny C. Sorensen

The stochastic dynamics of small scale systems are often not known from *a priori* physical considerations. We present data-driven numerical methods which can be used to approximate the nonlinear stochastic dynamics associated with time series of system observables. Given a single time series coming from a simulation or experiment, our approach uses maximum likelihood type estimates to obtain a sequence of local stochastic differential equations. The local models coming from one time series are then patched together using a penalized spline procedure. We provide an efficient algorithm for achieving this which utilizes estimates of the local parameter covariance. We also use goodness-of-fit tests to quantitatively determine when an overdamped Langevin approximation can be used to describe the data. For situations where the overdamped approximation fails, we show that other diffusive models can still be used to approximate the dynamics. In addition, we briefly discuss how variation observed in different curves, calibrated from different time series, can provide information about "hidden" conformational degrees of freedom not explicitly included in the model, and how clustering these curves can help one learn about the effective underlying free energy surface governing the dynamics of the atomistic system. The methods presented are applied to simulations modeling forced time-dependent transport of potassium ions through a gramicidin **A** channel, but have applicability to other forced (and unforced) systems.

August 2008

**An Efficient Algorithm for Compressed MR Imaging using Total Variation and Wavelets**

Shiqian Ma, Wotao Yin, Yin Zhang, & Amit Chakraborty

Compressed sensing, an emerging multidisciplinary field involving mathematics, probability, optimization, and signal processing, focuses on reconstructing an unknown signal from a very limited number of samples. Because information such as boundaries of organs is very sparse in most MR images, compressed sensing makes it possible to reconstruct the same MR image from a very limited set of measurements, significantly reducing the MRI scan duration. To do so, however, one has to solve the difficult problem of minimizing nonsmooth functions on large data sets. To handle this, we propose an efficient algorithm that jointly minimizes the *L*1 norm, total variation, and a least squares measure, one of the most powerful models for compressive MR imaging. Our algorithm is based upon an iterative operator-splitting framework. The calculations are accelerated by continuation and take advantage of fast wavelet and Fourier transforms, enabling our code to process MR images from real-life applications. We show that faithful MR images can be reconstructed from a subset that represents a mere 20 percent of the complete set of measurements.

August 2008

**Multi-scale Behavior in Chemical Reaction Systems: Modeling, Applications, and Results**

Jesse Hosea Turner III

Four major approaches model the time dependent behavior of chemical reaction systems: ordinary differential equations (ODE's), the tau-leap algorithm, stochastic differential equations (SDE's), and Gillespie's stochastic simulation algorithm (SSA). ODE's are the fastest of these to simulate, but are often inaccurate for systems with slow rates and molecular species present in small numbers. Under ideal conditions, the SSA is exact, but computationally inefficient. Unfortunately, many reaction systems exhibit characteristics not well captured individually by any of these methods. Therefore, hybrid models incorporating aspects from all four must be employed. The aim is to construct an approach that is close in accuracy to the SSA, useful for a wide range of reaction system examples, and computationally efficient.

The Adaptive Multi-scale Simulation Algorithm (AMSA) uses the SSA for slow reactions, SDE's for medium-speed reactions, ODE's for fast reactions, and the tau-leap algorithm for non-slow reactions involving species small in number. This article introduces AMSA and applies it to examples of reaction systems involving genetic regulation. A thorough review of existing reaction simulation algorithms is included. The computational performance and accuracy of AMSA's molecular distributions are compared to those of the SSA, which is used as the gold standard of accuracy.
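For reference, the exact SSA component that AMSA applies to slow reactions can be sketched on a simple birth-death system (our own illustration, not the thesis code):

```python
import numpy as np

def ssa(x0, rates, stoich, propensity, t_end, rng):
    """Gillespie's SSA: the waiting time to the next reaction is
    exponential with rate a0 = sum of the propensities, the reaction
    index is drawn with probability a_j/a0, then the stoichiometric
    update is applied to the state."""
    t = 0.0
    x = np.array(x0, dtype=float)
    path = [(t, x[0])]
    while t < t_end:
        a = propensity(x, rates)
        a0 = a.sum()
        if a0 == 0.0:
            break                       # no reaction can fire
        t += rng.exponential(1.0 / a0)
        j = rng.choice(len(a), p=a / a0)
        x = x + stoich[j]
        path.append((t, x[0]))
    return path

# Birth-death toy system: 0 -> X at rate k1, X -> 0 at rate k2*x;
# the stationary distribution is Poisson with mean k1/k2 = 10
rng = np.random.default_rng(1)
stoich = np.array([[1], [-1]])
prop = lambda x, k: np.array([k[0], k[1] * x[0]])
path = ssa([0], (10.0, 1.0), stoich, prop, 200.0, rng)
```

Its exactness comes at the cost of simulating every reaction event individually, which is why AMSA reserves it for the slow reactions only.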

The use of supercomputers can generate much larger data sets than serial processors in roughly the same amount of computational time. Therefore, multi-processor machines are also employed to assess the accuracy of AMSA simulations.

August 2008

**Program Analysis and Transformation in Mathematical Programming**

Joseph G. Young

Over the years, mathematical models have become increasingly complex. Rarely can we accurately model a process using only linear or quadratic functions. Instead, we must employ complicated routines written in some programming language. At the same time, most algorithms rely on the ability to exploit structural features within a model. Thus, our ability to compute with a model directly relates to our ability to analyze it.

Mathematical programs exemplify these difficult modeling issues. Our desire to accurately model a process is mediated by our ability to solve the resulting problem. Nonetheless, many problems contain hidden structural features that, when identified, allow us to transform the problem into a more computable form. Thus, we must develop methods that not only recognize these hidden features, but exploit them by transforming one problem formulation into another.

We present a new domain specific language for mathematical programming. The goal of this language is to develop a system of techniques that allow us to automatically determine the structure of a problem then transform it into a more desirable form. Our technical contribution to this area includes the grammar, type system, and semantics of such a language. Then, we use these tools to develop a series of transformations that manipulate the mathematical model.

August 2008

**Independence Systems and Stable Set Relaxations**

Benjamin McClosky

Many fundamental combinatorial optimization problems involve the search for subsets of graph elements which satisfy some notion of independence. This thesis develops techniques for optimizing over a class of independence systems and focuses on systems having the vertex set of a finite graph as a ground set. The search for maximum stable sets in a graph offers a well-studied example of such a problem. More generally, for any integer k greater than or equal to one, the maximum co-k-plex problem fits into this framework as well. Co-k-plexes are defined as a relaxation of stable sets.

This thesis studies co-k-plexes from polyhedral, algorithmic, and enumerative perspectives. The polyhedral analysis explores the relationship between the stable set polytope and co-k-plex polyhedra. Results include generalizations of odd holes, webs, wheels, and the claw. Sufficient conditions for the integrality of some related linear systems and results on the composition of stable set polyhedra are also given. The algorithmic analysis involves the development of heuristic and exact algorithms for finding maximum k-plexes. This problem is closely related to the search for co-k-plexes. The final chapter includes results on the enumerative structure of co-k-plexes in certain graphs.

August 2008

**An Efficient TVL1 Algorithm for Deblurring Multichannel Images Corrupted by Impulsive Noise**

J. Yang, Y. Zhang, and W. Yin

We extend the alternating minimization algorithm recently proposed in [38, 39] to the case of recovering blurry multichannel (color) images corrupted by impulsive rather than Gaussian noise. The algorithm minimizes the sum of a multichannel extension of total variation (TV), either isotropic or anisotropic, and a data fidelity term measured in the L1-norm. We derive the algorithm by applying the well-known quadratic penalty function technique and prove attractive convergence properties including finite convergence for some variables and global q-linear convergence. Under periodic boundary conditions, the main computational requirements of the algorithm are fast Fourier transforms and a low-complexity Gaussian elimination procedure. Numerical results on images with different blurs and impulsive noise are presented to demonstrate the efficiency of the algorithm. In addition, it is numerically compared to an algorithm recently proposed in [20] that uses a linear program and an interior point method for recovering grayscale images.

August 2008

**On Theory of Compressive Sensing via L_1-Minimization: Simple Derivations and Extensions**

Yin Zhang

Compressive (or compressed) sensing (CS) is an emerging methodology in computational signal processing that has recently attracted intensive research activities. At present, the basic CS theory includes recoverability and stability: the former quantifies the central fact that a sparse signal of length n can be exactly recovered from far fewer than n measurements via L_1-minimization or other recovery techniques, while the latter specifies how stable a recovery technique is in the presence of measurement errors and inexact sparsity. So far, most analyses in CS rely heavily on a matrix property called the Restricted Isometry Property (RIP). In this paper, we present an alternative, non-RIP analysis for CS via L_1-minimization. Our purpose is three-fold: (a) to introduce an elementary treatment of the CS theory free of RIP and easily accessible to a broad audience; (b) to extend the current recoverability and stability results so that prior knowledge can be utilized to enhance recovery via L_1-minimization; and (c) to substantiate a property called uniform recoverability of L_1-minimization; that is, for almost all random measurement matrices recoverability is asymptotically identical. With the aid of two classic results, the non-RIP approach enables us to derive from scratch all basic results for the extended theory with short and simple derivations.
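The recoverability fact quantified above can be checked directly: L_1-minimization (basis pursuit) is a linear program under the standard splitting x = p - q. A small illustrative demo (not part of the paper's analysis; the instance sizes are arbitrary):

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit  min ||x||_1  s.t.  Ax = b  as a linear program:
# split x = p - q with p, q >= 0 and minimize sum(p) + sum(q)
rng = np.random.default_rng(3)
m, n = 20, 50
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[7, 21, 40]] = [1.0, -1.0, 2.0]
b = A @ x_true                      # 20 measurements of a 3-sparse signal

c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * n))
x = res.x[:n] - res.x[n:]           # exact recovery expected since k << m
```

For a Gaussian measurement matrix and sparsity this low relative to m, recovery is exact with overwhelming probability, which is the phenomenon the recoverability theory makes precise.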

__Key Words__: Compressive Sensing, L_1-minimization, non-RIP analysis, recoverability and stability, prior information, uniform recoverability.

July 2008

**One Can Hear the Composition of a String: Experiments with an Inverse Eigenvalue Problem**

Steven J. Cox, Mark Embree, & Jeffrey M. Hokanson

To what extent do the vibrations of a mechanical system reveal its composition? Despite innumerable applications and mathematical elegance, this question often slips through those cracks that separate courses in mechanics, differential equations, and linear algebra. We address this omission by detailing a classical finite-dimensional example: the use of frequencies of vibration to recover positions and masses of beads vibrating on a string. First we derive the equations of motion, then compare the eigenvalues of the resulting linearized model against vibration data measured from our laboratory's monochord. More challenging is the recovery of masses and positions of the beads from spectral data, a problem elegantly solved, through application of continued fractions, by Mark Krein. After presenting Krein's algorithm in a manner suitable for advanced undergraduates, we confirm its efficacy through physical experiment. We encourage readers to conduct their own explorations using data sets we provide on the web.
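Before Krein's inverse algorithm, the forward direction is a small generalized eigenproblem. A sketch of the frequency computation for beads on a massless taut string (our own illustration; conventions such as fixed ends and unit tension are assumptions):

```python
import numpy as np

def bead_frequencies(masses, positions, tension, length):
    """Natural frequencies of beads on a massless taut string with fixed
    ends: assemble the tridiagonal stiffness K from the segment lengths
    and solve the symmetrized eigenproblem K q = w^2 M q."""
    pts = np.concatenate(([0.0], np.asarray(positions, float), [length]))
    h = np.diff(pts)                         # segment lengths
    nb = len(masses)
    K = np.zeros((nb, nb))
    for i in range(nb):
        K[i, i] = tension * (1.0 / h[i] + 1.0 / h[i + 1])
        if i + 1 < nb:
            K[i, i + 1] = K[i + 1, i] = -tension / h[i + 1]
    s = np.diag(1.0 / np.sqrt(np.asarray(masses)))   # M^{-1/2}
    w2 = np.linalg.eigvalsh(s @ K @ s)               # ascending order
    return np.sqrt(w2)

# Three unit masses, equally spaced on a string of length 4 with unit
# tension: the squared frequencies are 2 - sqrt(2), 2, 2 + sqrt(2)
w = bead_frequencies([1.0, 1.0, 1.0], [1.0, 2.0, 3.0], 1.0, 4.0)
```

The inverse problem treated in the paper runs this map backwards: from the frequencies w back to the masses and positions, via Krein's continued fractions.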

__Key Words__: Beaded string, inverse eigenvalue problem, vibration, continued fractions.

Additional data sets are available at:

www.caam.rice.edu/~beads.

July 2008

**A Fast Algorithm for Edge-Preserving Variational Multichannel Image Restoration**

Junfeng Yang, Wotao Yin, Yin Zhang, & Yilun Wang

We generalize the alternating minimization algorithm recently proposed in [32] to efficiently solve a general, edge-preserving, variational model for recovering multichannel images degraded by within- and cross-channel blurs, as well as additive Gaussian noise. This general model allows the use of localized weights and higher-order derivatives in regularization, and includes a multichannel extension of total variation (MTV) regularization as a special case. In the MTV case, we show that the model can be derived from an extended half-quadratic transform of Geman and Yang [14]. For color images with three channels and when applied to the MTV model (either locally weighted or not), the per-iteration computational complexity of this algorithm is dominated by nine fast Fourier transforms. We establish strong convergence results for the algorithm including finite convergence for some variables and fast *q*-linear convergence for the others. Numerical results on various types of blurs are presented to demonstrate the performance of our algorithm compared to that of the MATLAB deblurring functions. We also present experimental results on regularization models using weighted MTV and higher-order derivatives to demonstrate improvements in image quality provided by these models over the plain MTV model.

__Key Words__: Half-quadratic, cross-channel, image deblurring, isotropic total variation, fast Fourier transform.

July 2008

**A Fast Algorithm for Large Scale l_{1}-Regularized Logistic Regression**

Jianing Shi, Wotao Yin, Stanley Osher, & Paul Sajda

*l*_{1}-regularized logistic regression, also known as sparse logistic regression, is widely used in machine learning, computer vision, data mining, bioinformatics and neural signal processing. The use of *l*_{1}-regularization endows the classifier with attractive properties, such as feature selection, robustness to noise, and, as a result, generality in the context of supervised learning. When a sparse logistic regression problem has large-scale data in high dimensions, it is computationally expensive to minimize the non-differentiable *l*_{1}-norm in the objective function. Motivated by recent work (Hale et al., 2008; Koh et al., 2007), we propose a novel hybrid algorithm based on combining two types of optimization iterations: one being very fast while the other being slower but more accurate. Called hybrid iterative shrinkage (HIS), the resulting algorithm is based completely on memory-efficient operations such as matrix-vector multiplications. Furthermore, we show that various optimization techniques, including line search and continuation, can significantly accelerate convergence. The algorithm has global convergence at a geometric rate (a Q-linear rate in optimization terminology). We present a numerical comparison with several existing algorithms, using benchmark data from the UCI machine learning repository, and show that our algorithm is the most computationally efficient without loss of accuracy.

__Key Words__: Logistic regression, *l*_{1} regularization, fixed point continuation, supervised learning, large scale.

July 2008

**Fast Linearized Bregman Iteration for Compressive Sensing and Sparse Denoising**

Stanley Osher, Yu Mao, Bin Dong, & Wotao Yin

We propose and analyze an extremely fast, efficient and simple method for solving the problem:

min { ||*u*||_{1} : *Au* = *f* }

This method was first described in [1], with more details in [2] and rigorous theory given in [3]. The motivation was compressive sensing, which now has a vast and exciting history that seems to have started with Candès et al. [4] and Donoho [5]. See [2], [3] for a large set of references. Our method introduces an improvement called "kicking" of the very efficient method of [1], [2] and also applies it to the problem of denoising of undersampled signals. The use of Bregman iteration for denoising of images began in [6] and led to improved results for total variation based methods. Here we apply it to denoise signals, especially essentially sparse signals, which might even be undersampled.
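The method of [1], [2] referred to here is the linearized Bregman iteration; a plain sketch without the "kicking" acceleration introduced in this paper (instance and parameters are illustrative):

```python
import numpy as np

def linearized_bregman(A, f, mu, delta, iters=8000):
    """Plain linearized Bregman iteration for min ||u||_1 : Au = f
    (the 'kicking' speed-up of the paper is omitted here):
        v <- v + A^T (f - A u),   u <- delta * shrink(v, mu)."""
    shrink = lambda y, t: np.sign(y) * np.maximum(np.abs(y) - t, 0.0)
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        v += A.T @ (f - A @ u)
        u = delta * shrink(v, mu)
    return u

# Illustrative instance; delta*||A A^T|| < 1 is assumed for convergence
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
u_true = np.zeros(60)
u_true[[5, 25, 50]] = [1.0, -2.0, 1.5]
f = A @ u_true
u = linearized_bregman(A, f, mu=20.0, delta=0.15)
```

The stalling the plain loop exhibits while the dual variable v slowly accumulates past the threshold mu is exactly what the "kicking" modification is designed to skip over.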

June 2008

**An Efficient Class of Direct Search Surrogate Methods for Solving Expensive Optimization Problems with CPU-Time-Related Functions**

Mark A. Abramson, Thomas J. Asaki, J. E. Dennis, Jr., Raymond Magallanez, & Matthew J. Sottile

In this paper, we characterize a new class of computationally expensive optimization problems and introduce an approach for solving them. In this class of problems, objective function values may be directly related to the computational time required to obtain them, so that, as the optimal solution is approached, the computational time required to evaluate the objective is significantly less than at points farther away from the solution. This is motivated by an application in which each objective function evaluation requires both a numerical fluid dynamics simulation and an image registration process, and the goal is to find the parameter values of a predetermined reference image by comparing the flow dynamics from the numerical simulation and the reference image through the image comparison process. In designing an approach to numerically solve the more general class of problems in an efficient way, we make use of surrogates based on CPU times of previously evaluated points, rather than their function values, all within the search step framework of mesh adaptive direct search algorithms. Because of the expected positive correlation between function values and their CPU times, a time cutoff parameter is added to the objective function evaluation to allow its termination during the comparison process if the computational time exceeds a specified threshold. The approach was tested using the NOMADm and DACE MATLAB software packages, and results are presented.
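The time-cutoff mechanism can be sketched as follows; `simulate` is a hypothetical callable yielding successively refined objective values, and returning +inf flags the point as no better than the incumbent (an illustration of the idea only, not the NOMADm implementation):

```python
import time

def evaluate_with_cutoff(simulate, x, t_max):
    """Evaluate an expensive objective at x, abandoning the evaluation
    (and returning +inf) once elapsed CPU time exceeds t_max seconds.
    Returns (value, elapsed_cpu_time)."""
    start = time.process_time()
    value = float('inf')
    for value in simulate(x):            # objective refined incrementally
        elapsed = time.process_time() - start
        if elapsed > t_max:
            return float('inf'), elapsed # cutoff: stop paying for a bad point
    return value, time.process_time() - start
```

Because points near the optimum evaluate quickly, the recorded CPU times themselves can serve as the surrogate data in the search step.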

June 2008 (Revised April 2011)

**Numerical Solution of Implicitly Constrained Optimization Problems**

Matthias Heinkenschloss

Many applications require the minimization of a smooth function *f: R^{n} → R* whose evaluation requires the solution of a system of nonlinear equations. This system represents a numerical simulation that must be run to evaluate *f*.

In theory *f* can be minimized using the steepest descent method or Newton-type methods for unconstrained minimization. However, for the practical application of derivative-based methods to the minimization of *f*, one has to deal with many interesting issues that arise from the presence of the system of nonlinear equations that must be solved to evaluate *f*.

This article studies some of these issues, ranging from sensitivity and adjoint techniques for derivative computation to implementation issues in Newton-type methods. A discretized optimal control problem governed by the unsteady Burgers equation is used to illustrate the ideas.
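The adjoint approach to computing the gradient of such an implicitly defined *f* can be sketched on a linear model problem (an illustrative example with state equation Ay = Bu and a tracking-type objective; this is not the Burgers problem from the article):

```python
import numpy as np

def f_and_grad(u, A, B, y_d, alpha):
    """f(u) = 0.5*||y(u) - y_d||^2 + 0.5*alpha*||u||^2, where A y = B u.
    The adjoint method needs one extra linear solve for the gradient,
    regardless of the number of control variables."""
    y = np.linalg.solve(A, B @ u)             # run the 'simulation'
    val = 0.5 * np.sum((y - y_d) ** 2) + 0.5 * alpha * np.sum(u ** 2)
    lam = np.linalg.solve(A.T, -(y - y_d))    # adjoint solve: A^T lam = -(y - y_d)
    grad = alpha * u - B.T @ lam              # gradient assembled from the adjoint
    return val, grad
```

A sensitivity approach would instead solve one linearized state equation per control variable, which is the trade-off the article discusses.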

__Key Words__: Unconstrained minimization, implicit constraints, adjoints, sensitivities, Newton method, nonlinear programming, optimal control, Burgers equation.

May 2008

**A Quadratically Constrained Minimization Problem Arising from PDE of Monge-Ampère Type**

D.C. Sorensen & Roland Glowinski

This note develops theory and a solution technique for a quadratically constrained eigenvalue minimization problem. This class of problems arises in the numerical solution of fully-nonlinear boundary value problems of Monge-Ampère type. Though it is most important in the three-dimensional case, the solution method is directly applicable to systems of arbitrary dimension.

The focus here is on solving the minimization subproblem which is part of a method to numerically solve a Monge-Ampère type equation. These subproblems must be solved many times in this numerical solution technique and thus efficiency is of utmost importance.

A novelty of this minimization algorithm is that it is finite, with complexity O(N^3), except for the solution of a very simple rational equation in one variable. This equation is essentially the same for any dimension. This result is quite surprising given the nature of the minimization problem.

May 2008

**ORTHOMADS: A Deterministic MADS Instance with Orthogonal Directions**

Mark A. Abramson, Charles Audet, J.E. Dennis, Jr. & Sébastien Le Digabel

The purpose of this paper is to introduce a new way of choosing directions for the Mesh Adaptive Direct Search (MADS) class of algorithms. The advantages of this new OrthoMADS instantiation of MADS are that the polling directions are chosen deterministically, ensuring that the results of a given run are repeatable, and that they are orthogonal to each other, so that the convex cones of missed directions at each iteration are minimal in size.

The convergence results for OrthoMADS follow directly from those already published for MADS, and they hold deterministically, rather than with probability one, as for LTMADS, the first MADS instance. The initial numerical results are quite good for the smooth and nonsmooth, constrained and unconstrained problems considered here.
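The orthogonal-directions idea can be sketched with a Householder transform: the columns of H = I − 2vvᵀ (v a unit vector) are mutually orthogonal, and the columns of H together with those of −H form a maximal positive basis. (A sketch of the orthogonalization step only; OrthoMADS additionally derives the seed vector deterministically from a Halton sequence and scales the directions to be mesh-compatible integers.)

```python
import numpy as np

def orthogonal_poll_directions(q):
    """Build 2n polling directions from a nonzero vector q via the
    Householder transform H = I - 2 v v^T with v = q/||q||.
    Returns an n x 2n array whose columns are the directions [H, -H]."""
    v = q / np.linalg.norm(q)
    H = np.eye(len(q)) - 2.0 * np.outer(v, v)   # orthogonal, symmetric
    return np.hstack([H, -H])                   # maximal positive basis
```

Because the 2n directions come in opposite pairs of an orthogonal set, the cones of directions missed by the poll are as small as possible.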

__Key Words__: Mesh Adaptive Direct Search algorithms (MADS), deterministic, orthogonal directions, constrained optimization, nonlinear programming.

February 2008

**A Curvilinear Search Method for p-Harmonic Flows on Spheres**

Donald Goldfarb, Zaiwen Wen, & Wotao Yin

The problem of finding p-harmonic flows arises in a wide range of applications including micromagnetics, liquid crystal theory, directional diffusion, and chromaticity denoising. In this paper, we propose an innovative curvilinear search method for minimizing p-harmonic energies over spheres. Starting from a flow (map) on the unit sphere, our method searches along a curve that lies on the sphere in a manner similar to a standard inexact line search descent method. We show that our method is globally convergent if the step length satisfies the Armijo-Wolfe conditions. Computational tests are presented to demonstrate the efficiency of the proposed method and a variant of it that uses Barzilai-Borwein steps.
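A simplified sketch of one such curvilinear step for a single point constrained to the unit sphere (the paper treats maps and p-harmonic energies, and uses the full Armijo-Wolfe conditions; here only an Armijo backtracking on an illustrative curve is shown):

```python
import numpy as np

def curvilinear_armijo(x, fval, grad, f, tau0=1.0, c1=1e-4, factor=0.5, maxit=30):
    """Backtracking search along the sphere curve
        y(tau) = (x - tau * g_t) / ||x - tau * g_t||,
    where g_t is the gradient projected onto the tangent space at x,
    so every trial point stays exactly on the unit sphere."""
    g_t = grad - (grad @ x) * x                 # tangential component at x
    tau = tau0
    for _ in range(maxit):
        z = x - tau * g_t
        y = z / np.linalg.norm(z)               # point on the curve
        if f(y) <= fval - c1 * tau * (g_t @ g_t):  # Armijo decrease test
            return y, tau
        tau *= factor                           # backtrack
    return x, 0.0
```

The curve's derivative at tau = 0 is the projected steepest-descent direction, which is what makes the Armijo test above meaningful.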

__Key Words__: Energy minimization, p-harmonic maps, p-harmonic flows, finite difference, curvilinear search, global convergence, chromaticity denoising.

January 2008

**Iterative Reweighted Algorithms for Compressive Sensing**

Rick Chartrand & Wotao Yin

The theory of compressive sensing has shown that sparse signals can be reconstructed exactly from many fewer measurements than traditionally believed necessary. In [1], it was shown empirically that using *l*^{p} minimization with *p*<1 can do so with fewer measurements than with *p*=1. In this paper we consider the use of iteratively reweighted algorithms for computing local minima of the nonconvex problem. In particular, a certain regularization strategy is found to greatly improve the ability of a reweighted least-squares algorithm to recover sparse signals, with exact recovery being observed for signals that are much less sparse than required by an unregularized version (such as FOCUSS [2]). Improvements are also observed for the reweighted-*l*^{1} approach of [3].
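A minimal sketch of an ε-regularized iteratively reweighted least-squares iteration for min ||*x*||_{p}^{p} subject to *Ax* = *b* (the geometric schedule for ε and the parameter values are illustrative placeholders, not the exact strategy of the paper):

```python
import numpy as np

def irls_lp(A, b, p=0.5, iters=100, eps0=1.0):
    """Epsilon-regularized IRLS for min sum_i (x_i^2 + eps)^(p/2) s.t. Ax = b.
    Each step solves min sum_i w_i x_i^2 s.t. Ax = b in closed form:
        x = W^{-1} A^T (A W^{-1} A^T)^{-1} b,  W = diag(w)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]    # least-squares initialization
    eps = eps0
    for _ in range(iters):
        w = (x ** 2 + eps) ** (p / 2.0 - 1.0)   # IRLS weights
        Q = (1.0 / w)[:, None] * A.T            # W^{-1} A^T
        x = Q @ np.linalg.solve(A @ Q, b)       # weighted minimum-norm solution
        eps = max(eps * 0.7, 1e-12)             # continuation: shrink eps
    return x
```

Keeping ε > 0 regularizes the nonconvex weights away from singularities, which is the mechanism behind the improved recovery reported in the abstract.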

__Key Words__: Compressive sensing, signal reconstruction, nonconvex optimization, iteratively reweighted least squares, *l*^{1} minimization.

January 2008