The Rice Engineering Laboratory for Advanced Computational Science is a multidisciplinary center for the development of scalable algorithms and software tools for scientific simulation and engineering design.

We host the Intel Parallel Computing Center at Rice, devoted to the development and hardware optimization of scalable nonlinear solvers. We are a center of development for the PETSc libraries, and co-created the PyLith code for simulation of crustal deformation.

Sponsored Workshops

PETSc 2016

The PETSc 2016 conference was the second in a series of workshops for the community using the PETSc libraries from Argonne National Laboratory (ANL). The Portable, Extensible Toolkit for Scientific Computation (PETSc) is an open-source software package designed for the solution of partial differential equations and used in diverse applications from surgery to environmental remediation. PETSc 3.0 is the most complete, flexible, powerful, and easy-to-use platform for solving the nonlinear algebraic equations arising from the discretization of partial differential equations. The PETSc 2015 meeting at ANL hosted over 100 participants for a three-day exploration of algorithm and software development practice, and the 2016 meeting had over 90 registrants.
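
As a minimal illustration (not drawn from the meeting program), this is roughly how an application drives PETSc's nonlinear solver through the SNES object: supply a residual callback, let the Jacobian be approximated by finite differences, and configure everything else from the command line. The toy two-equation system here simply stands in for a discretized PDE.

    /* Minimal serial SNES sketch: solve F(x) = 0 for the toy system
     *   x0^2 + x0*x1 - 3 = 0
     *   x0*x1 + x1^2 - 6 = 0
     * whose solution near the initial guess is (1, 2). */
    #include <petscsnes.h>

    static PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx)
    {
      const PetscScalar *xx;
      PetscScalar       *ff;
      PetscErrorCode     ierr;

      PetscFunctionBeginUser;
      ierr = VecGetArrayRead(x, &xx);CHKERRQ(ierr);
      ierr = VecGetArray(f, &ff);CHKERRQ(ierr);
      ff[0] = xx[0]*xx[0] + xx[0]*xx[1] - 3.0;
      ff[1] = xx[0]*xx[1] + xx[1]*xx[1] - 6.0;
      ierr = VecRestoreArrayRead(x, &xx);CHKERRQ(ierr);
      ierr = VecRestoreArray(f, &ff);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

    int main(int argc, char **argv)
    {
      SNES           snes;
      Vec            x, r;
      Mat            J;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
      ierr = VecCreateSeq(PETSC_COMM_SELF, 2, &x);CHKERRQ(ierr);
      ierr = VecDuplicate(x, &r);CHKERRQ(ierr);
      ierr = VecSet(x, 1.0);CHKERRQ(ierr);                       /* initial guess */
      ierr = MatCreateSeqDense(PETSC_COMM_SELF, 2, 2, NULL, &J);CHKERRQ(ierr);
      ierr = SNESCreate(PETSC_COMM_SELF, &snes);CHKERRQ(ierr);
      ierr = SNESSetFunction(snes, r, FormFunction, NULL);CHKERRQ(ierr);
      /* Finite-difference Jacobian keeps the sketch short */
      ierr = SNESSetJacobian(snes, J, J, SNESComputeJacobianDefault, NULL);CHKERRQ(ierr);
      ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);             /* e.g. -snes_type newtonls -snes_monitor */
      ierr = SNESSolve(snes, NULL, x);CHKERRQ(ierr);
      ierr = VecView(x, PETSC_VIEWER_STDOUT_SELF);CHKERRQ(ierr);
      ierr = SNESDestroy(&snes);CHKERRQ(ierr);
      ierr = MatDestroy(&J);CHKERRQ(ierr);
      ierr = VecDestroy(&x);CHKERRQ(ierr);
      ierr = VecDestroy(&r);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }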

We were fortunate to have very generous sponsors for this meeting. Thirteen student travel awards, including several international trips, were sponsored by Intel (through the Rice IPCC Center), Google, and Tech-X, and the conference facilities were sponsored by the Vienna Scientific Cluster. Richard Mills of Intel presented a keynote lecture on the new Knights Landing (KNL) architecture for the Xeon Phi line of processors and a strategy for PETSc performance optimization.

SIAM PP 2016 Minisymposium

Our minisymposium, To Thread or Not To Thread, sought to clarify the tradeoffs involved and give users and developers enough information to make informed decisions about programming models and library structures for emerging architectures, in particular the large, hybrid machines now being built by the Department of Energy. These machines will have less local memory per core, more cores per socket, multiple types of user-addressable memory, increased memory latencies (especially to global memory), and expensive synchronization. Two popular strategies, MPI+threads and flat MPI, need to be contrasted in terms of performance metrics, code complexity and maintenance burdens, and interoperability between libraries, applications, and operating systems.
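
As a minimal illustration of where the two models already diverge (OpenMP is assumed here for the on-node threading; any threading runtime raises the same question), a hybrid code must request, and check, a threading support level from MPI, whereas a flat-MPI code simply calls MPI_Init and places additional ranks on each node:

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
      int provided, rank;

      /* Hybrid model: a few MPI ranks per node, threads inside each.
       * MPI_THREAD_FUNNELED means only the master thread makes MPI calls;
       * MPI_THREAD_MULTIPLE (fully concurrent MPI calls) is typically the
       * most expensive level for an MPI implementation to provide. */
      MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
      if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "MPI library cannot support funneled threading\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
      }
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      #pragma omp parallel
      {
        /* On-node work is shared among threads; only rank-level data is
         * exchanged through MPI.  In the flat-MPI model this region would
         * instead be additional MPI ranks sharing the node. */
        printf("rank %d, thread %d of %d\n", rank,
               omp_get_thread_num(), omp_get_num_threads());
      }

      MPI_Finalize();
      return 0;
    }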

Team Members

Prof. Matthew G. Knepley
Group Lead
Research Areas
Computational Science
Parallel Algorithms
Limited Memory Algorithms
Bioelectrostatic Modeling
Computational Geophysics
Computational Biophysics
Unstructured Meshes
Scientific Software
PETSc Development
Dr. Tobin Isaac
Postdoctoral Scholar
Research Areas
Computational Science
Parallel Algorithms
Computational Geophysics
Structured AMR
Scientific Software
p4est Development
PETSc Development
Dr. Justin Chang
Postdoctoral Scholar
Research Areas
Computational Science
Parallel Algorithms
Subsurface Transport
Performance Modeling
Software Benchmarking
Scientific Software
Thomas Klotz
Graduate Student
Research Areas
Computational Science
Bioelectrostatic Modeling
Computational Biophysics
High Precision Quadrature
Maurice Fabien
Graduate Student
Research Areas
PDE Simulation
High Performance Computing
Spectral Methods
Discontinuous Galerkin Methods
Multigrid Algorithms
Jeremy Tillay
Graduate Student
Research Areas
Computational Science
Limited Memory Algorithms
Segmental Refinement Multigrid
Computational Analysis
Logan Smith
Graduate Student
Research Areas
Scalable PDE solvers
Mantle dynamics
PDE constrained optimization
Jonas Actor
Graduate Student
Research Areas
Dimension Reduction
Function representation
Functional analysis
Kirstie Haynie
Summer Graduate Student
Research Areas
Geophysical modeling
Computational Geodynamics
Geophysical data assimilation

Collaborators

Eric Buras
Associate Research Engineer at Aptima, Inc.
Research Areas
Data Science
Graph Laplacian Inversion

Prof. Jaydeep P. Bardhan
Northeastern University
Dr. Barry F. Smith
Argonne National Laboratory
Dr. Mark Adams
Lawrence Berkeley National Laboratory
Prof. Jed Brown
University of Colorado Boulder
Dr. Dave A. May
ETH Zürich
Dr. Brad Aagaard
USGS, Menlo Park
Dr. Charles Williams
GNS Science, New Zealand
Prof. Boyce Griffith
University of North Carolina, Chapel Hill
Prof. Richard F. Katz
University of Oxford
Prof. Gerard Gorman
Imperial College London
Dr. Michael Lange
Imperial College London
Prof. Patrick E. Farrell
University of Oxford
Prof. Margarete Jadamec
University of Houston
Prof. L. Ridgway Scott
University of Chicago
Dr. Karl Rupp
Rupp, Ltd.
Prof. Louis Moresi
University of Melbourne
Dr. Lawrence Mitchell
Imperial College London
Dr. Amir Molavi
Northeastern University
Dr. Nicolas Barral
Imperial College London

Projects

The Rice Intel Parallel Computing Center (IPCC) focuses on the optimization of core operations for scalable solvers. Barry Smith has implemented a PETSc communication API that automatically segregates on-node and off-node traffic in order to avoid unnecessary MPI overhead. Andy Terrel, Karl Rupp, and Matt Knepley have developed algorithms for vectorization of low-order FEM on low-memory-per-core architectures such as Intel KNL and Nvidia GPUs. Jed Brown, Barry Smith, and Dave May have demonstrated exceptional performance for \(Q_2\) and higher-order elements. Mark Adams, Toby Isaac, and Matt Knepley are currently optimizing Full Approximation Scheme (FAS) multigrid, applied to PIC methods as part of the SITAR effort.
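
The on-node/off-node distinction can be illustrated, purely as a sketch of the concept rather than of the interface added to PETSc, with MPI-3 shared-memory communicators: splitting the world communicator by node lets a library treat intra-node neighbors (reachable through shared memory) differently from inter-node ones.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
      MPI_Comm nodecomm;
      int      worldrank, noderank, nodesize;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &worldrank);

      /* All ranks that can share memory (i.e. live on the same node)
       * end up in the same nodecomm. */
      MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                          MPI_INFO_NULL, &nodecomm);
      MPI_Comm_rank(nodecomm, &noderank);
      MPI_Comm_size(nodecomm, &nodesize);

      printf("world rank %d is node-local rank %d of %d\n",
             worldrank, noderank, nodesize);

      MPI_Comm_free(&nodecomm);
      MPI_Finalize();
      return 0;
    }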

The RELACS Team is focused on developing nonlinear solvers for complex, multiphysics simulations, using both Partial Differential Equations (PDE) and Boundary Integral Equations (BIE) in collaboration with Jay Bardhan. We are also developing solvers for Linear Complementarity Problems (LCP) and engineering design problems. We focus on true multilevel formulations based on Full Approximation Scheme (FAS) multigrid. Theoretically, we are interested in characterizing the convergence of Newton's Method over multiple meshes, as well as of the composed nonlinear iterations arising from Nonlinear Preconditioning (NPC).
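
As a concrete sketch, assuming an already created SNES with its residual and Jacobian callbacks set, this is how a FAS solver with a composed nonlinear preconditioner is typically selected through the SNES interface:

    #include <petscsnes.h>

    /* Configure an existing SNES to use FAS multigrid with nonlinear
     * Richardson as its nonlinear preconditioner.  In practice the SNES
     * also needs a DM attached so that FAS can build a grid hierarchy. */
    PetscErrorCode ConfigureFASWithNPC(SNES snes)
    {
      SNES           npc;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = SNESSetType(snes, SNESFAS);CHKERRQ(ierr);        /* Full Approximation Scheme */
      ierr = SNESGetNPC(snes, &npc);CHKERRQ(ierr);            /* inner, composed nonlinear solver */
      ierr = SNESSetType(npc, SNESNRICHARDSON);CHKERRQ(ierr); /* nonlinear Richardson as NPC */
      ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);          /* allow run-time overrides */
      PetscFunctionReturn(0);
    }

The same composition can be selected at run time with options such as -snes_type fas -npc_snes_type nrichardson.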

The Solver Interfaces for Tree Adaptive Refinement (SITAR) project aims to develop scalable and robust nonlinear solvers for multiphysics problems discretized on large, parallel structured AMR meshes. Toby Isaac, a core developer of p4est, has integrated AMR functionality into PETSc. Toby and Matt Knepley have implemented Full Approximation Scheme (FAS) multigrid for nonlinear problems in the AMR setting. Mark Adams and Dave May are adding a Particle-in-Cell (PIC) infrastructure to SITAR, targeting MHD plasma and Earth mantle dynamics simulations.
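
A minimal sketch of creating such an adaptively refined mesh through the forest DM, assuming a PETSc build configured with p4est (the topology string and refinement level below are illustrative placeholders):

    #include <petscdmforest.h>

    /* Create a 2D forest-of-quadtrees mesh managed by p4est through the
     * PETSc DM interface; DMP8EST is the 3D (octree) analogue. */
    PetscErrorCode CreateForestMesh(MPI_Comm comm, DM *forest)
    {
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = DMCreate(comm, forest);CHKERRQ(ierr);
      ierr = DMSetType(*forest, DMP4EST);CHKERRQ(ierr);
      ierr = DMForestSetTopology(*forest, "brick");CHKERRQ(ierr);    /* base macro-mesh */
      ierr = DMForestSetInitialRefinement(*forest, 3);CHKERRQ(ierr); /* uniform starting level */
      ierr = DMSetFromOptions(*forest);CHKERRQ(ierr);
      ierr = DMSetUp(*forest);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }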

With the goal of reducing the dimension of difficult PDE problems, we are developing a new representation of multivariate continuous functions in terms of compositions of univariate functions, based upon the classical Kolmogorov representation. We are currently pursuing stronger regularity guarantees on the univariate functions, and also efficient algorithms for manipulating the functions inside iterative methods.
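
For reference, the classical Kolmogorov superposition theorem (stated here in its standard form, with our notation) says that any continuous function \(f\) on \([0,1]^n\) can be written as

\[ f(x_1,\ldots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right), \]

where the outer functions \(\Phi_q\) and inner functions \(\phi_{q,p}\) are continuous and univariate, and the inner functions can be chosen independently of \(f\). The regularity and computability of these univariate functions are exactly the issues described above.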

With Prof. Jaydeep Bardhan, we have developed a nonlinear continuum molecular electrostatics model capable of quantitative accuracy for a number of phenomena that are flatly wrong in current models, including charge-hydration asymmetry and calculation of entropies. To illustrate the importance of treating both steric asymmetry and the static molecular potential separately, we have recently computed the charging free energies (and profiles) of individual atoms in small molecules. The static potential derived from MD simulations in TIP3P water is positive, meaning that for small positive charges the charging free energy is actually positive (unfavorable). Traditional dielectric models are completely unable to reproduce such positive charging free energies, regardless of atomic charge or radius. Our model, however, reproduces these energies with quantitative accuracy, and we note that they represent unfavorable electrostatic contributions to solvation by hydrophobic groups. Traditional implicit-solvent models predict that hydrophobic groups contribute nothing; for instance, nonpolar solvation models are often parameterized under the assumption that alkanes have electrostatic solvation free energies equal to zero. We have found this not to be the case, and because our model has few parameters beyond the atomic radii, we have been able to parameterize a complete and consistent implicit-solvent model, that is, to parameterize both the electrostatic and nonpolar terms simultaneously. We found this model to be remarkably accurate, comparable to explicit-solvent simulations for solvation free energies and water-octanol transfer free energies.
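
A schematic calculation (our notation here, not the model's full parameterization) shows why the sign of the static potential matters. In a purely linear-response dielectric model, the charging free energy of a single atomic charge \(q\) is

\[ \Delta G_{\mathrm{el}}(q) = \tfrac{1}{2} R\, q^2, \qquad R < 0, \]

which is favorable (negative) for every \(q\), whatever the radius. Including a static solvent potential \(\phi_s\) adds a linear term,

\[ \Delta G_{\mathrm{el}}(q) = \tfrac{1}{2} R\, q^2 + \phi_s\, q, \]

so that for \(\phi_s > 0\) the charging free energy is positive (unfavorable) whenever \(0 < q < -2\phi_s/R\), consistent with the behavior of small positive charges described above.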

In collaboration with Richard Katz, we are working to scale solvers for magma dynamics simulations to large, parallel machines. The current formulation is an index-1 Differential-Algebraic Equation (DAE) in which the algebraic constraints are an elliptic PDE for mass and momentum conservation, coupled to the porosity through the permeability and viscosities. We discretize this using continuous Galerkin FEM, and our Full Approximation Scheme (FAS) solver has proven effective for the elliptic constraints. The advection of porosity is a purely hyperbolic equation coupled to the fluid velocity, which we discretize using FVM. The fully coupled nonlinear problem is solved at each explicit timestep.
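
Written schematically (our notation, suppressing the constitutive detail), the system has the structure

\[ 0 = g(\mathbf{v}, p;\, \phi), \qquad \frac{\partial \phi}{\partial t} + \mathbf{v}\cdot\nabla\phi = r(\phi, \mathbf{v}), \]

where the algebraic block \(g\) collects the elliptic mass and momentum constraints, with permeability and viscosities depending on the porosity \(\phi\), and the second equation is the hyperbolic transport of porosity, with \(r\) collecting the remaining coupling terms. The FAS solver addresses the first block, while the FVM discretization handles the second.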
