Center for Nonlinear Studies
Wednesday, November 16, 2016
3:00 PM - 4:00 PM
CNLS Conference Room (TA-3, Bldg 1690)

Seminar

Bayesian computation: Inverse problems, data assimilation, and machine learning

Kody Law
Oak Ridge National Laboratory

For half a century computational scientists have been numerically simulating complex systems. Uncertainty has recently become a requisite consideration in complex applications that have classically been treated deterministically, and this has led to growing interest in uncertainty quantification (UQ). Another recent trend is the explosion of available data. Bayesian inference provides a principled and well-defined approach to the integration of data and UQ. Monte Carlo (MC) based computational methods, such as Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) samplers, require repeated evaluation of expensive models, in principle over high-dimensional parameter spaces and/or large data sets. In this setting, standard algorithms quickly become intractable. Methods for identifying and exploiting concentration properties of the posterior with respect to the prior are becoming essential for solving these otherwise intractable Bayesian inference problems; such methods may be referred to as likelihood-informed (LI). Additionally, the methods should scale well as the dimension of the underlying parameter space tends to infinity, for example as a result of refinement of an underlying mesh. Such methods may be referred to as dimension-independent (DI), in that their convergence does not degenerate as the mesh is refined. The recently introduced DILI MCMC algorithms address these problems simultaneously.

Another major breakthrough in recent years is the extension of the multilevel Monte Carlo (MLMC) framework to some MC schemes for posterior exploration, so that approximation error can be optimally balanced with statistical sampling error; ultimately, the Bayesian inverse problem can be solved for the same asymptotic cost as solving the deterministic forward problem. The MLSMC sampler is a recent example of such a method for static inverse problems. MLMC data assimilation methods, which combine dynamical systems with data in an online fashion, have also been developed; examples are ML particle filters and ensemble Kalman filters.

This talk will survey current and future work by the author on Bayesian computation for inverse problems and data assimilation. Particular emphasis will be on the development of DILI MCMC and multilevel Monte Carlo algorithms for inference, which are expected to become prevalent in the age of increasingly parallel emerging architectures, where resilience and reduced data movement will be crucial algorithmic considerations. Time permitting, work in progress on DILI-MLSMC samplers will be considered, as will emerging connections with machine learning and the treatment of big data, possibly in the absence of complex models.
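To make the dimension-independence idea concrete, the following is a minimal sketch of a preconditioned Crank-Nicolson (pCN) sampler, a standard dimension-robust MCMC method for Gaussian priors, applied to a toy linear inverse problem. The forward model G, noise level sigma, and step size beta below are illustrative assumptions, not details from the talk.

```python
# Sketch: pCN MCMC for a toy Bayesian inverse problem y = G(u) + noise,
# with a N(0, I) prior on u. All specifics (G, sigma, beta) are assumed
# for illustration and are not taken from the abstract.
import numpy as np

rng = np.random.default_rng(0)

d = 50                                 # parameter dimension (e.g. mesh nodes)
u_true = rng.standard_normal(d)        # "truth" drawn from the prior
G = lambda u: np.cumsum(u) / d         # toy forward model (discretized integral)
sigma = 0.1                            # observational noise level
y = G(u_true) + sigma * rng.standard_normal(d)

def phi(u):
    """Negative log-likelihood (data misfit)."""
    r = y - G(u)
    return 0.5 * np.dot(r, r) / sigma**2

beta = 0.2                             # pCN step size in (0, 1]
u = np.zeros(d)                        # start from the prior mean
samples = []
for _ in range(5000):
    xi = rng.standard_normal(d)        # fresh draw from the N(0, I) prior
    v = np.sqrt(1.0 - beta**2) * u + beta * xi   # pCN proposal (prior-invariant)
    # Accept/reject using only the likelihood ratio; the prior terms cancel
    # because the proposal preserves the prior measure.
    if np.log(rng.uniform()) < phi(u) - phi(v):
        u = v
    samples.append(u.copy())

post_mean = np.mean(samples[1000:], axis=0)      # discard burn-in
```

Because the proposal preserves the prior, the acceptance probability depends only on the data misfit, so the sampler's efficiency does not collapse as d grows under mesh refinement; this is the property the abstract calls dimension-independent.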

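The multilevel balancing of discretization error against sampling error can likewise be sketched in isolation. Below is a toy MLMC estimator, in the spirit of Giles' original construction for SDEs; it is not the MLSMC sampler of the talk, and all parameters (a, b, the payoff g, the sample allocation N) are illustrative assumptions.

```python
# Sketch: multilevel Monte Carlo for E[g(X_T)], X solving the toy SDE
# dX = a X dt + b X dW. The telescoping identity
#   E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}]
# is estimated term by term, with most samples on the cheap coarse levels.
import numpy as np

rng = np.random.default_rng(1)
a, b, X0, T = 0.05, 0.2, 1.0, 1.0
g = lambda x: np.maximum(x - 1.0, 0.0)      # toy payoff functional

def coupled_payoffs(level, n_samples):
    """g(X) on the fine grid and on the coarse grid, same Brownian path."""
    nf = 2 ** level
    dt = T / nf
    dW = np.sqrt(dt) * rng.standard_normal((n_samples, nf))
    Xf = np.full(n_samples, X0)
    for i in range(nf):                      # Euler-Maruyama, fine grid
        Xf = Xf + a * Xf * dt + b * Xf * dW[:, i]
    if level == 0:
        return g(Xf), np.zeros(n_samples)
    Xc = np.full(n_samples, X0)
    dWc = dW[:, 0::2] + dW[:, 1::2]          # coarse increments from fine ones
    for i in range(nf // 2):                 # Euler-Maruyama, coarse grid
        Xc = Xc + a * Xc * (2 * dt) + b * Xc * dWc[:, i]
    return g(Xf), g(Xc)

L = 5
N = [20000 // 2 ** l + 100 for l in range(L + 1)]   # fewer samples per fine level
estimate = 0.0
for level, n in enumerate(N):
    pf, pc = coupled_payoffs(level, n)
    estimate += np.mean(pf - pc)             # one telescoping term per level
print("MLMC estimate of E[g(X_T)]:", estimate)
```

Coupling the fine and coarse paths through the same Brownian increments makes each level correction small in variance, which is what allows the sample counts to decay with level and the overall cost to approach that of a single deterministic forward solve, as the abstract describes.
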
Host: Nathan Urban