Center for Nonlinear Studies
Thursday, August 01, 2019
12:00 PM - 1:00 PM
CNLS Conference Room (TA-3, Bldg 1690)

Seminar

Tutorial on Understanding Error Analysis for Quasi-Monte Carlo Methods

Fred Hickernell
Illinois Institute of Technology

Monte Carlo methods approximate integrals by sample averages of integrand values. Quasi-Monte Carlo methods can provide accurate approximations to high-dimensional integrals in terms of (weighted) sample means of the integrand values at well-chosen sites. Such integrals arise in a variety of applications, including finance, statistical physics, sensitivity analysis, and Bayesian inference. The cubature error depends on properties of the integrand and on how well the sampling approximates the target probability measure.
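
To make the contrast concrete, here is a minimal sketch (not from the talk) comparing plain Monte Carlo with quasi-Monte Carlo on a toy integrand, using NumPy and SciPy's scipy.stats.qmc module; the integrand, dimension, and sample size are illustrative choices.

    import numpy as np
    from scipy.stats import qmc

    def integrand(x):
        # Test integrand on the unit cube [0, 1]^d with a known exact value:
        # the integral of prod_j cos(x_j) equals (sin 1)^d.
        return np.prod(np.cos(x), axis=1)

    d, n = 6, 2**12

    # Plain Monte Carlo: i.i.d. uniform points; error decays like O(n^{-1/2}).
    rng = np.random.default_rng(0)
    mc_est = integrand(rng.random((n, d))).mean()

    # Quasi-Monte Carlo: scrambled Sobol' points fill the cube more evenly;
    # for sufficiently smooth integrands the error decays nearly like O(n^{-1}).
    sobol = qmc.Sobol(d=d, scramble=True, seed=0)
    qmc_est = integrand(sobol.random(n)).mean()

    exact = np.sin(1.0) ** d
    print(f"exact {exact:.6f}   MC {mc_est:.6f}   QMC {qmc_est:.6f}")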

This tutorial highlights some of the major approaches to analyzing and reducing the cubature error that the conference presentations will develop in great detail. There are deterministic, randomized, and Bayesian ways of viewing cubature error, each making different assumptions about the integrand and the sampling measure. When the integrands lie in a Hilbert space or can be modeled as instances of a Gaussian process, the error analysis is particularly elegant.
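
A minimal sketch of the randomized viewpoint, assuming the same SciPy tools as above: independently scrambled copies of one Sobol' point set give i.i.d. unbiased estimates, and their spread furnishes a data-driven standard error. The helper name rqmc_mean_and_error and all parameters are illustrative, not from the talk.

    import numpy as np
    from scipy.stats import qmc

    def rqmc_mean_and_error(f, d, n, replicates=16, seed=0):
        """Pooled randomized-QMC estimate and a replicate-based standard error."""
        estimates = []
        for r in range(replicates):
            # Each replicate uses an independently scrambled Sobol' point set.
            pts = qmc.Sobol(d=d, scramble=True, seed=seed + r).random(n)
            estimates.append(f(pts).mean())
        estimates = np.asarray(estimates)
        # Independent randomizations give i.i.d. unbiased estimates, so their
        # spread yields a data-driven standard error for the pooled mean.
        return estimates.mean(), estimates.std(ddof=1) / np.sqrt(replicates)

    f = lambda x: np.prod(np.cos(x), axis=1)  # same test integrand as above
    est, err = rqmc_mean_and_error(f, d=6, n=2**10)
    print(f"estimate {est:.6f} +/- {err:.1e}")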

Some strategies for increasing efficiency are described. We also show how the dimension of the integration domain may affect the error analysis. In some cases this leads to a curse of dimensionality, while in other cases the computational cost does not explode as the dimension becomes arbitrarily large. Finally, we discuss progress in developing data-based error bounds, which can be used to determine the sample size required to meet a desired error tolerance.
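
As a rough illustration of how a data-based error estimate can drive the choice of sample size (a heuristic sketch only; the guaranteed stopping rules discussed in the talk are more refined), the loop below reuses rqmc_mean_and_error and the test integrand f from the previous sketch.

    def integrate_to_tolerance(f, d, tol, n0=2**8, n_max=2**20):
        # Heuristic sketch (illustrative, not the talk's guaranteed algorithm):
        # double the sample size until the replicate-based error estimate
        # falls below the requested tolerance.
        n = n0
        while n <= n_max:
            est, err = rqmc_mean_and_error(f, d, n)
            if err <= tol:
                return est, err, n
            n *= 2
        raise RuntimeError("tolerance not met within the sample budget")

    est, err, n = integrate_to_tolerance(f, d=6, tol=1e-5)
    print(f"n = {n}: estimate {est:.6f} +/- {err:.1e}")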

Slides from a previous presentation of this talk are available at http://mcqmc2016.stanford.edu/Hickernell-Fred.pdf

Host: James Hyman