The design of high-performance systems often relies on a variety of computational tools to evaluate candidate designs. These tools frequently differ in fidelity: they vary both in the computational cost of evaluating a particular design and in the quality of the resulting performance estimate. The most accurate tools are often prohibitively expensive to evaluate many times, making many classical algorithms impractical. This talk explores methods that incorporate ideas from machine learning to build more effective algorithms. In particular, I will discuss explorations and extensions of Bayesian optimization in single- and multi-fidelity contexts, and the use of machine-learned control variates to reduce the error in high-dimensional uncertainty quantification problems.

Host: Aric Hagberg
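To give a flavor of the surrogate-based optimization the abstract mentions, the following is a minimal sketch of single-fidelity Bayesian optimization: a Gaussian-process surrogate built from a few expensive evaluations, with an expected-improvement rule selecting the next design to evaluate. The objective `f`, the RBF kernel, and all parameter values here are illustrative assumptions, not the speaker's method.

```python
# Minimal Bayesian optimization sketch (illustrative only).
import numpy as np
from scipy.stats import norm

def f(x):
    """Stand-in for an expensive performance evaluation."""
    return np.sin(3 * x) + 0.5 * x

def rbf_kernel(a, b, length=0.5):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    """Gaussian-process posterior mean and standard deviation at query points."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_query, x_train)
    K_inv = np.linalg.inv(K)
    mean = K_s @ K_inv @ y_train
    var = 1.0 - np.sum((K_s @ K_inv) * K_s, axis=1)
    return mean, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mean, std, best):
    """Expected improvement acquisition for minimization."""
    z = (best - mean) / std
    return (best - mean) * norm.cdf(z) + std * norm.pdf(z)

rng = np.random.default_rng(0)
x_train = rng.uniform(-2, 2, size=3)        # a few initial expensive evaluations
y_train = f(x_train)
x_grid = np.linspace(-2, 2, 200)            # candidate designs

for _ in range(10):                          # small budget of additional evaluations
    mean, std = gp_posterior(x_train, y_train, x_grid)
    ei = expected_improvement(mean, std, y_train.min())
    x_next = x_grid[np.argmax(ei)]           # most promising design under the surrogate
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, f(x_next))  # one more expensive evaluation

print("best design:", x_train[np.argmin(y_train)], "value:", y_train.min())
```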
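Similarly, the following sketch illustrates the control-variate idea in a generic form: a cheap surrogate `g` (in the talk's setting, a machine-learned model) that is correlated with the expensive quantity of interest `f`, and whose mean is known, is used to reduce the variance of a Monte Carlo estimate. The specific functions, dimension, and sample sizes are assumptions chosen for illustration.

```python
# Minimal control-variate estimator sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
dim = 20                                   # "high-dimensional" input space
n = 200                                    # expensive samples we can afford

def f(x):
    """Expensive quantity of interest (stand-in)."""
    return np.sum(np.sin(x), axis=1) + 0.1 * np.sum(x ** 2, axis=1)

def g(x):
    """Cheap correlated surrogate (stand-in for a learned model)."""
    return np.sum(x, axis=1)

mu_g = 0.0                                 # E[g(X)] is known for standard normal inputs

x = rng.standard_normal((n, dim))
fx, gx = f(x), g(x)

# Optimal control-variate coefficient and the corrected estimator.
c = np.cov(fx, gx)[0, 1] / np.var(gx, ddof=1)
plain = fx.mean()
controlled = np.mean(fx - c * (gx - mu_g))

print("plain Monte Carlo estimate:", plain)
print("control-variate estimate:  ", controlled)
print("variance reduction factor: ", np.var(fx, ddof=1) / np.var(fx - c * gx, ddof=1))
```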