Center for Nonlinear Studies
Monday, May 15, 2017
3:00 PM - 4:00 PM
CNLS Conference Room (TA-3, Bldg 1690)


Towards AI and ML Methods for Automating Computational Science

Eric Mjolsness
University of California, Irvine

Can we develop, and benefit from, artificial intelligence (AI) systems for computational science? Computational science is by definition already automated, but for it to succeed, very knowledgeable people must continually develop a great deal of software. Could we gain substantial new power by partially automating that process, and thereby pursuing it at a higher level, using the great advances in machine learning (ML) and the modest but steady advances in formalization methods such as symbolic computer algebra, classical methods of artificial intelligence (e.g., search and unification), and programming language semantics? Could petascale and exascale computing (perhaps particle methods in particular) be the right arena for the model parameter and structure searches required by such high-stakes automation? I will suggest principles by which this could happen: (1) model reduction across scales, which can be pursued by machine learning; (2) reflexive symbolic-algebraic modeling languages with mathematically defined semantics; (3) actionable knowledge in the form of semantics-preserving or -approximating model transformations with conditions of validity; and (4) large parameter and structure searches that can be pursued by multiscale methods. I will briefly introduce a mix of projects that instantiate these principles. Previous projects include: a symbolic process-modeling language based on attribute-bearing objects (particles), assemblages thereof (labelled graphs), and rules representing processes that convert or modify such objects, applied, for example, to plant developmental biology; a model reduction method for this framework, applied to synapse dynamics; and specialized stochastic simulation and parameter estimation methods.
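To make the rule-based modeling idea concrete, here is a toy sketch in the spirit of such attribute-bearing-object languages: objects are dictionaries of attributes, and a rule is a function that converts or modifies them. All names (`divide_rule`, `step`, the `size` threshold) are illustrative assumptions, not the speaker's actual language.

```python
def divide_rule(obj):
    """Toy process rule: a 'cell' object past a size threshold splits in two."""
    if obj["type"] == "cell" and obj["size"] >= 2.0:
        half = obj["size"] / 2.0
        return [{"type": "cell", "size": half}, {"type": "cell", "size": half}]
    return [obj]  # rule does not fire; object passes through unchanged

def step(objects, rules):
    """Apply each rule once to every object (a toy synchronous update)."""
    for rule in rules:
        objects = [new for obj in objects for new in rule(obj)]
    return objects

population = [{"type": "cell", "size": 2.4}, {"type": "cell", "size": 1.0}]
population = step(population, [divide_rule])
# The large cell divides; the population now holds three cells.
```

A real system of this kind would match rules against assemblages of objects (labelled graphs) rather than single objects, and would fire rules stochastically rather than synchronously.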
More recent projects include: "hierarchitectures," defined both to model extended objects and to systematize deep learning model architectures, composed of exponentially growing graph lineages that support algebraic multigrid methods and that carry a natural algebra of structural combinations; the application of such hierarchitectures to parameter and structure searches; quantitative gene regulatory network (GRN) models, first developed in collaboration with LANL, machine-trained as neural networks with structural inference by L1 regularization to prioritize parameters for subsequent reoptimization; and similar structural inference for the prolongation/restriction maps in hierarchitectures.
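The structural-inference-by-L1 idea can be sketched in a minimal linear setting: fit an interaction matrix with an L1 penalty so that weak entries are driven exactly to zero, then read the surviving nonzeros as the inferred network structure to prioritize for reoptimization. This is a generic ISTA (proximal gradient) sketch under assumed names, not the speaker's GRN training code.

```python
import numpy as np

def soft_threshold(W, t):
    """Proximal operator of the L1 norm: shrink entries toward zero."""
    return np.sign(W) * np.maximum(np.abs(W) - t, 0.0)

def ista_fit(X, Y, lam=0.1, lr=0.01, iters=2000):
    """Minimize ||Y - W X||^2 / n + lam * ||W||_1 by proximal gradient (ISTA)."""
    n = X.shape[1]
    W = np.zeros((Y.shape[0], X.shape[0]))
    for _ in range(iters):
        grad = 2.0 * (W @ X - Y) @ X.T / n   # gradient of the smooth loss
        W = soft_threshold(W - lr * grad, lr * lam)
    return W

rng = np.random.default_rng(0)
W_true = np.array([[0.0, 1.5],               # sparse "ground truth" network:
                   [0.0, 0.0]])              # one regulatory edge, 2 -> 1
X = rng.normal(size=(2, 200))                # synthetic input states
Y = W_true @ X                               # noise-free responses
W_hat = ista_fit(X, Y)
structure = np.abs(W_hat) > 1e-3             # surviving entries = inferred edges
```

The entries shrunk exactly to zero drop out of the inferred edge set; in the workflow described above, the remaining parameters would then be re-optimized without the penalty to remove the L1 shrinkage bias.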

Host: Aric Hagberg