

Can we develop and benefit from artificial intelligence (AI) systems for computational science? Computational science is by definition already automated, but its success depends on highly knowledgeable people continually developing a great deal of software. Could we gain substantial new power by partially automating that process, and thereby pursuing it at a higher level, using the great advances in machine learning (ML) together with the modest but steady advances in formalization methods such as symbolic computer algebra, classical methods of artificial intelligence (e.g., search and unification), and programming language semantics? Could petascale and exascale computing (perhaps particle methods in particular) be the right arena in which to conduct model parameter and structure searches in pursuit of such high-stakes automation?

I will suggest principles by which this could happen, such as: (1) model reduction across scales, which can be pursued by machine learning; (2) reflexive symbolic-algebraic modeling languages with mathematically defined semantics; (3) actionable knowledge in the form of semantics-preserving or approximating model transformations with conditions of validity; and (4) large parameter and structure searches that can be pursued by multiscale methods. I will briefly introduce a mix of projects that instantiate these principles.

Previous projects include: a symbolic process-modeling language based on attribute-bearing objects (particles), assemblages thereof (labelled graphs), and rules representing processes that convert or modify such objects, applied, e.g., to plant developmental biology; a model reduction method for this framework, applied to synapse dynamics; and specialized stochastic simulation and parameter estimation methods.
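To make the rule-based picture concrete, here is a minimal sketch of how attribute-bearing objects and stochastically firing conversion rules can be simulated in Gillespie fashion. This is an illustration under standard mass-action assumptions, not the speaker's actual modeling framework; all names are hypothetical.

```python
import random

def gillespie(state, rules, rng, t_max):
    """Stochastic simulation of conversion rules over counted objects.

    state: dict mapping object type -> count.
    rules: list of (rate, reactants, products) triples.
    """
    t = 0.0
    while t < t_max:
        # Propensity of each rule = rate constant * product of reactant counts.
        props = []
        for rate, reactants, products in rules:
            a = rate
            for r in reactants:
                a *= state.get(r, 0)
            props.append(a)
        total = sum(props)
        if total == 0:
            break  # no rule can fire
        t += rng.expovariate(total)  # exponential waiting time to next event
        # Choose a rule with probability proportional to its propensity.
        pick = rng.random() * total
        for (rate, reactants, products), a in zip(rules, props):
            if a == 0:
                continue
            pick -= a
            if pick <= 0:
                for r in reactants:
                    state[r] -= 1
                for p in products:
                    state[p] = state.get(p, 0) + 1
                break
    return state

rng = random.Random(0)
# Toy process: an object of type A divides (A -> A + A) or converts to B.
final = gillespie({"A": 10},
                  [(1.0, ["A"], ["A", "A"]),
                   (0.5, ["A"], ["B"])],
                  rng, 2.0)
```

Richer frameworks attach continuous attributes to each object and let rules rewrite labelled graphs of objects, but the event loop above is the stochastic core.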
More recent projects include: defining "hierarchitectures" (exponentially growing graph lineages that support algebraic multigrid methods and admit a natural algebra of structural combinations), both to model extended objects and to systematize deep-learning model architectures; applying such hierarchitectures to parameter and structure searches; quantitative gene regulation network (GRN) models, first developed in collaboration with LANL, machine-trained as neural networks with structural inference by L1 regularization to prioritize parameters for use in subsequent re-optimization; and similar structural inference for prolongation/restriction maps in hierarchitectures.

Host: Aric Hagberg
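The structural-inference idea mentioned above can be sketched as follows: fit a regulatory weight matrix under an L1 penalty, then read the sparse support off as candidate network edges to keep for re-optimization. The sketch below uses plain proximal gradient descent (ISTA) on a linear model with synthetic data; it illustrates the L1 mechanism only and is not the speaker's actual training pipeline.

```python
import numpy as np

def ista_lasso(X, Y, lam, steps=500):
    """Minimize ||X W - Y||^2 / (2n) + lam * ||W||_1 by proximal gradient."""
    n, p = X.shape
    W = np.zeros((p, Y.shape[1]))
    step = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of gradient
    for _ in range(steps):
        grad = X.T @ (X @ W - Y) / n
        W = W - step * grad
        # Soft-thresholding: the proximal operator of the L1 penalty.
        W = np.sign(W) * np.maximum(np.abs(W) - step * lam, 0.0)
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # expression levels of 5 regulators
W_true = np.zeros((5, 2))
W_true[0, 0], W_true[3, 1] = 2.0, -1.5   # only two true regulatory links
Y = X @ W_true + 0.01 * rng.normal(size=(200, 2))

W_hat = ista_lasso(X, Y, lam=0.1)
edges = np.abs(W_hat) > 0.05             # sparse support = inferred structure
```

The exact zeros produced by soft-thresholding are what make L1 regularization a structure-inference tool rather than just a shrinkage method: surviving entries identify which parameters merit subsequent unpenalized re-optimization.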