Center for Nonlinear Studies
Tuesday, January 22, 2008
11:00 AM - 12:00 PM
CNLS Conference Room (TA-3, Bldg 1690)


Convex Optimization Methods for Graphical Models

Jason K. Johnson

A graphical model is a compact representation of a multivariate probability distribution, decomposed into potential functions on subsets of variables. The model is defined on a graph in which nodes represent random variables and edges denote potentials. Such models provide a flexible approach to many problems in science and engineering, but they also pose serious computational challenges. In this talk, I present convex optimization approaches to two central problems.

First, we consider the problem of learning a graphical model (both the graph and its potential functions) from sample data. We address this problem by solving the maximum entropy relaxation (MER), which seeks the least informative (maximum entropy) model over an exponential family, subject to constraints that small subsets of variables have marginal distributions close to the empirical distribution in relative entropy. We find that relaxing the marginal constraints is a form of information regularization that favors sparser graphical models. Two solution techniques are presented: one based on an interior-point method, and another that is a relaxed form of the well-known iterative proportional fitting (IPF) procedure.

Second, we consider the problem of determining the most probable configuration of all variables in a graphical model conditioned on a set of measured variables, also known as the maximum a posteriori (MAP) estimate. This general problem is intractable, so we consider a Lagrangian relaxation (LR) approach to obtain a tractable dual problem. We develop an iterative procedure to minimize the dual using deterministic annealing and an iterative marginal-matching procedure related to IPF. When strong duality holds, this leads to the optimal MAP estimate. Otherwise, we consider methods to enhance the dual formulation to reduce the duality gap, along with a heuristic to obtain approximate solutions when a gap remains.

Joint work with Alan Willsky, Venkat Chandrasekaran (MER), and Dmitry Malioutov (LR).
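For readers unfamiliar with the iterative proportional fitting procedure mentioned in the abstract, the following minimal sketch illustrates the classical IPF idea on a hypothetical 2x2 joint table (the table and target marginals are illustrative, not from the talk): alternately rescale rows and columns until the table matches the prescribed marginal distributions.

```python
import numpy as np

# Hypothetical 2x2 joint table (uniform initial guess) and target marginals.
P = np.array([[0.25, 0.25],
              [0.25, 0.25]])
row_targets = np.array([0.6, 0.4])  # desired marginal P(X = x)
col_targets = np.array([0.7, 0.3])  # desired marginal P(Y = y)

# IPF: alternately rescale rows and columns to match the target marginals.
for _ in range(50):
    P *= (row_targets / P.sum(axis=1))[:, None]  # match row marginals
    P *= col_targets / P.sum(axis=0)             # match column marginals

assert np.allclose(P.sum(axis=1), row_targets)
assert np.allclose(P.sum(axis=0), col_targets)
```

Each rescaling step enforces one set of marginal constraints while possibly disturbing the other; iterating converges to a table consistent with both (here, in a single sweep, since the uniform start factorizes).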

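To make the MAP estimation problem concrete, here is a hypothetical toy example (not from the talk): a three-node binary chain MRF whose MAP configuration is found by exhaustive enumeration. Enumeration is feasible only for tiny graphs; the Lagrangian relaxation discussed in the talk targets the general case, where this search is intractable.

```python
import itertools
import numpy as np

# Hypothetical 3-node chain MRF over binary variables x1 - x2 - x3, with
# pairwise potentials: p(x) is proportional to psi12[x1, x2] * psi23[x2, x3].
psi12 = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
psi23 = np.array([[1.0, 4.0],
                  [2.0, 1.0]])

# MAP estimate by brute force: score every configuration, keep the best.
best_x, best_score = None, -np.inf
for x in itertools.product([0, 1], repeat=3):
    score = psi12[x[0], x[1]] * psi23[x[1], x[2]]
    if score > best_score:
        best_x, best_score = x, score

print(best_x, best_score)  # -> (0, 0, 1) 8.0
```

The normalizing constant cancels in the argmax, so unnormalized potentials suffice for MAP estimation.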
Host: Misha Chertkov