A graphical model is a compact representation of a multivariate probability distribution, decomposed into potential functions on subsets of variables. The model is defined on a graph whose nodes represent random variables and whose edges denote potentials. Such models provide a flexible approach to many problems in science and engineering, but also pose serious computational challenges. In this talk, I present convex optimization approaches to two central problems.

First, we consider the problem of learning a graphical model (both the graph and its potential functions) from sample data. We address this problem by solving the maximum entropy relaxation (MER), which seeks the least informative (maximum entropy) model in an exponential family, subject to constraints that small subsets of variables have marginal distributions close (in relative entropy) to the empirical distribution. We find that relaxing the marginal constraints is a form of information regularization that favors sparser graphical models. Two solution techniques are presented: one based on an interior-point method, and another that is a relaxed form of the well-known iterative proportional fitting (IPF) procedure.

Second, we consider the problem of determining the most probable configuration of all variables in a graphical model conditioned on a set of measured variables, also known as the maximum a posteriori (MAP) estimate. This problem is intractable in general, so we consider a Lagrangian relaxation (LR) approach to obtain a tractable dual problem. We develop an iterative procedure that minimizes the dual using deterministic annealing and an iterative marginal-matching procedure related to IPF. When strong duality holds, this yields the optimal MAP estimate. Otherwise, we consider methods to enhance the dual formulation so as to reduce the duality gap, as well as a heuristic for obtaining approximate solutions when a duality gap remains.

Joint work with Alan Willsky and Venkat Chandrasekaran (MER), and with Dmitry Malioutov (LR).
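For readers unfamiliar with the classical IPF procedure the talk builds on, here is a minimal sketch (not from the talk itself): given a positive joint table and target marginals, IPF alternately rescales rows and columns until the table's marginals match the targets. The function name and test values are illustrative choices, not part of the talk.

```python
import numpy as np

def ipf(joint, row_marg, col_marg, iters=100):
    """Alternately rescale rows and columns of `joint` (all entries
    positive) until its marginals match `row_marg` and `col_marg`."""
    p = joint.astype(float).copy()
    for _ in range(iters):
        p *= (row_marg / p.sum(axis=1))[:, None]   # match row marginals
        p *= (col_marg / p.sum(axis=0))[None, :]   # match column marginals
    return p

# start from an uncorrelated table and impose new marginals
p = ipf(np.full((2, 2), 0.25), np.array([0.3, 0.7]), np.array([0.4, 0.6]))
```

Each rescaling is the I-projection onto one marginal constraint set; the MER solver described in the talk relaxes these hard marginal-matching steps.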
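To give a concrete feel for Lagrangian relaxation of MAP estimation, the following is a generic dual-decomposition sketch on a toy binary cycle, with subgradient updates rather than the deterministic-annealing and marginal-matching scheme the talk actually develops. All names and the random instance are illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, edges = 3, [(0, 1), (1, 2), (0, 2)]           # toy binary cycle (illustrative)
theta_u = rng.normal(size=(n, 2))                # unary log-potentials
theta_p = {e: rng.normal(size=(2, 2)) for e in edges}

def score(x):
    return (sum(theta_u[i, x[i]] for i in range(n))
            + sum(theta_p[e][x[e[0]], x[e[1]]] for e in edges))

exact_val = max(score(x) for x in itertools.product([0, 1], repeat=n))

# Lagrangian relaxation: duplicate each node per incident edge and
# price disagreement between the copies with multipliers `lam`.
deg = {i: sum(i in e for e in edges) for i in range(n)}
lam = {(e, i): np.zeros(2) for e in edges for i in e}

for t in range(300):
    assign, dual = {}, 0.0
    for e in edges:                              # independent edge subproblems
        i, j = e
        s = (theta_p[e]
             + (theta_u[i] / deg[i] + lam[(e, i)])[:, None]
             + (theta_u[j] / deg[j] + lam[(e, j)])[None, :])
        xi, xj = np.unravel_index(np.argmax(s), (2, 2))
        assign[(e, i)], assign[(e, j)] = xi, xj
        dual += s[xi, xj]                        # dual value for current multipliers
    step = 1.0 / (1 + t)
    for i in range(n):                           # subgradient step toward consensus
        copies = [assign[(e, i)] for e in edges if i in e]
        mean = np.bincount(copies, minlength=2) / deg[i]
        for e in edges:
            if i in e:
                one = np.zeros(2)
                one[assign[(e, i)]] = 1.0
                lam[(e, i)] -= step * (one - mean)
```

By weak duality, `dual` upper-bounds `exact_val` at every iteration; when the edge copies of each node agree, strong duality holds and the agreed assignment is the exact MAP configuration.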
Host: Misha Chertkov 