Markov Random Fields (MRFs), a.k.a. graphical models, serve as popular models for networks in the social and biological sciences, as well as in communications and signal processing. A central problem is that of structure learning, or model selection: given samples from the MRF, determine the graph structure of the underlying distribution. When the MRF is not Gaussian (e.g., the Ising model) and contains cycles, structure learning is known to be NP-hard even with infinitely many samples. Existing approaches typically focus on the sub-class of graphs with bounded degree; the complexity of many of these methods grows quickly in the degree bound.

We develop a simple new algorithm for learning network structure. It learns the Markov neighborhood of a node by sequentially, greedily adding the node that produces the largest reduction in conditional entropy (a sketch of this greedy step appears below). We show exact structure recovery when either (a) the graph is a tree, or (b) the MRF is an Ising model (under conditions on the degree, girth, and correlation decay). In a sense, our algorithm does for learning what Belief Propagation (BP) does for estimation: it provides a very low-complexity local algorithm that is exact on trees and has surprisingly good performance on many graphs with loops.

Joint work with: Siddhartha Banerjee, Sujay Sanghavi, and Sanjay Shakkottai

Host: Misha Chertkov
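For intuition, the following is a minimal Python sketch of the greedy step described in the abstract. It is an illustration under stated assumptions, not the speakers' implementation: samples are rows of a NumPy integer array, entropies are plain empirical plug-in estimates, and the stopping threshold eps, the size budget max_size, and all function names are hypothetical.

    import numpy as np
    from collections import Counter

    def empirical_entropy(labels):
        """Empirical Shannon entropy (in bits) of a sequence of symbols."""
        counts = np.array(list(Counter(labels).values()), dtype=float)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def conditional_entropy(samples, i, cond):
        """Empirical H(X_i | X_cond) from rows of `samples`,
        via H(X_i | X_S) = H(X_i, X_S) - H(X_S)."""
        joint = [tuple(row[[i] + cond]) for row in samples]
        given = [tuple(row[cond]) for row in samples]
        return empirical_entropy(joint) - empirical_entropy(given)

    def greedy_neighborhood(samples, i, max_size, eps=1e-3):
        """Greedily grow a candidate Markov neighborhood of node i:
        at each step add the node that most reduces the empirical
        conditional entropy H(X_i | X_S); stop when no candidate
        reduces it by more than eps or the size budget is reached."""
        n_nodes = samples.shape[1]
        S = []
        current = conditional_entropy(samples, i, S)
        while len(S) < max_size:
            best_node, best_h = None, current
            for j in range(n_nodes):
                if j == i or j in S:
                    continue
                h = conditional_entropy(samples, i, S + [j])
                if h < best_h:
                    best_node, best_h = j, h
            if best_node is None or current - best_h < eps:
                break
            S.append(best_node)
            current = best_h
        return S

With samples an (N x n) array of, say, +/-1 Ising spins, greedy_neighborhood(samples, i, max_size=d) returns a candidate set for node i's neighbors; in the tree and Ising regimes stated in the abstract this greedy selection recovers the true neighborhood, though a practical pipeline would typically add a pruning pass.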