Modern machine learning applications are characterized by high degrees of complexity and uncertainty. Complexity is well handled by first-order logic, and uncertainty by probabilistic graphical models. Statistical relational learning seeks to combine the two. Markov logic networks (MLNs) do this by attaching weights to logical formulas and treating the weighted formulas as templates for features of Markov random fields. This talk will cover MLN representation, inference, learning, and applications. MLN inference techniques are based on satisfiability testing, resolution, Markov chain Monte Carlo, and belief propagation. Learning techniques include pseudo-likelihood, voted perceptrons, second-order convex optimization, and inductive logic programming. MLNs have been applied in a wide variety of areas, including natural language processing, information extraction and integration, robot mapping, social networks, and computational biology. Open-source implementations of MLN algorithms are available in the Alchemy package (alchemy.cs.washington.edu).
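As a concrete illustration of the representation described above, the Python sketch below grounds a single weighted formula over a two-constant domain and evaluates the MLN distribution P(x) = (1/Z) exp(sum_i w_i n_i(x)), where n_i(x) is the number of true groundings of formula F_i in world x. This is a toy example written for this page, not part of the talk or of the Alchemy API; the formula, weight, and constants are hypothetical, and enumerating all worlds is feasible only for such tiny domains.

# Toy MLN sketch: one weighted formula, Smokes(p) => Cancer(p), weight 1.5,
# over the constants {Anna, Bob}. Hypothetical example, not Alchemy code.
from itertools import product
from math import exp

constants = ["Anna", "Bob"]
WEIGHT = 1.5  # weight attached to the formula Smokes(p) => Cancer(p)

def n_true_groundings(world):
    # Count groundings of Smokes(p) => Cancer(p) that are true in `world`,
    # where `world` maps ground atoms like ('Smokes', 'Anna') to truth values.
    return sum(
        1 for p in constants
        if (not world[("Smokes", p)]) or world[("Cancer", p)]
    )

# Enumerate all possible worlds over the ground atoms.
atoms = [(pred, p) for pred in ("Smokes", "Cancer") for p in constants]
worlds = [dict(zip(atoms, vals)) for vals in product([False, True], repeat=len(atoms))]

# Unnormalized weight exp(w * n(x)) of each world, and the partition function Z.
unnorm = [exp(WEIGHT * n_true_groundings(w)) for w in worlds]
Z = sum(unnorm)

# Probability of the world in which both people smoke and both have cancer.
target = {atom: True for atom in atoms}
print(f"P(everyone smokes and has cancer) = {unnorm[worlds.index(target)] / Z:.4f}")

Worlds that violate more groundings of the formula are exponentially less probable rather than impossible, which is how MLNs soften hard first-order constraints; practical inference avoids this brute-force enumeration via the sampling and propagation methods listed above.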
(Joint work with Jesse Davis, Stanley Kok, Daniel Lowd, Aniruddh Nath, Hoifung Poon, Matt Richardson, Parag Singla, Marc Sumner, and Jue Wang.)