Detecting communities and labeling nodes is a ubiquitous problem in the study of networks. Recently, we developed scalable Belief Propagation algorithms that update probability distributions of node labels until they reach a fixed point. In addition to being of practical use, these algorithms can be studied analytically, revealing phase transitions in the ability of any algorithm to solve this problem. Specifically, there is a detectability transition in sparse networks, below which no algorithm can label nodes better than chance. I'll explain this transition, and give an accessible introduction to Belief Propagation and its analogy with free energy and the cavity method of statistical physics. We'll see that the consensus of many good solutions is a better labeling than the "best" single solution, something that is true for many real-world optimization problems. While many algorithms overfit, finding "communities" even in random graphs where none exist, our method lets us focus on statistically significant communities. If time permits, I'll also explain some connections between Belief Propagation and a new spectral algorithm, based on the non-backtracking matrix, which succeeds all the way down to the detectability transition. This is joint work with Aurelien Decelle, Florent Krzakala, Elchanan Mossel, Joe Neeman, Mark Newman, Allan Sly, Lenka Zdeborova, and Pan Zhang.
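
For concreteness, here is a minimal sketch, in Python with numpy and networkx, of Belief Propagation for a two-group stochastic block model: messages on directed edges are iterated until they reach a fixed point, and the resulting marginals give each node's label distribution. This is an illustrative sketch, not the implementation discussed in the talk; the function name bp_sbm, the affinity parameters c_in and c_out, and the omission of the external-field correction for non-edges are simplifying assumptions.

```python
import numpy as np
import networkx as nx

def bp_sbm(G, c_in, c_out, n_iter=100, tol=1e-6, seed=0):
    """Sketch of Belief Propagation for a 2-group stochastic block model.

    Messages live on directed edges; each is a probability distribution over
    the two group labels, updated until they (approximately) reach a fixed
    point.  The external-field term accounting for non-edges is omitted here
    for brevity.
    """
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    idx = {u: i for i, u in enumerate(nodes)}
    directed = [(idx[u], idx[v]) for u, v in G.edges()]
    directed += [(v, u) for (u, v) in directed]
    msg = {e: rng.dirichlet([1.0, 1.0]) for e in directed}
    C = np.array([[c_in, c_out], [c_out, c_in]])   # affinity matrix c_ab = N * p_ab

    incoming = {i: [] for i in range(len(nodes))}  # neighbors w with an edge w -> i
    for (w, i) in directed:
        incoming[i].append(w)

    for _ in range(n_iter):
        diff = 0.0
        for (u, v) in directed:
            # message u -> v: product of messages into u, excluding the one from v
            prod = np.ones(2)
            for w in incoming[u]:
                if w != v:
                    prod *= C @ msg[(w, u)]
            new = prod / prod.sum() if prod.sum() > 0 else np.array([0.5, 0.5])
            diff += np.abs(new - msg[(u, v)]).sum()
            msg[(u, v)] = new
        if diff < tol:   # fixed point reached
            break

    # node marginals: product of all incoming messages
    marginals = np.zeros((len(nodes), 2))
    for i in range(len(nodes)):
        prod = np.ones(2)
        for w in incoming[i]:
            prod *= C @ msg[(w, i)]
        marginals[i] = prod / prod.sum() if prod.sum() > 0 else [0.5, 0.5]
    return marginals

# Example: a planted partition above the two-group detectability threshold
# |c_in - c_out| > 2 * sqrt((c_in + c_out) / 2)  (here 6 > 4.47)
n, c_in, c_out = 200, 8.0, 2.0
G = nx.planted_partition_graph(2, n // 2, c_in / n, c_out / n, seed=1)
labels = bp_sbm(G, c_in, c_out).argmax(axis=1)
```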
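
Similarly, a rough sketch of the spectral approach mentioned at the end: build the non-backtracking matrix on directed edges, take the eigenvector of its second-largest real eigenvalue, and sum its entries over the edges entering each node. The dense eigendecomposition and the function name nonbacktracking_labels are purely illustrative; in practice one would work with the smaller 2n x 2n (Ihara-Bass) matrix and a sparse eigensolver.

```python
import numpy as np
import networkx as nx

def nonbacktracking_labels(G):
    """Sketch of spectral clustering with the non-backtracking matrix.

    B is indexed by directed edges, with B[(u->v), (v->x)] = 1 whenever x != u.
    For two communities, the eigenvector of the second-largest real eigenvalue,
    summed over the edges pointing into each node, gives the labeling.
    """
    nodes = list(G.nodes())
    idx = {u: i for i, u in enumerate(nodes)}
    directed = [(idx[u], idx[v]) for u, v in G.edges()]
    directed += [(v, u) for (u, v) in directed]
    eidx = {e: k for k, e in enumerate(directed)}

    out_neighbors = {}
    for (u, v) in directed:
        out_neighbors.setdefault(u, []).append(v)

    m = len(directed)
    B = np.zeros((m, m))
    for (u, v) in directed:
        for x in out_neighbors.get(v, []):
            if x != u:                      # non-backtracking condition
                B[eidx[(u, v)], eidx[(v, x)]] = 1.0

    # keep the (numerically) real eigenvalues and take the second largest
    vals, vecs = np.linalg.eig(B)
    real = np.abs(vals.imag) < 1e-8
    order = np.argsort(-vals.real[real])
    v2 = vecs[:, real][:, order[1]].real

    # sum eigenvector entries over edges entering each node; the sign is the label
    score = np.zeros(len(nodes))
    for k, (u, v) in enumerate(directed):
        score[v] += v2[k]
    return (score > 0).astype(int)

# Example usage on a small planted partition (2 groups of 50 nodes)
G = nx.planted_partition_graph(2, 50, 0.16, 0.04, seed=1)
labels = nonbacktracking_labels(G)
```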