CHANGE IN TIME: Neural networks have again risen to prominence, in large part due to hardware and engineering advances that have made previously challenging learning tasks solvable. By contrast, much of the work on learning with variational approximations from statistical physics (e.g., the Bethe approximation) remains too slow to be competitive on the kinds of large-scale problems where neural networks have made significant advances. In this talk, I will discuss a new approach to learning with variational methods based on the Frank-Wolfe algorithm. We have demonstrated that this approach is significantly faster for learning in conditional random fields; the end goal is to apply the same ideas to large latent-variable models and to show that these techniques can compete with neural networks at scale.

Host: Misha Chertkov
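For context, here is a minimal sketch of the generic Frank-Wolfe iteration the abstract refers to, not the speaker's CRF-specific variant: at each step it calls a linear minimization oracle over the feasible set and moves toward the returned vertex. The probability-simplex domain and quadratic objective below are illustrative assumptions chosen because the simplex oracle is trivial.

    import numpy as np

    def frank_wolfe_simplex(grad_f, x0, num_iters=200):
        """Frank-Wolfe over the probability simplex.

        The linear minimization oracle over the simplex is just the
        vertex (coordinate) with the smallest gradient entry; the
        iterate moves toward that vertex with step size 2/(k+2).
        """
        x = x0.copy()
        for k in range(num_iters):
            g = grad_f(x)
            # LMO: best simplex vertex for the linearized objective
            s = np.zeros_like(x)
            s[np.argmin(g)] = 1.0
            gamma = 2.0 / (k + 2)  # standard open-loop step size
            x = (1 - gamma) * x + gamma * s
        return x

    # Illustrative use: minimize ||x - b||^2 over the simplex,
    # i.e., compute the Euclidean projection of b onto the simplex.
    b = np.array([0.2, 1.5, -0.3])
    x_star = frank_wolfe_simplex(lambda x: 2.0 * (x - b), np.ones(3) / 3)
    print(x_star)

The appeal for variational learning is that the oracle only requires solving a linear problem over the feasible set (for inference, the marginal polytope or a relaxation of it), avoiding expensive projection steps at each iteration.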