Center for Nonlinear Studies
Thursday, March 10, 2016
1:00 PM - 2:00 PM
CNLS Conference Room (TA-3, Bldg 1690)

Seminar

Scaling Up Approximate Variational Learning

Nicholas Ruozzi
University of Texas at Dallas

CHANGE IN TIME.

Neural networks have again risen to prominence, in large part due to hardware and engineering advances that have made previously challenging learning tasks solvable. In contrast, much of the work on using variational approximations from statistical physics for learning (e.g., the Bethe approximation) is too slow to be competitive on the kinds of large-scale problems where neural networks have made significant advances. In this talk, I will discuss a new approach to learning with variational methods based on the Frank-Wolfe algorithm. We have demonstrated that this approach is significantly faster for learning in conditional random fields, but the end goal is to use the same ideas to tackle large latent-variable models and to demonstrate that these techniques can compete with neural networks at scale.
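For context on the algorithm named in the abstract, below is a minimal, generic sketch of the Frank-Wolfe (conditional gradient) method, applied here to a simple quadratic objective over the probability simplex. It is not the speaker's learning method; the objective, domain, and step-size rule are illustrative assumptions. The appeal of the method is that each iteration only requires solving a linear problem over the feasible set, and the iterates remain feasible by construction.

    # Generic Frank-Wolfe sketch (illustrative only, not the speaker's algorithm):
    # minimize f(x) = 0.5 * ||A x - b||^2 over the probability simplex.
    import numpy as np

    def frank_wolfe_simplex(A, b, num_iters=200):
        n = A.shape[1]
        x = np.full(n, 1.0 / n)               # start at the simplex barycenter
        for k in range(num_iters):
            grad = A.T @ (A @ x - b)          # gradient of the quadratic objective
            # Linear minimization oracle: over the simplex, <grad, s> is minimized
            # at the vertex (standard basis vector) with the smallest gradient entry.
            s = np.zeros(n)
            s[np.argmin(grad)] = 1.0
            gamma = 2.0 / (k + 2.0)           # standard diminishing step size
            x = (1.0 - gamma) * x + gamma * s # convex combination keeps x feasible
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.standard_normal((20, 5))
        b = rng.standard_normal(20)
        x = frank_wolfe_simplex(A, b)
        print(x, x.sum())                     # a point on the probability simplex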

Host: Misha Chertkov