Center for Nonlinear Studies
Thursday, May 10, 2012
1:00 PM - 2:00 PM
CNLS Conference Room (TA-3, Bldg 1690)


Sparse Coding for Prediction

Jonathan Yedidia
Disney Research, Inc.

A functioning module in a biological or artificial brain must process an ongoing stream of spatio-temporal inputs and learn to predict those inputs. I formalize the problem of learning to make accurate predictions of future space-time inputs obtained from an unknown world, and then describe a system that can quickly solve this problem, at least for relatively simple synthetic worlds.

The system is based on modules implementing sparse coding, where each module is endowed with a short-term memory giving it access to a discretized space-time image: a spatial image for the present together with a small number of previous “frames.” A module attempts to recreate an input space-time image as a weighted sum of a small number of non-negative basis vectors. It first infers weights describing the contributions of existing basis vectors to a space-time image, using a matching pursuit algorithm. It then learns and adapts its set of basis vectors: the active basis vectors for a space-time image are projected part way towards the values they would need to reproduce the image exactly, new basis vectors are recruited to reproduce missing parts of the image, and inactive basis vectors are deleted.

In this framework, prediction is easy. Once an accurate set of basis vectors has been learned, one can predict the future by shifting a space-time image one frame into the future, inferring the basis-vector weights as usual with matching pursuit, and filling in the unknown next frame. Iterating this process predicts many frames into the future; prediction thus becomes erasure-correction of the missing future. I will show demos suggesting that such a system can learn and adapt to its environment very quickly, predict its inputs, and might thus serve as a building-block module for an artificial brain.
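To make the inference step concrete, here is a minimal Python sketch of greedy non-negative matching pursuit: a flattened space-time image is approximated as a weighted sum of a few dictionary columns (basis vectors). This is an illustration under simple assumptions (unit-norm columns, a fixed atom budget), not the implementation used in the talk.

```python
import numpy as np

def matching_pursuit(x, D, max_atoms=5, tol=1e-9):
    """Greedily approximate x as a non-negative weighted sum of a few
    columns of the dictionary D (the module's basis vectors).
    Returns the inferred weights and the final residual."""
    residual = x.astype(float).copy()
    weights = np.zeros(D.shape[1])
    for _ in range(max_atoms):
        # Correlate the residual with every basis vector.
        scores = D.T @ residual
        k = int(np.argmax(scores))           # best-matching basis vector
        if scores[k] <= tol:                 # non-negativity: stop when no
            break                            # positive correlation remains
        w = scores[k] / (D[:, k] @ D[:, k])  # least-squares step for atom k
        weights[k] += w
        residual -= w * D[:, k]              # subtract the explained part
    return weights, residual
```

With unit-norm columns the least-squares step reduces to the raw correlation score; the loop simply peels off the best-explaining basis vector until the atom budget or the non-negativity stopping rule is reached.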
I will also describe how these modules can be composed into networks, and speculate on the relation of the various ingredients in the system to their counterparts in biological brains.
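The "prediction as erasure-correction" idea in the abstract can also be sketched in a few lines: fit the basis vectors using only the known (past) entries of a space-time vector, then read the prediction off the reconstruction at the erased future frame. The function and dimensions below are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def predict_next_frame(frames, D, frame_len, max_atoms=5):
    """Treat prediction as erasure-correction: build a space-time vector
    whose final frame is unknown, run matching pursuit restricted to the
    known entries, and fill in the missing frame from the reconstruction."""
    known = np.concatenate(frames)           # observed past frames, flattened
    n = len(known) + frame_len               # full space-time length
    mask = np.zeros(n, dtype=bool)
    mask[:len(known)] = True                 # the last frame is the "erasure"
    Dk = D[mask]                             # dictionary rows at known entries
    residual = known.astype(float).copy()
    recon = np.zeros(n)                      # full-length reconstruction
    for _ in range(max_atoms):
        scores = Dk.T @ residual
        k = int(np.argmax(scores))
        if scores[k] <= 1e-9:
            break
        w = scores[k] / (Dk[:, k] @ Dk[:, k])
        residual -= w * Dk[:, k]             # explain the known entries
        recon += w * D[:, k]                 # extend into the unknown frame
    return recon[len(known):]                # the predicted missing frame
```

Appending the predicted frame to the input and repeating the call iterates the prediction further into the future, as the abstract describes.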

Host: Misha Chertkov, 665-8119