Understanding how the brain processes and encodes sensory information is an outstanding scientific challenge that has received a great deal of attention worldwide. In visual processing, for example, the brain seems to effortlessly perform tasks that remain extremely difficult for current computer algorithms and architectures. In this work, we are interested in how neurons in the primary visual cortex (called simple cells) manage to efficiently code data and de-correlate image pixels as a first step toward visual cognition and object recognition. Our approach is to use machine learning techniques, including sparse coding and the expectation-maximization (EM) algorithm, to learn a set of basis functions (called a dictionary) for accurately representing natural images. We assume that each basis function takes the form of a 2-D Gabor wavelet, a function empirically found to provide a good fit to the receptive fields of simple cells. The dictionary statistics are then fully specified by a joint probability distribution over the learned wavelet parameters. We propose minimal models for this joint distribution and test the performance of the resulting "sampled dictionaries". This work generalizes the uniform parameter sampling approaches used in many wavelet-based applications.

Host: Garrett Kenyon, gkenyon@lanl.gov, 7-1900, IS & T
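As a rough illustration of the kind of object discussed in the abstract, the following minimal Python sketch builds a 2-D Gabor wavelet and a "sampled dictionary" whose parameters are drawn from simple assumed distributions. The function names, parameter ranges, and distributions are illustrative only and are not taken from the work being presented.

    # Minimal sketch: 2-D Gabor atoms with parameters drawn at random.
    # All ranges and distributions below are assumptions for illustration.
    import numpy as np

    def gabor(size, sigma, theta, wavelength, phase, gamma=0.5):
        """Gaussian envelope times an oriented sinusoidal carrier."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
        carrier = np.cos(2 * np.pi * xr / wavelength + phase)
        return envelope * carrier

    def sampled_dictionary(n_atoms=64, size=16, seed=0):
        """Draw (sigma, theta, wavelength, phase) from simple distributions
        and return unit-normalized, flattened Gabor atoms as rows."""
        rng = np.random.default_rng(seed)
        atoms = []
        for _ in range(n_atoms):
            g = gabor(size,
                      sigma=rng.uniform(2.0, 4.0),        # envelope width
                      theta=rng.uniform(0.0, np.pi),      # orientation
                      wavelength=rng.uniform(4.0, 10.0),  # carrier period
                      phase=rng.uniform(0.0, 2 * np.pi))  # carrier phase
            v = g.ravel()
            atoms.append(v / np.linalg.norm(v))
        return np.stack(atoms)

    D = sampled_dictionary()  # each row of D is one dictionary atom

In the work described above, the joint distribution over wavelet parameters is learned from natural images via sparse coding and EM rather than fixed by hand; the uniform draws here simply stand in for whatever minimal model of that distribution is used.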