We use a combination of our Recursive Independent Components Analysis (RICA) algorithm and sparse Principal Components Analysis (sPCA) to provide the first model that learns, in an unsupervised fashion, the first four visual processing stages in the brain: center-surround cells in the retina and Lateral Geniculate Nucleus (LGN), simple cells in V1, complex cells in V1, and finally, receptive fields that accord with data on cells in V2.

Most applications of the efficient coding theory, which states roughly that cells in the visual system act to reduce the redundancy in their inputs by learning features that are independent of one another, include a step in which PCA is applied. Although PCA can itself be viewed as a neural network, this step (and the receptive fields it learns) is usually not reported in detail. Recent work by Vincent et al. has shown that sparse PCA applied to natural images learns the center-surround receptive fields of retina and LGN cells, and that ICA applied on top of it still learns the edge detectors that such algorithms have produced since the pioneering work of Bell & Sejnowski and Olshausen & Field.

Our contribution is to use sparse PCA in our hierarchical ICA model and to show that sparse PCA applied to the edge detectors yields the local pooling properties seen in complex cells in V1. Finally, ICA applied to the result gives cells that resemble V2 cells in their receptive field properties.

Host: Garrett Kenyon, garkenyon@gmail.com, 412-0416
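Below is a minimal sketch of the four-stage sPCA/ICA pipeline described in the abstract. It uses scikit-learn's SparsePCA and FastICA as generic stand-ins for the authors' sPCA and RICA algorithms; the patch size, component counts, rectification step, and random placeholder data are illustrative assumptions, not the authors' settings.

```python
# Hierarchical sketch: sPCA -> ICA -> sPCA -> ICA, mirroring the four stages
# (center-surround, simple cells, complex-cell pooling, V2-like units).
# SparsePCA/FastICA are stand-ins for the authors' sPCA and RICA; all
# hyperparameters and the random "image" are illustrative assumptions.
import numpy as np
from sklearn.decomposition import SparsePCA, FastICA
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)

# Placeholder input; in practice this would be a set of natural images.
image = rng.standard_normal((256, 256))
patches = extract_patches_2d(image, (16, 16), max_patches=2000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=0)  # center each pixel across patches

# Stage 1: sparse PCA on image patches -> center-surround-like filters (retina/LGN).
spca1 = SparsePCA(n_components=64, alpha=1.0, random_state=0)
Y1 = spca1.fit_transform(X)

# Stage 2: ICA on the sparse-PCA outputs -> oriented edge filters (V1 simple cells).
ica1 = FastICA(n_components=64, max_iter=1000, random_state=0)
Y2 = ica1.fit_transform(Y1)

# Stage 3: sparse PCA on rectified simple-cell responses -> local pooling
# (V1 complex cells). Rectification via abs() is an assumption of this sketch.
spca2 = SparsePCA(n_components=32, alpha=1.0, random_state=0)
Y3 = spca2.fit_transform(np.abs(Y2))

# Stage 4: ICA on the pooled responses -> V2-like receptive fields.
ica2 = FastICA(n_components=32, max_iter=1000, random_state=0)
Y4 = ica2.fit_transform(Y3)

print(Y4.shape)  # (n_patches, 32): top-level responses per patch
```

The receptive fields at each stage can be inspected by reshaping the corresponding `components_` matrices back to patch dimensions; the filter shapes reported in the talk (center-surround, edges, pooling, V2-like) would emerge only with real natural-image input and suitable parameter choices.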