Center for Nonlinear Studies
Tuesday, August 15, 2017
2:00 PM - 3:00 PM
CNLS Conference Room (TA-3, Bldg 1690)

Seminar

Deep Sparse Autoencoders for Invariant Multimodal Halle Berry Neurons

Edward Kim
Villanova University

Over the past several decades, neuroscientists have studied the brain's response to sensory input and theorized that single neurons can respond to individual concepts. In 2005, a study of epileptic patients demonstrated that some subjects had neurons that fired in response to a specific concept while ignoring other stimuli. For example, one patient had a neuron that fired when shown a picture of Jennifer Aniston, but not when shown pictures of other people, places, or things. Another patient had a neuron that fired when shown a picture of Halle Berry, as well as the text string "Halle Berry", demonstrating that a neuron can remain selective for a concept while being invariant to the specific modality. In our work, we sought to improve upon the standard feed-forward deep learning autoencoder by augmenting it with the biologically inspired mechanisms of sparsity, top-down feedback, and lateral inhibition. While building and observing the behavior of our model, we were fascinated to find that multimodal, invariant neurons naturally emerged. Our experiments and results demonstrate the emergence of Halle Berry neurons, and we additionally show that our sparse representation of multimodal signals is qualitatively and quantitatively superior to a standard feed-forward joint embedding on common vision and machine learning tasks.
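The abstract refers to augmenting an autoencoder with sparsity (alongside top-down feedback and lateral inhibition, which are not shown here). As a rough point of reference only, and not the speaker's model, the sketch below shows the simplest form of a sparse autoencoder: a one-hidden-layer network trained to reconstruct its input while an L1 penalty pushes most hidden activations toward zero. The layer sizes, penalty weight, learning rate, and synthetic data are all assumptions chosen for illustration.

# Illustrative sketch of a sparse autoencoder (not the speaker's model):
# reconstruction loss plus an L1 penalty on the hidden code, trained by
# plain gradient descent on synthetic data. All hyperparameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_samples = 64, 16, 200
X = rng.normal(size=(n_samples, n_in))             # stand-in input data

W_enc = rng.normal(scale=0.1, size=(n_in, n_hid))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(n_hid, n_in))  # decoder weights
lam, lr = 0.1, 0.01                                # sparsity weight, learning rate

def relu(z):
    return np.maximum(z, 0.0)

for step in range(500):
    H = relu(X @ W_enc)                 # hidden code; ReLU plus L1 encourages zeros
    X_hat = H @ W_dec                   # linear reconstruction
    err = X_hat - X
    loss = np.mean(err**2) + lam * np.mean(np.abs(H))

    # Gradients of the loss with respect to both weight matrices
    dX_hat = 2.0 * err / err.size
    dW_dec = H.T @ dX_hat
    dH = dX_hat @ W_dec.T + lam * np.sign(H) / H.size
    dH[H <= 0] = 0.0                    # ReLU gradient
    dW_enc = X.T @ dH

    W_dec -= lr * dW_dec
    W_enc -= lr * dW_enc

print("final loss:", loss)
print("fraction of active hidden units:", float(np.mean(H > 0)))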

Host: Garrett Kenyon, 505-667-1900, gkenyon@lanl.gov