It is very likely that a computer for producing brainlike behavior must have brainlike architecture, but what in the architecture accounts for the brain's cognitive powers? One possible answer, suggested by the very size of the brain's circuits, is high-dimensional representation: computing with, say, 10,000-bit words rather than with 16-to-64-bit words. What would computing with such wide words be like? Neural-net associative memories (e.g., Willshaw, Hopfield, and my Sparse Distributed Memory) provide early examples. They are content-addressable and can work with incomplete and noisy data. High-dimensional vectors' tolerance for noise is well known in signal processing. Less well known is the possibility of combining several such vectors into a single vector of the same dimensionality and then computing with it while retaining the identity of the original vectors: they can be recovered from the result. This allows sequences and data structures to be represented in a single vector, thereby extending neural-net computing into the symbolic domain (the term Vector-Symbolic Architecture, or VSA, is sometimes used). The required operations form the core of a new kind of computing that is most naturally realized in nanotechnology. The best-known model of the kind, and perhaps the first, is Plate's Holographic Reduced Representation from the early 1990s. Research in the area is ongoing, but the field remains largely unexplored.

Host: Garrett Kenyon, gkenyon@lanl.gov, 7-1900, IS & T
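The sketch below illustrates the combine-and-recover idea described above, using one common VSA variant with bipolar (+1/-1) vectors, elementwise multiplication for binding, and majority vote for superposition; it is not Sparse Distributed Memory or Plate's Holographic Reduced Representation specifically, and the names (random_hv, bind, bundle) are illustrative only.

    import numpy as np

    rng = np.random.default_rng(0)
    D = 10_000  # dimensionality of the hypervectors

    def random_hv():
        # Random bipolar vector; any two such vectors are nearly orthogonal.
        return rng.choice([-1, 1], size=D)

    def bind(a, b):
        # Elementwise multiply; self-inverse, so bind(a, bind(a, b)) is close to b.
        return a * b

    def bundle(*vs):
        # Superpose several vectors by majority vote; the result remains
        # similar to each of its inputs.
        return np.sign(np.sum(vs, axis=0))

    def similarity(a, b):
        # Normalized dot product in [-1, 1]; near 0 for unrelated vectors.
        return np.dot(a, b) / D

    # Codebook of atomic symbols (roles and fillers).
    codebook = {name: random_hv() for name in ["color", "shape", "red", "circle"]}

    # Encode the record {color: red, shape: circle} as one 10,000-dimensional vector.
    record = bundle(bind(codebook["color"], codebook["red"]),
                    bind(codebook["shape"], codebook["circle"]))

    # Query: what is the color? Unbind the role, then look up the nearest
    # codebook entry; the answer survives the noise added by superposition.
    noisy_filler = bind(record, codebook["color"])
    best = max(codebook, key=lambda name: similarity(noisy_filler, codebook[name]))
    print(best)  # prints "red" with overwhelming probability

Unbinding returns only an approximation of the stored filler, but in 10,000 dimensions the nearest-neighbor search over the codebook recovers the exact original, which is the noise tolerance the abstract refers to.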