Learning Population Codes by Minimizing Description Length
Richard S. Zemel
Computational Neurobiology Laboratory
The Salk Institute
Geoffrey E. Hinton
Department of Computer Science
University of Toronto
Abstract
The minimum description length (MDL) principle can be used to train
the hidden units of a neural network to extract a representation that is cheap to describe
but nonetheless allows the input to be reconstructed accurately. We show how MDL can be
used to develop highly redundant population codes. Each hidden unit has a location in a
low-dimensional implicit space. If the hidden unit activities form a bump of a standard
shape in this space, they can be cheaply encoded by the center of this bump. So the
weights from the input units to the hidden units in an autoencoder are trained to make the
activities form a standard bump. The coordinates of the hidden units in the implicit space
are also learned, thus allowing flexibility, as the network develops a discontinuous
topography when presented with different input classes.
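As a rough illustration of the idea sketched above, here is a minimal numerical sketch (not the authors' code) of the bump-fitting encoding cost: hidden units live at learned coordinates in a low-dimensional implicit space, and the encoding cost measures how far the hidden activities deviate from a standard Gaussian bump centered at the activity-weighted mean position. All names and parameter choices (`sigma`, `bump_cost`, the softmax activation, the squared-error surrogate for description length) are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of the population-code / bump-fitting cost (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)

n_input, n_hidden = 16, 25
W_enc = rng.normal(scale=0.1, size=(n_hidden, n_input))  # input -> hidden weights
W_dec = rng.normal(scale=0.1, size=(n_input, n_hidden))  # hidden -> output weights
coords = rng.uniform(0.0, 1.0, size=(n_hidden, 2))       # learned implicit-space positions
sigma = 0.15                                             # assumed width of the standard bump

def forward(x):
    """Hidden activities; softmax keeps them positive and normalized."""
    h = W_enc @ x
    h = np.exp(h - h.max())
    return h / h.sum()

def bump_cost(h):
    """Squared mismatch between the activities and a standard Gaussian bump
    centered at the activity-weighted mean position -- a stand-in for the
    MDL cost of encoding the activities by the bump's center."""
    center = h @ coords                        # activity-weighted mean position
    d2 = ((coords - center) ** 2).sum(axis=1)  # squared distance to the center
    bump = np.exp(-d2 / (2 * sigma ** 2))
    bump /= bump.sum()
    return ((h - bump) ** 2).sum()

x = rng.normal(size=n_input)
h = forward(x)
recon_err = ((x - W_dec @ h) ** 2).sum()       # reconstruction cost
total = recon_err + bump_cost(h)               # description-length surrogate
print(f"reconstruction: {recon_err:.3f}  bump mismatch: {bump_cost(h):.4f}")
```

In a full training loop, gradients of this combined cost would adjust both the weights and the hidden units' implicit-space coordinates, which is what lets the topography become discontinuous across input classes.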
Neural Computation, 7(3):549-564, 1995.