Method: mlp-mdl-3h

Minimum description length (MDL) based training of a multilayer perceptron (MLP), i.e. a feedforward neural network, with a single layer of hidden units. This version uses a fixed number (3) of hidden units. The motivation for this learning method is to control the amount of information in the weights of the network. Like conventional "weight decay", it codes the network weights under a Gaussian prior distribution, but takes into account not only the means of the "noisy" weights but also their variances. The details of the procedures followed to generate the archived results are explained here, while the background is given in [1].
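To make the weight-coding cost concrete, here is a minimal Python sketch, not the archived xerion code: it computes the description length of a set of noisy weights as the KL divergence from each weight's posterior Gaussian to a zero-mean Gaussian prior. The function name, the example values, and the prior variance prior_sigma2 are assumptions made for illustration.

    import numpy as np

    def weight_description_length(mu, sigma2, prior_sigma2=1.0):
        # Hypothetical helper, not part of the archived software.
        # Each "noisy" weight is coded by a posterior Gaussian with
        # mean mu[i] and variance sigma2[i]; its coding cost is the
        # KL divergence from that posterior to a zero-mean Gaussian
        # prior with (assumed) variance prior_sigma2. Conventional
        # weight decay keeps only the mu**2 term and ignores the
        # variances.
        mu = np.asarray(mu, dtype=float)
        sigma2 = np.asarray(sigma2, dtype=float)
        kl = 0.5 * (np.log(prior_sigma2 / sigma2)
                    + (sigma2 + mu ** 2) / prior_sigma2
                    - 1.0)
        return kl.sum()  # total weight-coding cost in nats

    # Example: posterior means and variances for three weights.
    print(weight_description_length([0.8, -1.2, 0.1], [0.05, 0.10, 0.50]))

In [1] this weight-coding cost is minimized together with the cost of coding the data misfits, so the variances let the network spend fewer bits on weights whose precise values matter little.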

Software

The source is available as a compressed tar file. However, be forewarned that this software was developed in conjunction with xerion 3.1 (not uts 4.0), an old neural net simulator developed at the University of Toronto. While the simulator is available free of charge, it may prove difficult to install.

Results

Directory listing of the results available for the mlp-mdl-3h method. Put the desired files in the appropriate methods directory in your delve hierarchy, uncompress them using the command "gunzip *.gz", and unpack them using "tar -xvf *.tar".
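If you prefer to script these steps, the following Python sketch performs the same gunzip and tar operations; it assumes it is run from inside the methods directory containing the downloaded files.

    import glob, gzip, shutil, tarfile

    # Uncompress every downloaded .gz file (equivalent to "gunzip *.gz").
    for gz in glob.glob("*.gz"):
        with gzip.open(gz, "rb") as fin, open(gz[:-3], "wb") as fout:
            shutil.copyfileobj(fin, fout)

    # Unpack every resulting tar archive (equivalent to "tar -xvf *.tar").
    for name in glob.glob("*.tar"):
        with tarfile.open(name) as tar:
            tar.extractall()

Note that gunzip removes the .gz file after uncompressing, while the sketch above leaves it in place.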

Related References

[1] G. E. Hinton and D. van Camp. Keeping neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pages 5-13, 1993.
Last Updated: 28 January 1998
Comments and questions to: delve@cs.toronto.edu