Master's Thesis [ps][pdf]
Experimental Results: Samples from the models
A sample from the training set: *
Model | greedy 0 | greedy 1 | greedy 2 | fine tuned |
(1) TRBM | * | * | * | * |
(2) TRBM-HH | * | * | * | * |
(3) TRBM-VV-VH | * | * | * | * |
(4) WRBM | * | * | * | * |
(5) Multigrid-WRBM | * | * | * | * |
(6) The mixed model | * | * | * | * |
Here greedy i denotes layer-by-layer training of layer i (the layers are trained sequentially, bottom-up), and fine tuned denotes running the Wake-Sleep algorithm on the model produced by the greedy training.
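The greedy layer-by-layer procedure above can be sketched in numpy. This is a generic illustration of layer-wise pretraining with plain RBMs and one step of contrastive divergence, not the thesis's actual TRBM/WRBM code; all function names, sizes, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    # Draw binary samples with the given Bernoulli probabilities.
    return (rng.random(p.shape) < p).astype(float)

def train_rbm(data, n_hidden, epochs=5, lr=0.1):
    """Train a single binary RBM with CD-1 (illustrative, full-batch)."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)
    for _ in range(epochs):
        v0 = data
        h0_p = sigmoid(v0 @ W + b_h)          # up pass
        h0 = sample(h0_p)
        v1_p = sigmoid(h0 @ W.T + b_v)        # reconstruction
        h1_p = sigmoid(v1_p @ W + b_h)
        # CD-1 gradient: positive phase minus negative phase
        W += lr * (v0.T @ h0_p - v1_p.T @ h1_p) / len(data)
        b_v += lr * (v0 - v1_p).mean(axis=0)
        b_h += lr * (h0_p - h1_p).mean(axis=0)
    return W, b_v, b_h

def greedy_train(data, layer_sizes):
    """Greedy training: layer i is fit to the hidden activations
    produced by the already-trained layers 0..i-1."""
    layers = []
    x = data
    for n_hidden in layer_sizes:
        W, b_v, b_h = train_rbm(x, n_hidden)
        layers.append((W, b_v, b_h))
        x = sigmoid(x @ W + b_h)  # feed activations up to the next layer
    return layers

# Toy binary data; three layers correspond to greedy 0, 1, 2.
data = sample(np.full((100, 20), 0.3))
layers = greedy_train(data, [16, 8, 4])
```

The Wake-Sleep fine-tuning stage would then adjust all layers jointly, using separate recognition and generative passes; it is omitted here for brevity.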
A sample from a TRBM trained for at least two weeks on a slightly different dataset: *
A scipy implementation: code
(Note: the readme of the code currently refers to the WRBM inconsistently as the ABM or ARBM; this may be fixed eventually.)
home