
This page gives an overview of the learning methods that have been
contributed to Delve. Under the individual methods you will find
definitions, implementations and results of applying these methods to
some of the Delve datasets. The descriptions should give enough detail
that you could reproduce the results after reading the documentation.
The methods are grouped according to the type of learning to which they
are applicable; some methods appear in multiple groups. Each method has
been given a unique Delve name.
The results and implementation for each method are contained in tar
files. The tar files can be accessed from the individual page
describing the method or from the results links below.
Supervised Learning: Regression
- 1nn-1 download results
One nearest neighbour based on Euclidean distance. Contributed by
Radford Neal.
- base-1 download results
Provides a baseline of performance that can be obtained by completely
ignoring the input attributes, basing prediction solely on simple
statistics of the targets in the training cases, namely the mean and
median of the training targets. Contributed by Radford Neal.
- gp-map-1 download results
Gaussian processes for regression trained with a maximum a posteriori
approach, implemented with conjugate gradient optimization. Contributed
by Carl Edward Rasmussen.
- gp-mc-1 download results
Gaussian processes for regression trained with a fully Bayesian
approach using an MCMC implementation. Contributed by Carl Edward
Rasmussen.
- hme-el-1 download results
Hierarchical mixtures-of-experts trained using ensemble learning.
Contributed by Steve Waterhouse.
- hme-ese-1 download results
Hierarchical mixtures-of-experts trained using early stopping.
Contributed by Steve Waterhouse.
- hme-grow-1 download results
Hierarchical mixtures-of-experts trained using growing and early
stopping. Contributed by Steve Waterhouse.
- knn-cv-1 download results
K-nearest neighbours for regression, using leave-one-out
cross-validation to select K. The uniformly weighted average of the
neighbours is used for prediction; a minimal sketch of this procedure
appears after this list. Contributed by Carl Edward Rasmussen.
- lin-1 download results
Linear least
squares regression. Contributed by Carl Edward Rasmussen.
- mars3.6-bag-1 download results
Multivariate Adaptive Regression Splines (MARS) version 3.6 with Bagging.
MARS was written by Jerome
Friedman; a front-end for Bagging was added by
Michael Revow.
- me-el-1 download results
Mixtures-of-experts trained using ensemble learning. Contributed by
Steve Waterhouse.
- me-ese-1 download results
Mixtures-of-experts trained using early stopping. Contributed by
Steve Waterhouse.
- mlp-bgd-1 download results
mlp-bgd-2 download results
mlp-bgd-2b download results
mlp-bgd-3 download results
Variations on multilayer perceptron networks trained by batch gradient
descent with early stopping, combined in ensembles, with and without
methods for adapting to the varying relevance of inputs. These methods
were written by Radford Neal.
- mlp-ese-1 download results
Multilayer perceptron ensembles trained with early stopping. The
ensemble consists of networks with identical architectures: fully
connected, with a single hidden layer of hyperbolic tangent units. The
networks are trained using conjugate gradient optimization. Contributed
by Carl Edward Rasmussen.
- mlp-mc-1 download results
Multilayer perceptron networks trained by Bayesian learning using MCMC
methods. Designed by
Carl Edward Rasmussen,
using software written by Radford Neal.
- mlp-mc-2 download results
mlp-mc-2b download results
mlp-mc-3 download results
mlp-mc-3b download results
mlp-mc-4 download results
mlp-mc-4b download results
Variations on multilayer perceptron networks trained by Bayesian learning
using MCMC methods, with and without Automatic Relevance Determination. These
methods were written by Radford Neal.
- mlp-mdl-vh download results
Multilayer perceptron networks trained using Minimum Description Length
(MDL) principles, with a variable number of hidden units. Contributed
by Michael Revow.
- mlp-mdl-3h download results
Multilayer perceptron networks trained using Minimum Description Length
(MDL) principles, with a fixed number of hidden units. Contributed by
Michael Revow.
- mlp-wd-1 download results
Multilayer perceptron networks trained using weight decay. Contributed
by Michael Revow.
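To make the K-selection behind knn-cv-1 concrete, the sketch below
shows K-nearest-neighbour regression with leave-one-out
cross-validation, in Python with NumPy. It is a minimal illustration of
the general technique, not the contributed Delve implementation; the
function names and the k_max limit are hypothetical, and the data are
assumed to be NumPy arrays (X of shape n-by-p, y of length n).

    import numpy as np

    def loo_knn_error(X, y, k):
        """Leave-one-out squared error of uniform k-NN regression."""
        n = len(X)
        err = 0.0
        for i in range(n):
            # Euclidean distances from case i to all training cases.
            d = np.sqrt(((X - X[i]) ** 2).sum(axis=1))
            d[i] = np.inf                # exclude the held-out case itself
            nearest = np.argsort(d)[:k]  # indices of the k nearest cases
            pred = y[nearest].mean()     # uniformly weighted average
            err += (pred - y[i]) ** 2
        return err / n

    def select_k(X, y, k_max=25):
        """Pick the K minimising leave-one-out error on the training set."""
        ks = range(1, min(k_max, len(X) - 1) + 1)
        return min(ks, key=lambda k: loo_knn_error(X, y, k))

Once K is selected, a test case is predicted with the same uniformly
weighted average over its K nearest training cases.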
Supervised Learning: Classification
- 1nn-1 download results
One nearest neighbour based on Euclidean distance; a minimal sketch of
the technique appears after this list. Contributed by Radford Neal.
- base-1 download results
Provides a baseline of performance that can be obtained by completely
ignoring the input attributes, basing prediction solely on simple
statistics of the targets in the training cases, namely the frequencies
of the target classes. Contributed by Radford Neal.
- cart-1 download results
A basic classification and regression tree (CART) implementation.
- knn-class-1 download results
A K-nearest neighbour implementation for classification.
- mlp-bgd-1 download results
mlp-bgd-2 download results
Variations on multilayer perceptron networks trained by batch gradient
descent with early stopping, combined in ensembles, with and without
methods for adapting to the varying relevance of inputs. These methods
were written by Radford Neal.
- mlp-mc-2 download results
mlp-mc-3 download results
mlp-mc-4 download results
Variations on multilayer perceptron networks trained by Bayesian learning
using MCMC methods, with and without Automatic Relevance Determination. These
methods were written by Radford Neal.
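As noted under 1nn-1 above, the sketch below shows one nearest
neighbour classification by Euclidean distance. As with the regression
sketch, it is a minimal illustration of the general technique, not the
contributed Delve code; the function name is hypothetical, and the data
are assumed to be NumPy arrays.

    import numpy as np

    def one_nn_predict(X_train, y_train, X_test):
        """Assign each test case the class of its nearest training case."""
        preds = []
        for x in X_test:
            # Squared Euclidean distance to every training case; the
            # nearest neighbour is the same under squared or plain
            # distance, so the square root can be skipped.
            d = ((X_train - x) ** 2).sum(axis=1)
            preds.append(y_train[np.argmin(d)])
        return np.array(preds)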
Unsupervised Learning
Currently there are no methods for unsupervised learning in Delve.