Variational Learning for Switching State-Space Models
Zoubin Ghahramani and Geoffrey E. Hinton
Gatsby Computational Neuroscience Unit
University College London
London WC1N 3AR, U.K.
Abstract
We introduce a new statistical model for time series that
iteratively segments data into regimes with approximately linear dynamics and learns the
parameters of each of these linear regimes. This model combines and generalizes two of the
most widely used stochastic time series models, hidden Markov models and linear
dynamical systems, and is closely related to models used in the control and
econometrics literatures. It can also be derived by extending the mixture-of-experts
neural network (Jacobs et al., 1991) to its fully dynamical version, in which both expert
and gating networks are recurrent. Inferring the posterior probabilities of the hidden
states of this model is computationally intractable, and therefore the exact Expectation
Maximization (EM) algorithm cannot be applied. However, we present a variational
approximation that maximizes a lower bound on the log likelihood and makes use of both the
forward-backward recursions for hidden Markov models and the Kalman filter recursions for
linear dynamical systems. We tested the algorithm both on artificial data sets and on a
natural data set of respiration force from a patient with sleep apnea. The results suggest
that variational approximations are a viable method for inference and learning in
switching state-space models.
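
For readers who want the abstract's claims in symbols, here is a minimal sketch; the notation is ours, not taken from the paper. Assume M linear regimes indexed by m, a discrete switch variable s_t following a Markov chain with transition matrix \Phi, continuous state vectors x_t^{(m)}, and observations y_t generated by whichever regime the switch selects:

    % Sketch of a switching state-space model (notation assumed, not the paper's)
    s_t \mid s_{t-1} \sim \mathrm{Discrete}(\Phi_{s_{t-1},\cdot})          % hidden Markov switch
    x_t^{(m)} = A^{(m)} x_{t-1}^{(m)} + w_t^{(m)}, \quad w_t^{(m)} \sim \mathcal{N}(0, Q^{(m)})   % M linear-Gaussian state chains
    y_t = C^{(s_t)} x_t^{(s_t)} + v_t, \quad v_t \sim \mathcal{N}(0, R)    % observation from the active regime

Exact inference is intractable because the posterior over the continuous states is a mixture of M^T Gaussians for a sequence of length T, one component per switch sequence. The variational approach instead maximizes a lower bound on the log likelihood, valid for any tractable distribution Q by Jensen's inequality:

    \log P(y_{1:T}) \ge \mathbb{E}_{Q}[\log P(s_{1:T}, x_{1:T}, y_{1:T})] - \mathbb{E}_{Q}[\log Q(s_{1:T}, x_{1:T})]

Choosing Q to factor into a chain over the switch variables and separate chains over the continuous states, consistent with the abstract's description, is what allows the forward-backward and Kalman filter recursions to be reused as subroutines when tightening the bound.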
Extended version of Technical Report CRG-TR-96-3 (1996)
Submitted for Publication (1998)