
Using Expectation-Maximization for Reinforcement Learning
Peter Dayan
Department of Brain and Cognitive Sciences, CBCL, MIT, Cambridge, MA

Geoffrey Hinton
Department of Computer Science, University of Toronto, Canada
Abstract
We discuss Hinton's (1989) relative payoff procedure (RPP), a static reinforcement learning algorithm whose foundation is not stochastic gradient ascent. We show circumstances under which applying the RPP is guaranteed to increase the mean return, even though it can make large changes in the values of the parameters. The proof is based on a mapping between the RPP and a form of the expectation-maximization procedure of Dempster, Laird, and Rubin (1977).
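For concreteness, here is a minimal sketch (ours, not from the paper) of the batch RPP update for independent binary stochastic units: each unit's firing probability is replaced by the reward-weighted average of its own action, p_i <- E[r a_i] / E[r], which is valid when payoffs r are nonnegative. The toy payoff function, variable names, and trial counts below are illustrative assumptions.

import numpy as np

# Toy problem (our illustrative choice, not from the paper): the
# nonnegative payoff counts how many binary actions match a target.
TARGET = np.array([1, 0, 1, 1, 0])

def payoff(actions):
    return np.sum(actions == TARGET)

def rpp_update(p, rng, n_trials=2000):
    """One batch RPP step: each unit's firing probability becomes the
    reward-weighted average of its own binary action, p_i <- E[r a_i]/E[r]."""
    actions = (rng.random((n_trials, p.size)) < p).astype(float)
    rewards = np.array([payoff(a) for a in actions])  # r >= 0 by assumption
    return rewards @ actions / rewards.sum()

rng = np.random.default_rng(0)
p = np.full(TARGET.size, 0.5)
for step in range(10):
    p = rpp_update(p, rng)
# p moves toward TARGET; by the paper's EM argument, each exact
# (infinite-sample) RPP step cannot decrease the mean return.

Note that the update can move p_i arbitrarily far in a single step; the paper's point is that, unlike for a fixed-step-size gradient method, monotone improvement is nonetheless guaranteed via the EM mapping.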
Download [ps.gz] [pdf]
Neural Computation (1997) 9:2, 271-278