# Philosophy of Bayesian Inference

Bayesian inference is an approach to statistics in which all forms of uncertainty are expressed in terms of probability.

A Bayesian approach to a problem starts with the formulation of a model that we hope is adequate to describe the situation of interest. We then formulate a prior distribution over the unknown parameters of the model, which is meant to capture our beliefs about the situation before seeing the data. After observing some data, we apply Bayes' Rule to obtain a posterior distribution for these unknowns, which takes account of both the prior and the data. From this posterior distribution we can compute predictive distributions for future observations.
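As a minimal concrete instance of this process (my illustration, not part of the original text), consider a Beta-Binomial model for coin flips, where the prior-to-posterior update and the predictive distribution both have closed forms:

```python
# Illustrative sketch: prior theta ~ Beta(a, b), data = k heads in n flips.
# Bayes' Rule gives the posterior in closed form: Beta(a + k, b + n - k).

def posterior(a, b, k, n):
    """Update a Beta(a, b) prior after observing k heads in n flips."""
    return a + k, b + (n - k)

def predictive_prob(a, b):
    """Predictive probability that the next flip is heads: the mean
    of the Beta(a, b) posterior."""
    return a / (a + b)

# A Beta(2, 2) prior encodes a mild belief that the coin is roughly fair.
a, b = posterior(2, 2, k=7, n=10)   # observe 7 heads in 10 flips
print(a, b)                         # -> 9 5
print(predictive_prob(a, b))        # -> 0.642857...
```

Note how the answer blends prior and data: the raw frequency is 7/10 = 0.7, but the prior pulls the predictive probability toward 1/2.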

This theoretically simple process can be justified as the proper approach to uncertain inference by various arguments involving consistency with clear principles of rationality. Despite this, many people are uncomfortable with the Bayesian approach, often because they view the selection of a prior as being arbitrary and subjective. It is indeed subjective, but for this very reason it is not arbitrary. There is (in theory) just one correct prior, the one that captures your (subjective) prior beliefs. In contrast, other statistical methods are truly arbitrary, in that there are usually many methods that are equally good according to non-Bayesian criteria of goodness, with no principled way of choosing between them.

Unfortunately, many "Bayesians" don't really think in true Bayesian terms. One can therefore find many pseudo-Bayesian procedures in the literature, in which models and priors are used that cannot be taken seriously as expressions of prior belief. Some examples of such pseudo-Bayesian methods:

• Using "technological" or "reference" priors chosen solely for convenience, or out of a misguided desire for pseudo-objectivity.
• Using Bayesian model comparison when you know ahead of time that some (maybe all!) of the models you consider can't possibly be good descriptions of reality.
• Using priors that vary with the amount of data that you have collected.

These procedures have no real Bayesian justification, and since they are usually offered with no other justification either, I consider them to be highly dubious.

In some cases, it may indeed be difficult to use a true Bayesian method. We may not be sufficiently skilled at translating our subjective prior beliefs into a mathematically formulated model and prior. One of my interests is in addressing this difficulty, particularly when dealing with models that have an infinite number of parameters. There may also be computational difficulties with the Bayesian approach. Many of these can be addressed using Markov chain Monte Carlo methods, which are another main focus of my research.
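To show the kind of computation Markov chain Monte Carlo makes possible, here is a minimal random-walk Metropolis sampler (my sketch, using a standard normal target for simplicity; a real posterior would replace `log_target`):

```python
import math
import random

def log_target(x):
    """Log-density of the target, up to a constant.
    Here a standard normal stands in for an unnormalized posterior."""
    return -0.5 * x * x

def metropolis(n_samples, step=1.0, x0=0.0, seed=0):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, target(proposal) / target(current))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(20000)
mean = sum(samples) / len(samples)   # should be close to 0
```

Only the ratio of target densities is needed, so the normalizing constant of the posterior, which is usually intractable, never has to be computed.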

When Bayesian methods are too difficult to apply, I think we should use non-Bayesian methods, justified by non-Bayesian criteria, rather than pretend we are being Bayesian when we aren't.
