Week 8: Bayesian Regression

Abstract: Bayesian formalisms deal with uncertainty in parameters by placing probability distributions over them, combining a prior belief with the likelihood of the data to form a posterior.

Setup

notutils

This small package is a helper package for various notebook utilities used below.

The software can be installed using

%pip install notutils

from the command prompt where you can access your python installation.

The code is also available on GitHub: https://github.com/lawrennd/notutils

Once notutils is installed, it can be imported in the usual manner.

import notutils

pods

In Sheffield we created a suite of software tools for ‘Open Data Science’. Open data science is an approach to sharing code, models and data that should make it easier for companies, health professionals and scientists to gain access to data science techniques.

You can also check this blog post on Open Data Science.

The software can be installed using

%pip install pods

from the command prompt where you can access your python installation.

The code is also available on GitHub: https://github.com/lawrennd/ods

Once pods is installed, it can be imported in the usual manner.

import pods

mlai

The mlai software is a suite of helper functions for teaching and demonstrating machine learning algorithms. It was first used in the Machine Learning and Adaptive Intelligence course in Sheffield in 2013.

The software can be installed using

%pip install mlai

from the command prompt where you can access your python installation.

The code is also available on GitHub: https://github.com/lawrennd/mlai

Once mlai is installed, it can be imported in the usual manner.

import mlai

Overdetermined System

We can motivate the introduction of probability by considering systems where there are more observations than unknowns. In particular, we can consider the simple fitting of the gradient and offset of a line, \[ y= mx+c. \] What happens if we have three pairs of observations of \(x\) and \(y\), \(\{x_i, y_i\}_{i=1}^3\)? The issue can be resolved by introducing a type of slack variable, \(\epsilon_i\), known as noise, such that for each observation we have the equation, \[ y_i = mx_i + c + \epsilon_i. \]
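
As a concrete illustration, here is a minimal sketch (with made-up observations, not part of the lecture code) that fits \(m\) and \(c\) to three points by least squares; the residuals it reports play the role of the slack variables \(\epsilon_i\).

import numpy as np

# three made-up observations: an overdetermined system for m and c
x_obs = np.array([1.0, 2.0, 3.0])
y_obs = np.array([3.1, 4.9, 7.2])

# design matrix: one column for the gradient, one column of ones for the offset
A = np.column_stack([x_obs, np.ones_like(x_obs)])

# least squares solution; the residuals are the slack variables epsilon_i
(m, c), *_ = np.linalg.lstsq(A, y_obs, rcond=None)
epsilon = y_obs - (m*x_obs + c)
print('m =', m, 'c =', c, 'residuals =', epsilon)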

Underdetermined System

What about the situation where you have more parameters than data in your simultaneous equations? This is known as an underdetermined system. In fact, this setup is in some sense easier to solve, because we don’t need to think about introducing a slack variable (although it might make a lot of sense from a modelling perspective to do so).

The way Laplace proposed resolving an overdetermined system was to introduce slack variables, \(\epsilon_i\), which needed to be estimated for each point. The slack variable represents the difference between our actual prediction and the true observation. This is known as the residual. By introducing the slack variables, we now have an additional \(n\) variables to estimate, one for each data point, \(\{\epsilon_i\}\). This turns the overdetermined system into an underdetermined one: introduction of the \(n\) slack variables, plus the original \(m\) and \(c\), gives us \(n+2\) parameters to be estimated from \(n\) observations, which makes the system underdetermined. However, we then made a probabilistic assumption about the slack variables: we assumed that they were distributed according to a probability density. For the moment we have been assuming that density is the Gaussian, \[\epsilon_i \sim \mathcal{N}\left(0,\sigma^2\right),\] with zero mean and variance \(\sigma^2\).

The follow-up question is whether we can do the same thing with the parameters. If we have two parameters and only one observation, can we place a probability distribution over the parameters as we did with the slack variables? The answer is yes.

Underdetermined System

Figure: An underdetermined system can be fit by considering uncertainty. Multiple solutions are consistent with one specified point.

A Philosophical Dispute: Probabilistic Treatment of Parameters?

From a philosophical perspective, placing a probability distribution over the parameters is known as the Bayesian approach. This is because Thomas Bayes, in a 1763 essay published at the Royal Society, introduced the Bernoulli distribution with a probabilistic interpretation for the parameters. Later statisticians, such as Ronald Fisher, objected to the use of probability distributions for parameters, and in an effort to discredit the approach they referred to it as Bayesian. However, the earliest practitioners of modelling, such as Laplace, applied the approach as the most natural thing to do for dealing with unknowns (whether they were parameters or variables). Unfortunately, this dispute led to a split in the modelling community that still has echoes today. It is known as the Bayesian vs Frequentist controversy. From my own perspective, I think that it is a false dichotomy, and that the two approaches are actually complementary. My own research focus is on modelling, and in that context the use of probability is vital. For frequentist statisticians, such as Fisher, the emphasis was on the value of the evidence in the data for a particular hypothesis. This is known as hypothesis testing. The two approaches can be unified because one of the most important approaches to hypothesis testing is to compute the ratio of the likelihoods, and the result of applying a probability distribution to the parameters is merely to arrive at a different form of the likelihood.

The Bayesian Controversy: Philosophical Underpinnings

A segment from the lecture in 2012 on philosophical underpinnings.

Figure: The philosophical underpinnings of uncertainty, as discussed in 2012 MLAI lecture.

Further Reading

  • Section 1.2.3 (pg 21–24) of Bishop (2006)

  • Sections 3.1-3.4 (pg 95-117) of Rogers and Girolami (2011)

  • Section 1.2.6 (start from just past eq 1.64 pg 30-32) of Bishop (2006)

Sum of Squares and Probability

In the overdetermined system we introduced a new set of slack variables, \(\{\epsilon_i\}_{i=1}^n\), on top of our parameters \(m\) and \(c\). We dealt with the variables by placing a probability distribution over them. This gives rise to the likelihood, and for the case of Gaussian distributed variables, it gives rise to the sum of squares error. It was Gauss who first made this connection in his volume Theoria Motus Corporum Coelestium (Gauss, 1809), written in Latin.

Figure: Gauss’s book Theoria Motus Corporum Coelestium (Gauss, 1809) motivates the use of least squares through a probabilistic formulation.

The relevant section roughly translates as

… It is clear, that for the product \(\Omega = h^\mu \pi^{-\frac{1}{2}\mu} e^{-hh(vv + v^\prime v^\prime + v^{\prime\prime} v^{\prime\prime} + \dots)}\) to be maximised the sum \(vv + v ^\prime v^\prime + v^{\prime\prime} v^{\prime\prime} + \text{etc}.\) ought to be minimized. Therefore, the most probable values of the unknown quantities \(p , q, r , s \text{etc}.\), should be those for which the sum of the squares of the differences between the functions \(V, V^\prime, V^{\prime\prime} \text{etc}\), and the observed values is minimized, provided all observations are presumed to be of the same degree of precision.

It’s on the strength of this paragraph that the density is known as the Gaussian, despite the fact that four pages later Gauss credits the necessary integral for the density to Laplace, and it was also Laplace that did a lot of the original work on dealing with these errors through probability. Stephen Stigler’s book on the measurement of uncertainty before 1900 (Stigler, 1999) has a nice chapter on this.

Figure: Gauss credits Laplace with the invention of the Gaussian density here.

where the crediting to Laplace is about halfway through the last paragraph. This book was published in 1809, four years after Legendre had published the method in an appendix to his work on the orbits of comets. Gauss goes on to make a claim for priority on the method on page 221 (towards the end of the first paragraph …).

Figure: Gauss places his claim for priority on least squares over Legendre who published first. Gauss claims he used least squares for his prediction of the location of Ceres.

The Bayesian Approach

Now we will study Bayesian approaches to regression. In the Bayesian approach we define a prior density over our parameters, \(m\) and \(c\), or more generally \(\mathbf{ w}\). This prior distribution gives us a range of expected values for our parameters before we have seen the data. The object in Bayesian inference is then to compute the posterior density, which captures the effect on the density of having observed the data. In standard probability notation we write the prior distribution as, \[ p(\mathbf{ w}), \] so it is the marginal distribution for the parameters, i.e. the distribution we have for the parameters without any knowledge about the data. The posterior distribution is written as, \[ p(\mathbf{ w}|\mathbf{ y}, \mathbf{X}). \] So the posterior distribution is the conditional distribution for the parameters given the data, which in this case consists of pairs of observations including response variables (or targets), \(y_i\), and covariates (or inputs), \(\mathbf{ x}_i\), where we are allowing the inputs to be multivariate.

The posterior is recovered from the prior using Bayes’ rule, which is simply a rewriting of the product rule. We can recover Bayes’ rule as follows. The product rule of probability tells us that the joint distribution is given as the product of the conditional and the marginal. Dropping the inputs from our conditioning for the moment, we have, \[ p(\mathbf{ w}, \mathbf{ y})=p(\mathbf{ y}|\mathbf{ w})p(\mathbf{ w}), \] where we see we have related the joint density to the prior density and the likelihood from our previous investigation of regression, \[ p(\mathbf{ y}|\mathbf{ w}) = \prod_{i=1}^n\mathcal{N}\left(y_i|\mathbf{ w}^\top \mathbf{ x}_i, \sigma^2\right), \] which arises from the assumption that our observation is given by \[ y_i = \mathbf{ w}^\top \mathbf{ x}_i + \epsilon_i. \] In other words, this is the Gaussian likelihood we have been fitting by minimizing the sum of squares. Have a look at the session on multivariate regression as a reminder.

We’ve introduced the likelihood, but we don’t yet have a relationship with the posterior. However, the product rule can also be written in the following way, \[ p(\mathbf{ w}, \mathbf{ y}) = p(\mathbf{ w}|\mathbf{ y})p(\mathbf{ y}), \] where here we have simply used the opposite conditioning. We’ve already introduced the posterior density above. This is the density that represents our belief about the parameters after observing the data. It is combined with the marginal likelihood, sometimes also known as the evidence. It is called the marginal likelihood because it is the original likelihood of the data with the parameters marginalised, \(p(\mathbf{ y})\). Here it’s conditioned on nothing, but in practice you should always remember that everything here is conditioned on things like model choice: which set of basis functions. Because it’s a regression problem, it’s also conditioned on the inputs. Using the equality between the two different forms of the joint density we recover \[ p(\mathbf{ w}|\mathbf{ y}) = \frac{p(\mathbf{ y}|\mathbf{ w})p(\mathbf{ w})}{p(\mathbf{ y})} \] where we divided both sides by \(p(\mathbf{ y})\) to recover this result. Let’s re-introduce the conditioning on the input locations (or covariates), \(\mathbf{X}\), to write the full form of Bayes’ rule for the regression problem, \[ p(\mathbf{ w}|\mathbf{ y}, \mathbf{X}) = \frac{p(\mathbf{ y}|\mathbf{ w}, \mathbf{X})p(\mathbf{ w})}{p(\mathbf{ y}|\mathbf{X})} \] where the posterior density for the parameters given the data is \(p(\mathbf{ w}|\mathbf{ y}, \mathbf{X})\), the marginal likelihood is \(p(\mathbf{ y}|\mathbf{X})\), the prior density is \(p(\mathbf{ w})\) and our original regression likelihood is given by \(p(\mathbf{ y}|\mathbf{ w}, \mathbf{X})\). It turns out that to compute the posterior the only things we need to do are define the prior and the likelihood. The other term on the right hand side can be computed by the sum rule. It is one of the key equations of Bayesian inference, the expectation of the likelihood under the prior; this process is known as marginalisation, \[ p(\mathbf{ y}|\mathbf{X}) = \int p(\mathbf{ y}|\mathbf{ w},\mathbf{X})p(\mathbf{ w}) \text{d}\mathbf{ w}. \] I like the term marginalisation, and the description of the probability as the marginal likelihood, because (for me) it somewhat has the implication that the variable name has been removed, and (perhaps) written in the margin. Marginalisation of a variable goes from a likelihood where the variable is in place, to a new likelihood where all possible values of that variable (under the prior) have been considered and weighted in the integral. This implies that all we need to specify our model is to define the likelihood and the prior. We already have our likelihood from our earlier discussion, so our focus now turns to the prior density.

Prior Distribution

The tradition in Bayesian inference is to place a probability density over the parameters of interest in your model. This choice is made regardless of whether you generally believe those parameters to be stochastic or deterministic in origin. In other words, to a Bayesian, the modelling treatment does not differentiate between epistemic and aleatoric uncertainty. For linear regression we could consider the following Gaussian prior on the intercept parameter, \[c \sim \mathcal{N}\left(0,\alpha_1\right)\] where \(\alpha_1\) is the variance of the prior distribution, its mean being zero.

Posterior Distribution

The prior distribution is combined with the likelihood of the data given the parameters \(p(y|c)\) to give the posterior via Bayes’ rule, \[ p(c|y) = \frac{p(y|c)p(c)}{p(y)} \] where \(p(y)\) is the marginal probability of the data, obtained through integration over the joint density, \(p(y, c)=p(y|c)p(c)\). Overall the equation can be summarized as, \[ \text{posterior} = \frac{\text{likelihood}\times \text{prior}}{\text{marginal likelihood}}. \]

Figure: Combining a Gaussian likelihood with a Gaussian prior to form a Gaussian posterior

Another way of seeing what’s going on is to note that the numerator of Bayes’ rule merely multiplies the likelihood by the prior. The denominator is not a function of \(c\). So the functional form is entirely determined by the multiplication of prior and likelihood. This has the effect of ensuring that the posterior only has probability mass in regions where both the prior and the likelihood have probability mass.

The marginal likelihood, \(p(y)\), operates to ensure that the distribution is normalised.

For the Gaussian case, the normalisation of the posterior can be performed analytically. This is because both the prior and the likelihood have the form of an exponentiated quadratic, \[ \exp(a^2)\exp(b^2) = \exp(a^2 + b^2), \] and the properties of the exponential mean that the product of two exponentiated quadratics is also an exponentiated quadratic. That implies that the posterior is also Gaussian, because a normalized exponentiated quadratic is a Gaussian distribution.

\[p(c) = \frac{1}{\sqrt{2\pi\alpha_1}} \exp\left(-\frac{1}{2\alpha_1}c^2\right)\] \[p(\mathbf{ y}|\mathbf{ x}, c, m, \sigma^2) = \frac{1}{\left(2\pi\sigma^2\right)^{\frac{n}{2}}} \exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n(y_i - mx_i - c)^2\right)\]

\[p(c| \mathbf{ y}, \mathbf{ x}, m, \sigma^2) = \frac{p(\mathbf{ y}|\mathbf{ x}, c, m, \sigma^2)p(c)}{p(\mathbf{ y}|\mathbf{ x}, m, \sigma^2)}\]

\[p(c| \mathbf{ y}, \mathbf{ x}, m, \sigma^2) = \frac{p(\mathbf{ y}|\mathbf{ x}, c, m, \sigma^2)p(c)}{\int p(\mathbf{ y}|\mathbf{ x}, c, m, \sigma^2)p(c) \text{d} c}\]

\[p(c| \mathbf{ y}, \mathbf{ x}, m, \sigma^2) \propto p(\mathbf{ y}|\mathbf{ x}, c, m, \sigma^2)p(c)\]

\[\begin{aligned} \log p(c | \mathbf{ y}, \mathbf{ x}, m, \sigma^2) =&-\frac{1}{2\sigma^2} \sum_{i=1}^n(y_i-c - mx_i)^2-\frac{1}{2\alpha_1} c^2 + \text{const}\\ = &-\frac{1}{2\sigma^2}\sum_{i=1}^n(y_i-mx_i)^2 -\left(\frac{n}{2\sigma^2} + \frac{1}{2\alpha_1}\right)c^2\\ & + c\frac{\sum_{i=1}^n(y_i-mx_i)}{\sigma^2}, \end{aligned}\]

We can complete the square of the quadratic form to obtain \[\log p(c | \mathbf{ y}, \mathbf{ x}, m, \sigma^2) = -\frac{1}{2\tau^2}(c - \mu)^2 +\text{const},\] where \(\tau^2 = \left(n\sigma^{-2} +\alpha_1^{-1}\right)^{-1}\) and \(\mu = \frac{\tau^2}{\sigma^2} \sum_{i=1}^n(y_i-mx_i)\).
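
To make these expressions concrete, here is a small numerical sketch (with made-up data and an assumed slope \(m\), noise variance and prior variance) that evaluates \(\tau^2\) and \(\mu\) directly from the formulae above.

import numpy as np

# made-up data and assumed values for the slope, noise variance and prior variance
x_toy = np.array([1.0, 2.0, 3.0, 4.0])
y_toy = np.array([2.9, 5.1, 7.2, 8.8])
m = 2.0        # assumed slope
sigma2 = 0.1   # assumed noise variance
alpha1 = 1.0   # prior variance on c

n = x_toy.shape[0]
tau2 = 1.0/(n/sigma2 + 1.0/alpha1)          # posterior variance of c
mu = (tau2/sigma2)*np.sum(y_toy - m*x_toy)  # posterior mean of c
print('posterior mean of c:', mu, 'posterior variance of c:', tau2)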

The Joint Density

  • Really want to know the joint posterior density over the parameters \(c\) and \(m\).
  • Could now integrate out over \(m\), but it’s easier to consider the multivariate case.

Two Dimensional Gaussian

Consider the distribution of height (in meters) of an adult male human population. We will approximate the marginal density of heights as a Gaussian density with mean given by \(1.7\text{m}\) and a standard deviation of \(0.15\text{m}\), implying a variance of \(\sigma^2=0.0225\), \[ p(h) \sim \mathcal{N}\left(1.7,0.0225\right). \] Similarly, we assume that the weights of the population are distributed according to a Gaussian density with a mean of \(75 \text{kg}\) and a standard deviation of \(6\text{kg}\) (implying a variance of 36), \[ p(w) \sim \mathcal{N}\left(75,36\right). \]

Figure: Gaussian distributions for height and weight.

Independence Assumption

First of all, we make an independence assumption: we assume that height and weight are independent. The definition of probabilistic independence is that the joint density, \(p(w, h)\), factorizes into its marginal densities, \[ p(w, h) = p(w)p(h). \] Given this assumption we can sample from the joint distribution by independently sampling weights and heights.
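
A minimal sketch of this independent sampling (assuming matplotlib is available for the scatter plot) might look as follows.

import numpy as np
import matplotlib.pyplot as plt

num_samples = 200
h_samples = np.random.normal(1.7, 0.15, size=num_samples)  # heights in meters
w_samples = np.random.normal(75, 6, size=num_samples)      # weights in kg

# under the independence assumption the two sets of samples are drawn separately
plt.plot(h_samples, w_samples, 'x')
plt.xlabel('height/m')
plt.ylabel('weight/kg')
plt.show()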

Figure: Samples from independent Gaussian variables that might represent heights and weights.

In reality height and weight are not independent. Taller people tend on average to be heavier, and heavier people are likely to be taller. This is reflected by the body mass index (BMI), a ratio suggested by one of the fathers of statistics, Adolphe Quetelet. Quetelet was interested in the notion of the average man and collected various statistics about people. He defined the BMI to be, \[ \text{BMI} = \frac{w}{h^2}. \] To deal with this dependence we now introduce the notion of correlation to the multivariate Gaussian density.

Sampling Two Dimensional Variables

Figure: Samples from correlated Gaussian variables that might represent heights and weights.

Independent Gaussians

\[ p(w, h) = p(w)p(h) \]

\[ p(w, h) = \frac{1}{\sqrt{2\pi \sigma_1^2}\sqrt{2\pi\sigma_2^2}} \exp\left(-\frac{1}{2}\left(\frac{(w-\mu_1)^2}{\sigma_1^2} + \frac{(h-\mu_2)^2}{\sigma_2^2}\right)\right) \]

\[ p(w, h) = \frac{1}{\sqrt{2\pi\sigma_1^22\pi\sigma_2^2}} \exp\left(-\frac{1}{2}\left(\begin{bmatrix}w \\ h\end{bmatrix} - \begin{bmatrix}\mu_1 \\ \mu_2\end{bmatrix}\right)^\top\begin{bmatrix}\sigma_1^2& 0\\0&\sigma_2^2\end{bmatrix}^{-1}\left(\begin{bmatrix}w \\ h\end{bmatrix} - \begin{bmatrix}\mu_1 \\ \mu_2\end{bmatrix}\right)\right) \]

\[ p(\mathbf{ y}) = \frac{1}{\det{2\pi \mathbf{D}}^{\frac{1}{2}}} \exp\left(-\frac{1}{2}(\mathbf{ y}- \boldsymbol{ \mu})^\top\mathbf{D}^{-1}(\mathbf{ y}- \boldsymbol{ \mu})\right) \]

Correlated Gaussian

We form a correlated density from the original by rotating the data space using a rotation matrix \(\mathbf{R}\).

\[ p(\mathbf{ y}) = \frac{1}{\det{2\pi\mathbf{D}}^{\frac{1}{2}}} \exp\left(-\frac{1}{2}(\mathbf{ y}- \boldsymbol{ \mu})^\top\mathbf{D}^{-1}(\mathbf{ y}- \boldsymbol{ \mu})\right) \]

\[ p(\mathbf{ y}) = \frac{1}{\det{2\pi\mathbf{D}}^{\frac{1}{2}}} \exp\left(-\frac{1}{2}(\mathbf{R}^\top\mathbf{ y}- \mathbf{R}^\top\boldsymbol{ \mu})^\top\mathbf{D}^{-1}(\mathbf{R}^\top\mathbf{ y}- \mathbf{R}^\top\boldsymbol{ \mu})\right) \]

\[ p(\mathbf{ y}) = \frac{1}{\det{2\pi\mathbf{D}}^{\frac{1}{2}}} \exp\left(-\frac{1}{2}(\mathbf{ y}- \boldsymbol{ \mu})^\top\mathbf{R}\mathbf{D}^{-1}\mathbf{R}^\top(\mathbf{ y}- \boldsymbol{ \mu})\right) \] this gives a covariance matrix: \[ \mathbf{C}^{-1} = \mathbf{R}\mathbf{D}^{-1} \mathbf{R}^\top \]

\[ p(\mathbf{ y}) = \frac{1}{\det{2\pi\mathbf{C}}^{\frac{1}{2}}} \exp\left(-\frac{1}{2}(\mathbf{ y}- \boldsymbol{ \mu})^\top\mathbf{C}^{-1} (\mathbf{ y}- \boldsymbol{ \mu})\right) \] this gives a covariance matrix: \[ \mathbf{C}= \mathbf{R}\mathbf{D} \mathbf{R}^\top \]
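
This construction can be checked numerically. The sketch below (with illustrative values) builds a covariance matrix from a rotation matrix \(\mathbf{R}\) and a diagonal matrix \(\mathbf{D}\), samples from the resulting correlated Gaussian, and compares the sample covariance with \(\mathbf{R}\mathbf{D}\mathbf{R}^\top\).

import numpy as np

theta = np.pi/4                                    # rotation angle (illustrative)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # rotation matrix
D = np.diag([1.0, 0.2])                            # diagonal covariance of the independent density
C = R @ D @ R.T                                    # covariance of the correlated density

samples = np.random.multivariate_normal(np.zeros(2), C, size=10000)
print('constructed covariance:\n', C)
print('sample covariance:\n', np.cov(samples.T))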

The Prior Density

Let’s assume that the prior density is given by a zero mean Gaussian, which is independent across each of the parameters, \[ \mathbf{ w}\sim \mathcal{N}\left(\mathbf{0},\alpha \mathbf{I}\right). \] In other words, we are assuming, for the prior, that each element of the parameter vector, \(w_i\), is drawn from a Gaussian density as follows, \[ w_i \sim \mathcal{N}\left(0,\alpha\right). \] Let’s start by assigning the parameter of the prior distribution, which is the variance of the prior distribution, \(\alpha\).

# set prior variance on w
alpha = 4.
# set the order of the polynomial basis set
order = 5
# set the noise variance
sigma2 = 0.01

Further Reading

  • Multivariate Gaussians: Section 2.3 up to top of pg 85 of Bishop (2006)

  • Section 3.3 up to 159 (pg 152–159) of Bishop (2006)

Generating from the Model

A very important aspect of probabilistic modelling is to sample from your model to see what type of assumptions you are making about your data. In this case that involves a two stage process.

  1. Sample a candidate parameter vector from the prior.
  2. Place the candidate parameter vector in the likelihood and sample functions conditioned on that candidate vector.
  3. Repeat to try and characterise the type of functions you are generating.

Given a prior variance (as defined above) we can now sample from the prior distribution and combine with a basis set to see what assumptions we are making about the functions a priori (i.e. before we’ve seen the data). Firstly we compute the basis function matrix. We will do it both for our training data, and for a range of prediction locations (x_pred).

import numpy as np
import pods
data = pods.datasets.olympic_marathon_men()
x = data['X']
y = data['Y']
num_data = x.shape[0]
num_pred_data = 100 # how many points to use for plotting predictions
x_pred = np.linspace(1890, 2016, num_pred_data)[:, None] # input locations for predictions

Now let’s build the basis matrices. We define the polynomial basis as follows.

def polynomial(x, num_basis=2, loc=0., scale=1.):
    # polynomial basis: powers 0 to num_basis-1 of the shifted and scaled input
    degree = num_basis - 1
    degrees = np.arange(degree + 1)
    return ((x - loc)/scale)**degrees
import mlai
loc=1950
scale=1
degree=4
basis = mlai.Basis(polynomial, number=degree+1, loc=loc, scale=scale)
Phi_pred = basis.Phi(x_pred)
Phi = basis.Phi(x)

Sampling from the Prior

Now we will sample from the prior to produce a vector \(\mathbf{ w}\) and use it to plot a function which is representative of our belief before we fit the data. To do this we are going to use the properties of the Gaussian density and a sample from a standard normal using the function np.random.normal.

Scaling Gaussian-distributed Variables

First, let’s consider the case where we have one data point and one feature in our basis set. In other words, \(\mathbf{ f}\) would be a scalar, \(\mathbf{ w}\) would be a scalar and \(\boldsymbol{ \Phi}\) would be a scalar. In this case we have \[ f= \phi w. \] If \(w\) is drawn from a normal density, \[ w\sim \mathcal{N}\left(\mu_w,c_w\right), \] and \(\phi\) is a scalar value which we are given, then the properties of the Gaussian density tell us that \[ \phi w\sim \mathcal{N}\left(\phi\mu_w,\phi^2c_w\right). \] Let’s test this out numerically. First we will draw 200 samples from a standard normal,

w_vec = np.random.normal(size=200)

We can compute the mean of these samples and their variance

print('w sample mean is ', w_vec.mean())
print('w sample variance is ', w_vec.var())

These are close to zero (the mean) and one (the variance) as you’d expect. Now compute the mean and variance of the scaled version,

phi = 7
f_vec = phi*w_vec
print('True mean should be phi*0 = 0.')
print('True variance should be phi*phi*1 = ', phi*phi)
print('f sample mean is ', f_vec.mean())
print('f sample variance is ', f_vec.var())

If you increase the number of samples then you will see that the sample mean and the sample variance begin to converge towards the true mean and the true variance. Obviously, adding an offset to a sample from np.random.normal will change the mean. So if you want to sample from a Gaussian with mean mu and standard deviation sigma, one way of doing it is to sample from the standard normal and then scale and shift the result. So, to sample a set of \(w\) from a Gaussian with mean \(\mu\) and variance \(\alpha\), \[w\sim \mathcal{N}\left(\mu,\alpha\right),\] we can simply scale and offset samples from the standard normal.

mu = 4 # mean of the distribution
alpha = 2 # variance of the distribution
w_vec = np.random.normal(size=200)*np.sqrt(alpha) + mu
print('w sample mean is ', w_vec.mean())
print('w sample variance is ', w_vec.var())

Here the np.sqrt is necessary because we need to multiply by the standard deviation and we specified the variance as alpha. So scaling and offsetting a Gaussian distributed variable keeps the variable Gaussian, but it affects the mean and variance of the resulting variable.

To get an idea of the overall shape of the resulting distribution, let’s do the same thing with a histogram of the results.
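
A minimal histogram sketch (assuming matplotlib is available) is given below; it plots the standard normal samples next to the scaled and shifted samples.

import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 2)
ax[0].hist(np.random.normal(size=200), bins=20)                        # standard normal samples
ax[0].set_title('standard normal')
ax[1].hist(np.random.normal(size=200)*np.sqrt(alpha) + mu, bins=20)    # scaled and shifted samples
ax[1].set_title('scaled and shifted')
plt.show()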

Now re-run this histogram with 100,000 samples and check that both histograms look qualitatively Gaussian.

Sampling from the Prior

Let’s use this way of constructing samples from a Gaussian to check what functions look like a priori. The process will be as follows. First, we sample a random \(K\)-dimensional vector from np.random.normal. Then we scale it by \(\sqrt{\alpha}\) to obtain a prior sample of \(\mathbf{ w}\).

K = degree + 1
z_vec = np.random.normal(size=K)
w_sample = z_vec*np.sqrt(alpha)
print(w_sample)

Now we can combine our sample from the prior with the basis functions to create a function.
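
A minimal sketch of this step (assuming matplotlib is available) multiplies the basis matrix at the prediction locations by the sampled weight vector and plots the result.

import matplotlib.pyplot as plt

f_sample = Phi_pred @ w_sample     # function values at the prediction locations
plt.plot(x_pred, f_sample, 'r-')
plt.xlabel('year')
plt.ylabel('f(x)')
plt.show()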

This shows the recurring problem with the polynomial basis (note the scale on the left hand side!). Our prior allows relatively large coefficients for the basis functions associated with high polynomial degrees. Because we are operating with input values of around 2000, this leads to output functions with very large values. The fix we have used for this before is to rescale our data before we apply the polynomial basis to it. Above, we set the scale of the basis to 1. Here let’s set it to 100 and try again.

scale = 100.
basis = mlai.Basis(polynomial, number=degree+1, loc=loc, scale=scale)
Phi_pred = basis.Phi(x_pred)
Phi = basis.Phi(x)

Having recomputed the basis functions with the new scale, we can repeat the sampling from the prior.

Now let’s loop through some samples and plot various functions as samples from this system,
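
A sketch of such a loop (assuming matplotlib is available) is given below; each iteration draws a fresh weight vector from the prior and plots the corresponding function.

import matplotlib.pyplot as plt

alpha = 4.                               # prior variance on w, as set in the earlier cell
num_samples = 10
for i in range(num_samples):
    z_vec = np.random.normal(size=K)     # standard normal sample
    w_sample = z_vec*np.sqrt(alpha)      # scale to give a sample from the prior over w
    f_sample = Phi_pred @ w_sample       # corresponding function at the prediction locations
    plt.plot(x_pred, f_sample)
plt.xlabel('year')
plt.ylabel('f(x)')
plt.show()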

The predictions for the mean output can now be computed. We want the expected value of the predictions under the posterior distribution. In matrix form, the predictions can be computed as \[ \mathbf{ f}= \boldsymbol{ \Phi}\mathbf{ w}. \] This involves a matrix multiplication between a fixed matrix \(\boldsymbol{ \Phi}\) and a vector \(\mathbf{ w}\) that is drawn from a distribution. Because \(\mathbf{ w}\) is drawn from a distribution, this implies that \(\mathbf{ f}\) should also be drawn from a distribution. There are two distributions we are interested in though. We have just been sampling from the prior distribution to see what sort of functions we get before looking at the data. In Bayesian inference, we need to compute the posterior distribution and sample from that density.

Computing the Posterior

We will now attempt to compute the posterior distribution. In the lecture we went through the maths that allows us to compute the posterior distribution for \(\mathbf{ w}\). This distribution is also Gaussian, \[ p(\mathbf{ w}| \mathbf{ y}, \mathbf{ x}, \sigma^2) = \mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu}_w,\mathbf{C}_w\right), \] with covariance, \(\mathbf{C}_w\), given by \[ \mathbf{C}_w= \left(\sigma^{-2}\boldsymbol{ \Phi}^\top \boldsymbol{ \Phi}+ \alpha^{-1}\mathbf{I}\right)^{-1}, \] whilst the mean is given by \[ \boldsymbol{ \mu}_w= \mathbf{C}_w\sigma^{-2}\boldsymbol{ \Phi}^\top \mathbf{ y}. \] Let’s compute the posterior covariance and mean, then we’ll sample from these densities to have a look at the posterior belief about \(\mathbf{ w}\) once the data has been accounted for. Remember, the process of Bayesian inference involves combining the prior, \(p(\mathbf{ w})\), with the likelihood, \(p(\mathbf{ y}|\mathbf{ x}, \mathbf{ w})\), to form the posterior, \(p(\mathbf{ w}| \mathbf{ y}, \mathbf{ x})\), through Bayes’ rule, \[ p(\mathbf{ w}|\mathbf{ y}, \mathbf{ x}) = \frac{p(\mathbf{ y}|\mathbf{ x}, \mathbf{ w})p(\mathbf{ w})}{p(\mathbf{ y})}. \] We’ve looked at the samples for our function \(\mathbf{ f}= \boldsymbol{ \Phi}\mathbf{ w}\), which forms the mean of the Gaussian likelihood, under the prior distribution. I.e. we’ve sampled from \(p(\mathbf{ w})\) and multiplied the result by the basis matrix. Now we will sample from the posterior density, \(p(\mathbf{ w}|\mathbf{ y}, \mathbf{ x})\), and check that the new samples do correspond to the data, i.e. we want to check that the updated distribution includes information from the data set. First we need to compute the posterior mean and covariance.
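
One possible way of computing these quantities is sketched below (using np.linalg.solve rather than forming the inverse explicitly where we can; sigma2 and alpha are assumed to be set as above).

# posterior covariance and mean for w (a sketch)
A_post = Phi.T @ Phi/sigma2 + np.eye(Phi.shape[1])/alpha
w_cov = np.linalg.inv(A_post)                        # posterior covariance C_w
w_mean = np.linalg.solve(A_post, Phi.T @ y)/sigma2   # posterior mean mu_w
print(w_mean)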

Bayesian Inference in the Univariate Case

This video talks about Bayesian inference across the single parameter, the offset \(c\), illustrating how the prior and the likelihood combine in one dimension to form a posterior.

Figure: Univariate Bayesian inference. Lecture 10 from 2012 MLAI Course.

Multivariate Bayesian Inference

This section of the lecture talks about how we extend the idea of Bayesian inference to the multivariate case. It goes through the multivariate Gaussian and how to complete the square in the linear algebra, as we do below.

Figure: Multivariate Bayesian inference. Lecture 11 from 2012 MLAI course.

The lecture informs us that the posterior density for \(\mathbf{ w}\) is given by a Gaussian density with covariance \[ \mathbf{C}_w = \left(\sigma^{-2}\boldsymbol{ \Phi}^\top \boldsymbol{ \Phi}+ \alpha^{-1}\mathbf{I}\right)^{-1} \] and mean \[ \boldsymbol{ \mu}_w = \mathbf{C}_w\sigma^{-2}\boldsymbol{ \Phi}^\top \mathbf{ y}. \]

Exercise 1

Compute the covariance for \(\mathbf{ w}\) given the training data; call the resulting variable w_cov. Compute the mean for \(\mathbf{ w}\) given the training data; call the resulting variable w_mean. Assume that \(\sigma^2 = 0.01\).

Olympic Marathon Data

  • Gold medal times for Olympic Marathon since 1896.
  • Marathons before 1924 didn’t have a standardized distance.
  • Present results using pace per km.
  • In 1904 the Marathon was badly organized, leading to very slow times.
Image from Wikimedia Commons http://bit.ly/16kMKHQ

The first thing we will do is load a standard data set for regression modelling. The data consists of the pace of Olympic Gold Medal Marathon winners for the Olympics from 1896 to present. Let’s load in the data and plot.

import numpy as np
import pods
data = pods.datasets.olympic_marathon_men()
x = data['X']
y = data['Y']

offset = y.mean()
scale = np.sqrt(y.var())
yhat = (y - offset)/scale
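
A minimal plotting sketch for the figure below (assuming matplotlib is available):

import matplotlib.pyplot as plt

plt.plot(x, y, 'r.', markersize=10)
plt.xlabel('year')
plt.ylabel('pace, min/km')
plt.show()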

Figure: Olympic marathon pace times since 1896.

Things to notice about the data include the outlier in 1904; in that year the Olympics were in St Louis, USA. Organizational problems and challenges with dust kicked up by the cars following the race meant that participants got lost, and only very few completed. More recent years see more consistently quick marathons.

Olympic Data with Bayesian Polynomials

import mlai
import pods

Figure: Bayesian fit with 26th degree polynomial and negative marginal log likelihood.

Hold Out Validation

For the polynomial fit, we will now look at hold out validation, where we are holding out some of the most recent points. This tests the ability of our model to extrapolate.

Figure: Bayesian fit with 26th degree polynomial and hold out validation scores.

5-fold Cross Validation

Five fold cross validation tests the ability of the model to interpolate.

Figure: Bayesian fit with 26th degree polynomial and five fold cross validation scores.

Sampling from the Posterior

Previously we were able to sample the prior values for the parameters independently from a Gaussian using np.random.normal and scaling the result. However, observing the data correlates the parameters. Recall this from the first lab, where we had a correlation between the offset, \(c\), and the slope, \(m\), which caused such problems with the coordinate ascent algorithm. We need to sample from a correlated Gaussian. For this we can use np.random.multivariate_normal.

Now let’s sample several functions and plot them all to see how the predictions fluctuate.
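
A sketch of this sampling and plotting (assuming matplotlib is available and that the posterior mean w_mean and covariance w_cov have been computed) follows.

import matplotlib.pyplot as plt

num_samples = 10
for i in range(num_samples):
    # draw a weight vector from the correlated posterior density
    w_sample = np.random.multivariate_normal(w_mean.flatten(), w_cov)
    f_sample = Phi_pred @ w_sample
    plt.plot(x_pred, f_sample)
plt.plot(x, y, 'r.', markersize=10)   # overlay the data
plt.xlabel('year')
plt.ylabel('pace, min/km')
plt.show()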

This gives us an idea of what our predictions are. These are the predictions that are consistent with the data and our prior. Try plotting different numbers of predictions. You can also try plotting beyond the range of where the data is and see what the functions do there.

Rather than sampling from the posterior each time to compute our predictions, it might be better if we just summarised the predictions by the expected value of the output function, \(f(x)\), for any particular input. If we can get formulae for this, we don’t need to sample the values of \(f(x)\); we might be able to compute the distribution directly. Fortunately, in the Gaussian case, we can use properties of multivariate Gaussians to compute both the mean and the variance of these samples.

Properties of Gaussian Variables

Gaussian variables have very particular properties that many other densities don’t exhibit. Perhaps foremost amongst them is that the sum of any set of Gaussian distributed random variables also turns out to be Gaussian distributed. This property is much rarer than you might expect.

Sum of Gaussian-distributed Variables

The sum of Gaussian random variables is also Gaussian, so if we have a random variable \(y_i\) drawn from a Gaussian density with mean \(\mu_i\) and variance \(\sigma^2_i\), \[ y_i \sim \mathcal{N}\left(\mu_i,\sigma^2_i\right) \] Then the sum of \(K\) independently sampled values of \(y_i\) will be drawn from a Gaussian with mean \(\sum_{i=1}^K \mu_i\) and variance \(\sum_{i=1}^K \sigma_i^2\), \[ \sum_{i=1}^K y_i \sim \mathcal{N}\left(\sum_{i=1}^K \mu_i,\sum_{i=1}^K \sigma_i^2\right). \] Let’s try that experimentally. First let’s generate a vector of samples from a standard normal distribution, \(z \sim \mathcal{N}\left(0,1\right)\), then we will scale and offset them, then keep adding them into a vector y_vec.

Sampling from Gaussians and Summing Up

K = 10 # how many Gaussians to add.
num_samples = 1000 # how many samples to have in y_vec
mus = np.linspace(0, 5, K) # mean values generated linearly spaced between 0 and 5
sigmas = np.linspace(0.5, 2, K) # sigmas generated linearly spaced between 0.5 and 2
y_vec = np.zeros(num_samples)
for mu, sigma in zip(mus, sigmas):
    z_vec = np.random.normal(size=num_samples) # z is from standard normal
    y_vec += z_vec*sigma + mu # add to y z*sigma + mu

# now y_vec is the sum of each scaled and off set z.
print('Sample mean is ', y_vec.mean(), ' and sample variance is ', y_vec.var())
print('True mean should be ', mus.sum())
print('True variance should be ', (sigmas**2).sum(), ' standard deviation ', np.sqrt((sigmas**2).sum())) 

Of course, we can histogram y_vec as well.

Matrix Multiplication of Gaussian Variables

We are interested in what our model is saying about the sort of functions we are observing. The fact that summing Gaussian variables leads to new Gaussian variables, and that scaling Gaussian variables also leads to Gaussian variables, means that matrix multiplication (which is just a series of sums and scales) also leads to Gaussian densities. Matrix multiplication is just adding and scaling together: in the formula \(\mathbf{ f}= \boldsymbol{ \Phi}\mathbf{ w}\) we can extract the \(i\)th element of \(\mathbf{ f}\) as \[ f_i = \boldsymbol{ \phi}_i^\top \mathbf{ w}, \] where \(\boldsymbol{ \phi}_i\) is a column vector formed from the \(i\)th row of \(\boldsymbol{ \Phi}\) and \(f_i\) is the \(i\)th element of \(\mathbf{ f}\). This vector inner product itself merely implies that \[ f_i = \sum_{j=1}^K w_j \phi_{i, j}, \] and if we now say that each \(w_j\) is Gaussian distributed, then because a scaled Gaussian is also Gaussian, and because a sum of Gaussians is also Gaussian, we know that \(f_i\) is also Gaussian distributed. It merely remains to work out its mean and covariance. We can do this by looking at the expectation under a Gaussian distribution. The expectation of the mean vector is given by \[ \left\langle\mathbf{ f}\right\rangle_{\mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right)} = \int \mathbf{ f} \mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right) \text{d}\mathbf{ w}= \int \boldsymbol{ \Phi}\mathbf{ w} \mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right) \text{d}\mathbf{ w}= \boldsymbol{ \Phi}\int \mathbf{ w} \mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right) \text{d}\mathbf{ w}= \boldsymbol{ \Phi}\boldsymbol{ \mu}. \]

Which is straightforward. The expectation of \(\mathbf{ f}=\boldsymbol{ \Phi}\mathbf{ w}\) under the Gaussian distribution for \(\mathbf{ w}\) is simply \(\boldsymbol{ \Phi}\boldsymbol{ \mu}\), where \(\boldsymbol{ \mu}\) is the mean of the Gaussian density for \(\mathbf{ w}\). Because our prior distribution is Gaussian with zero mean, the expectation under the prior is given by \[ \left\langle\mathbf{ f}\right\rangle_{\mathcal{N}\left(\mathbf{ w}|\mathbf{0},\alpha\mathbf{I}\right)} = \mathbf{0}. \]

The covariance is a little more complicated. A covariance matrix is defined as \[ \text{cov}\left(\mathbf{ f}\right)_{\mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right)} = \left\langle\mathbf{ f}\mathbf{ f}^\top\right\rangle_{\mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right)} - \left\langle\mathbf{ f}\right\rangle_{\mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right)}\left\langle\mathbf{ f}\right\rangle_{\mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right)}^\top \] we’ve already computed \(\left\langle\mathbf{ f}\right\rangle_{\mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right)}=\boldsymbol{ \Phi}\boldsymbol{ \mu}\) so we can substitute that in to recover \[ \text{cov}\left(\mathbf{ f}\right)_{\mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right)} = \left\langle\mathbf{ f}\mathbf{ f}^\top\right\rangle_{\mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right)} - \boldsymbol{ \Phi}\boldsymbol{ \mu}\boldsymbol{ \mu}^\top \boldsymbol{ \Phi}^\top \]

So we need the expectation of \(\mathbf{ f}\mathbf{ f}^\top\). Substituting in \(\mathbf{ f}= \boldsymbol{ \Phi}\mathbf{ w}\) we have \[ \text{cov}\left(\mathbf{ f}\right)_{\mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right)} = \left\langle\boldsymbol{ \Phi}\mathbf{ w}\mathbf{ w}^\top \boldsymbol{ \Phi}^\top\right\rangle_{\mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right)} - \boldsymbol{ \Phi}\boldsymbol{ \mu}\boldsymbol{ \mu}^\top \boldsymbol{ \Phi}^\top \] \[ \text{cov}\left(\mathbf{ f}\right)_{\mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right)} = \boldsymbol{ \Phi}\left\langle\mathbf{ w}\mathbf{ w}^\top\right\rangle_{\mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right)} \boldsymbol{ \Phi}^\top - \boldsymbol{ \Phi}\boldsymbol{ \mu}\boldsymbol{ \mu}^\top\boldsymbol{ \Phi}^\top \] Which is dependent on the second moment of the Gaussian, \[ \left\langle\mathbf{ w}\mathbf{ w}^\top\right\rangle_{\mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right)} = \mathbf{C}+ \boldsymbol{ \mu}\boldsymbol{ \mu}^\top \] that can be substituted in to recover, \[ \text{cov}\left(\mathbf{ f}\right)_{\mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right)} = \boldsymbol{ \Phi}\mathbf{C}\boldsymbol{ \Phi}^\top \] so in the case of the prior distribution, where we have \(\mathbf{C}= \alpha \mathbf{I}\) we can write \[ \text{cov}\left(\mathbf{ f}\right)_{\mathcal{N}\left(\mathbf{ w}|\mathbf{0},\alpha \mathbf{I}\right)} = \alpha \boldsymbol{ \Phi}\boldsymbol{ \Phi}^\top \]

This implies that the prior we have suggested for \(\mathbf{ w}\), which is Gaussian with a mean of zero and covariance of \(\alpha \mathbf{I}\), means that the distribution for \(\mathbf{ f}\) is also Gaussian, with a mean of zero and covariance of \(\alpha \boldsymbol{ \Phi}\boldsymbol{ \Phi}^\top\). Since our observed output, \(\mathbf{ y}\), is given by a noise corrupted variation of \(\mathbf{ f}\), the final distribution for \(\mathbf{ y}\) is given as \[ \mathbf{ y}= \mathbf{ f}+ \boldsymbol{ \epsilon}, \] where the noise, \(\boldsymbol{ \epsilon}\), is sampled from a Gaussian density: \(\boldsymbol{ \epsilon}\sim \mathcal{N}\left(\mathbf{0},\sigma^2\mathbf{I}\right)\). So, in other words, we are taking a Gaussian distributed random value \(\mathbf{ f}\), \[ \mathbf{ f}\sim \mathcal{N}\left(\mathbf{0},\alpha\boldsymbol{ \Phi}\boldsymbol{ \Phi}^\top\right), \] and adding to it another Gaussian distributed value, \(\boldsymbol{ \epsilon}\sim \mathcal{N}\left(\mathbf{0},\sigma^2\mathbf{I}\right)\), to form our data observations, \(\mathbf{ y}\). Once again the sum of two (multivariate) Gaussian distributed variables is also Gaussian, with a mean given by the sum of the means (both zero in this case) and the covariance given by the sum of the covariances. So we now have that the marginal likelihood for the data, \(p(\mathbf{ y})\), is given by \[ p(\mathbf{ y}) = \mathcal{N}\left(\mathbf{ y}|\mathbf{0},\alpha \boldsymbol{ \Phi}\boldsymbol{ \Phi}^\top + \sigma^2\mathbf{I}\right). \] This is our implicit assumption for \(\mathbf{ y}\) given our prior assumption for \(\mathbf{ w}\).
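
We can verify the prior covariance of the function values numerically with a short sketch: sample many weight vectors from the prior, map them through the basis, and compare the empirical covariance of the resulting function values with \(\alpha \boldsymbol{ \Phi}\boldsymbol{ \Phi}^\top\).

# empirical check of cov(f) = alpha Phi Phi^T under the prior (a sketch)
num_samples = 10000
W = np.random.normal(size=(num_samples, Phi.shape[1]))*np.sqrt(alpha)   # prior samples of w
F = W @ Phi.T                       # each row holds a sampled function at the training inputs
print('empirical covariance (3x3 block):\n', np.cov(F.T)[:3, :3])
print('alpha * Phi Phi^T (3x3 block):\n', (alpha*Phi @ Phi.T)[:3, :3])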

Marginal Likelihood

  • The marginal likelihood can also be computed, it has the form: \[ p(\mathbf{ y}|\mathbf{X}, \sigma^2, \alpha) = \frac{1}{(2\pi)^\frac{n}{2}\left|\mathbf{K}\right|^\frac{1}{2}} \exp\left(-\frac{1}{2} \mathbf{ y}^\top \mathbf{K}^{-1} \mathbf{ y}\right) \] where \(\mathbf{K}= \alpha \boldsymbol{ \Phi}\boldsymbol{ \Phi}^\top + \sigma^2 \mathbf{I}\).

  • So it is a zero mean \(n\)-dimensional Gaussian with covariance matrix \(\mathbf{K}\).
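
As a numerical sketch (using the training basis matrix Phi, noise variance sigma2 and prior variance alpha from above), the log of this marginal likelihood can be evaluated as follows.

# log marginal likelihood of the data under the model (a sketch)
n = y.shape[0]
K_mat = alpha*Phi @ Phi.T + sigma2*np.eye(n)
sign, logdetK = np.linalg.slogdet(2*np.pi*K_mat)
log_marginal = -0.5*logdetK - 0.5*float(y.T @ np.linalg.solve(K_mat, y))
print('log marginal likelihood:', log_marginal)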

Computing the Mean and Error Bars of the Functions

These ideas together now allow us to compute the mean and error bars of the predictions. The mean prediction, before corrupting by noise, is given by, \[ \mathbf{ f}= \boldsymbol{ \Phi}\mathbf{ w}, \] in matrix form. This gives you enough information to compute the predictive mean.

Exercise 2

Compute the predictive mean for the function at all the values of the basis function given by Phi_pred. Call the vector of predictions f_pred_mean. Plot the predictions alongside the data. We can also compute what the training error was. Use the output from your model to compute the predictive mean, and then compute the sum of squares error of that predictive mean, \[ E = \sum_{i=1}^n(y_i - \langle f_i\rangle)^2, \] where \(\langle f_i\rangle\) is the expected output of the model at point \(x_i\).

Computing Error Bars

Finally, we can compute error bars for the predictions. The error bars are the standard deviations of the predictions for \(\mathbf{ f}=\boldsymbol{ \Phi}\mathbf{ w}\) under the posterior density for \(\mathbf{ w}\). The standard deviations of these predictions can be found from the variance of the prediction at each point. Those variances are the diagonal entries of the covariance matrix. We’ve already computed the form of the covariance under Gaussian expectations, \[ \text{cov}\left(\mathbf{ f}\right)_{\mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu},\mathbf{C}\right)} = \boldsymbol{ \Phi}\mathbf{C}\boldsymbol{ \Phi}^\top \] which under the posterior density is given by \[ \text{cov}\left(\mathbf{ f}\right)_{\mathcal{N}\left(\mathbf{ w}|\boldsymbol{ \mu}_w,\mathbf{C}_w\right)} = \boldsymbol{ \Phi}\mathbf{C}_w \boldsymbol{ \Phi}^\top \]

Exercise 3

The error bars are given by computing the standard deviation of the predictions, \(f\). For a given prediction \(f_i\) the variance is \(\text{var}(f_i) = \langle f_i^2\rangle - \langle f_i \rangle^2\). This is given by the diagonal element of the covariance of \(\mathbf{ f}\), \[ \text{var}(f_i) = \boldsymbol{ \phi}_{i, :}^\top \mathbf{C}_w \boldsymbol{ \phi}_{i, :} \] where \(\boldsymbol{ \phi}_{i, :}\) is the basis vector associated with the input location, \(\mathbf{ x}_i\).

Plot the mean function and the error bars for your basis.

Validation

Now we will test the generalisation ability of these models. Firstly we are going to use hold out validation to attempt to see which model is best for extrapolating.

Exercise 4

Now split the data into training and hold out validation sets. Hold out the data for years after 1980. Compute the predictions for different model orders between 0 and 8. Find the model order which fits best according to hold out validation. Is it the same as the maximum likelihood result from last week?

Exercise 5

Now we will use leave one out cross validation to attempt to see which model is best at interpolating. Do you get the same result as for hold out validation? Compare plots of the hold out validation error for different degrees and the cross validation error for different degrees. Why are they so different? Select a suitable polynomial for characterising the differences in the predictions. Plot the mean function and the error bars for the full data set (to represent the leave one out solution) and the training data from the hold out experiment. Discuss your answer.

Further Reading

  • Section 3.7–3.8 (pg 122–133) of Rogers and Girolami (2011)

  • Section 3.4 (pg 161–165) of Bishop (2006)

Thanks!

For more information on these subjects and more you might want to check the following resources.

References

Bishop, C.M., 2006. Pattern recognition and machine learning. Springer.
Gauss, C.F., 1809. Theoria motus corporum coelestium in sectionibus conicis solem ambientium. F. Perthes und I. H. Besser, Hamburg.
Rogers, S., Girolami, M., 2011. A first course in machine learning. CRC Press.
Stigler, S.M., 1999. Statistics on the table: The history of statistical concepts and methods. Harvard University Press, Cambridge, MA.