Linear Algebra and Linear Regression

Review

  • Last time: Looked at objective functions for movie recommendation.
  • Minimized sum of squares objective by steepest descent and stochastic gradients.
  • This time: explore least squares for regression.

Regression Examples

  • Predict a real value, \(y_i\) given some inputs \(\mathbf{ x}_i\).
  • Predict quality of meat given spectral measurements (Tecator data).
  • Radiocarbon dating, the C14 calibration curve: predict age given quantity of C14 isotope.
  • Predict quality of different Go or Backgammon moves given expert rated training data.

Olympic 100m Data

  • Gold medal times for Olympic 100 m runners since 1896.
  • One of a number of Olympic data sets collected by Rogers and Girolami (2011).


Olympic Marathon Data

  • Gold medal times for Olympic Marathon since 1896.
  • Marathons before 1924 didn’t have a standardized distance.
  • Present results using pace per km.
  • In 1904 the Marathon was badly organized, leading to very slow times.
Image from Wikimedia Commons http://bit.ly/16kMKHQ


What is Machine Learning?

\[ \text{data} + \text{model} \stackrel{\text{compute}}{\rightarrow} \text{prediction}\]

  • data : observations, could be actively or passively acquired (meta-data).
  • model : assumptions, based on previous experience (other data! transfer learning etc), or beliefs about the regularities of the universe. Inductive bias.
  • prediction : an action to be taken or a categorization or a quality score.

What is Machine Learning?

\[\text{data} + \text{model} \stackrel{\text{compute}}{\rightarrow} \text{prediction}\]

  • To combine data with a model we need:
  • a prediction function \(f(\cdot)\), which includes our beliefs about the regularities of the universe
  • an objective function \(E(\cdot)\), which defines the cost of misprediction.

Sum of Squares Error

Regression: Linear Relationship

\[y_i = m x_i + c\]

  • \(y_i\) : winning pace.

  • \(x_i\) : year of Olympics.

  • \(m\) : rate of improvement over time.

  • \(c\) : winning pace at year 0.

Overdetermined System

\(y= mx+ c\)

point 1: \(x= 1\), \(y=3\) \[ 3 = m + c \]

point 2: \(x= 3\), \(y=1\) \[ 1 = 3m + c \]

point 3: \(x= 2\), \(y=2.5\) \[ 2.5 = 2m + c \]
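With three equations and only two unknowns, in general no single \(m\) and \(c\) satisfies all three points exactly. As a quick check, here is a minimal numpy sketch (the variable names are our own) that finds the least squares compromise instead:

```python
import numpy as np

# Each row is one equation: [x_i, 1] @ [m, c] = y_i.
X = np.array([[1., 1.],
              [3., 1.],
              [2., 1.]])
y = np.array([3., 1., 2.5])

# lstsq returns the (m, c) minimizing the sum of squared residuals.
(m, c), residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print(m, c)
print(X @ np.array([m, c]) - y)  # non-zero residuals: the system is overdetermined
```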

Pierre-Simon Laplace

Laplace’s Gremlin

Latent Variables

\(y= mx+ c + \epsilon\)

point 1: \(x= 1\), \(y=3\) \[ 3 = m + c + \epsilon_1 \]

point 2: \(x= 3\), \(y=1\) \[ 1 = 3m + c + \epsilon_2 \]

point 3: \(x= 2\), \(y=2.5\) \[ 2.5 = 2m + c + \epsilon_3 \]


The Gaussian Density

  • Perhaps the most common probability density.

\[\begin{align} p(y| \mu, \sigma^2) & = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(y- \mu)^2}{2\sigma^2}\right)\\& \buildrel\triangle\over = \mathcal{N}\left(y|\mu,\sigma^2\right) \end{align}\]

Gaussian Density

\[ \mathcal{N}\left(y|\mu,\sigma^2\right) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(y-\mu)^2}{2\sigma^2}\right) \]

\(\sigma^2\) is the variance of the density and \(\mu\) is the mean.

Two Important Gaussian Properties

Sum of Gaussians

The sum of independent Gaussian variables is also Gaussian.

\[y_i \sim \mathcal{N}\left(\mu_i,\sigma_i^2\right)\]

And the sum is distributed as

\[ \sum_{i=1}^{n} y_i \sim \mathcal{N}\left(\sum_{i=1}^n\mu_i,\sum_{i=1}^n\sigma_i^2\right) \]

(Aside: as the number of terms increases, a sum of independent non-Gaussian variables with finite variance also tends to a Gaussian, because of the central limit theorem.)

Scaling a Gaussian

Scaling a Gaussian leads to a Gaussian.

\[y\sim \mathcal{N}\left(\mu,\sigma^2\right)\]

And the scaled variable is distributed as

\[wy\sim \mathcal{N}\left(w\mu,w^2 \sigma^2\right).\]
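Both properties are easy to check empirically by sampling; a small sanity check of our own (not part of the original slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Sum of independent Gaussians: means add and variances add.
y1 = rng.normal(1.0, np.sqrt(0.5), n_samples)
y2 = rng.normal(-2.0, np.sqrt(2.0), n_samples)
print((y1 + y2).mean(), (y1 + y2).var())  # close to -1.0 and 2.5

# Scaling by w: the mean scales by w, the variance by w**2.
w = 3.0
print((w * y1).mean(), (w * y1).var())    # close to 3.0 and 4.5
```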

Laplace’s Idea

A Probabilistic Process

Set the mean of Gaussian to be a function.

\[p\left(y_i|x_i\right)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{\left(y_i-f\left(x_i\right)\right)^{2}}{2\sigma^2}\right).\]

This gives us a ‘noisy function’.

This is known as a stochastic process.

Height as a Function of Weight

Take the standard Gaussian, parameterized by its mean and variance, and make the mean a linear function of an input.

This leads to a regression model. \[ \begin{align*} y_i=&f\left(x_i\right)+\epsilon_i,\\ \epsilon_i \sim & \mathcal{N}\left(0,\sigma^2\right). \end{align*} \]

Assume \(y_i\) is height and \(x_i\) is weight.

Data Point Likelihood

Likelihood of an individual data point \[ p\left(y_i|x_i,m,c\right)=\frac{1}{\sqrt{2\pi \sigma^2}}\exp\left(-\frac{\left(y_i-mx_i-c\right)^{2}}{2\sigma^2}\right). \] Parameters are gradient, \(m\), offset, \(c\) of the function and noise variance \(\sigma^2\).

Data Set Likelihood

If the noise, \(\epsilon_i\), is sampled independently for each data point, then each data point is independent given \(m\) and \(c\). For independent variables: \[ p(\mathbf{ y}) = \prod_{i=1}^np(y_i) \] \[ p(\mathbf{ y}|\mathbf{ x}, m, c) = \prod_{i=1}^np(y_i|x_i, m, c) \]

For Gaussian

i.i.d. assumption \[ p(\mathbf{ y}|\mathbf{ x}, m, c) = \prod_{i=1}^n\frac{1}{\sqrt{2\pi \sigma^2}}\exp \left(-\frac{\left(y_i- mx_i-c\right)^{2}}{2\sigma^2}\right). \] \[ p(\mathbf{ y}|\mathbf{ x}, m, c) = \frac{1}{\left(2\pi \sigma^2\right)^{\frac{n}{2}}}\exp\left(-\frac{\sum_{i=1}^n\left(y_i-mx_i-c\right)^{2}}{2\sigma^2}\right). \]

Log Likelihood Function

  • Normally work with the log likelihood: \[ L(m,c,\sigma^{2})=-\frac{n}{2}\log 2\pi -\frac{n}{2}\log \sigma^2 -\sum_{i=1}^{n}\frac{\left(y_i-mx_i-c\right)^{2}}{2\sigma^2}. \]

Consistency of Maximum Likelihood

  • If the data were really generated according to the density we specified,
  • the correct parameters will be recovered in the limit as \(n\rightarrow \infty\).
  • This can be proven through sample-based approximations (law of large numbers) of “KL divergences”.
  • Mainstay of classical statistics (Wasserman, 2003).

Probabilistic Interpretation of the Error Function

  • The probabilistic interpretation of the error function is as the negative log likelihood.
  • Minimizing the error function is equivalent to maximizing the log likelihood.
  • Maximizing the log likelihood is equivalent to maximizing the likelihood because \(\log\) is monotonic.
  • So minimizing the error function is equivalent to maximum likelihood with respect to the parameters.

Error Function

  • The negative log likelihood, dropping terms that don’t depend on the parameters, gives the error function \[E(m,c,\sigma^{2})=\frac{n}{2}\log \sigma^2+\frac{1}{2\sigma^2}\sum _{i=1}^{n}\left(y_i-mx_i-c\right)^{2}.\]
  • Learning proceeds by minimizing this error function for the data set provided.

Sum of Squares Error

  • Ignoring terms which don’t depend on \(m\) and \(c\) gives \[E(m, c) \propto \sum_{i=1}^n(y_i - f(x_i))^2\] where \(f(x_i) = mx_i + c\).
  • This is known as the sum of squares error function.
  • It is commonly used and is closely associated with the Gaussian likelihood.

Reminder

  • Two functions involved:
    • Prediction function: \(f(x_i)\)
    • Error, or Objective function: \(E(m, c)\)
  • Error function depends on parameters through prediction function.

Mathematical Interpretation

  • What is the mathematical interpretation?
  • There is a cost function.
    • It expresses mismatch between your prediction and reality. \[ E(m, c)=\sum_{i=1}^n\left(y_i - mx_i-c\right)^2 \]
    • This is known as the sum of squares error.

Legendre

Running Example: Olympic Marathons

Maximum Likelihood: Iterative Solution

\[ E(m, c) = \sum_{i=1}^n(y_i-mx_i-c)^2 \]

Coordinate Descent

Learning is Optimization

  • Learning is minimization of the cost function.
  • At a minimum the gradient is zero.
  • Coordinate descent: find the gradient in each coordinate and set it to zero. \[\frac{\text{d}E(c)}{\text{d}c} = -2\sum_{i=1}^n\left(y_i- m x_i - c \right)\] \[0 = -2\sum_{i=1}^n\left(y_i- mx_i - c \right)\]

Learning is Optimization

  • Fixed point equations \[0 = -2\sum_{i=1}^ny_i +2\sum_{i=1}^nm x_i +2n c\] \[c = \frac{\sum_{i=1}^n\left(y_i - mx_i\right)}{n}\]

Learning is Optimization

  • Learning is minimization of the cost function.
  • At a minimum the gradient is zero.
  • Coordinate descent: find the gradient in each coordinate and set it to zero. \[\frac{\text{d}E(m)}{\text{d}m} = -2\sum_{i=1}^nx_i\left(y_i- m x_i - c \right)\] \[0 = -2\sum_{i=1}^nx_i \left(y_i-m x_i - c \right)\]

Learning is Optimization

  • Fixed point equations \[0 = -2\sum_{i=1}^nx_iy_i+2\sum_{i=1}^nm x_i^2+2\sum_{i=1}^ncx_i\] \[m = \frac{\sum_{i=1}^n\left(y_i -c\right)x_i}{\sum_{i=1}^nx_i^2}\]

\[m^* = \frac{\sum_{i=1}^n(y_i - c)x_i}{\sum_{i=1}^nx_i^2}\]

Fixed Point Updates

Worked example.

\[ \begin{aligned} c^{*}=&\frac{\sum _{i=1}^{n}\left(y_i-m^{*}x_i\right)}{n},\\ m^{*}=&\frac{\sum _{i=1}^{n}x_i\left(y_i-c^{*}\right)}{\sum _{i=1}^{n}x_i^{2}},\\ \left.\sigma^2\right.^{*}=&\frac{\sum _{i=1}^{n}\left(y_i-m^{*}x_i-c^{*}\right)^{2}}{n} \end{aligned} \]
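A minimal numpy sketch of these alternating fixed point updates (the function name and the fixed iteration count are our own choices; in practice you would iterate until the parameters stop changing):

```python
import numpy as np

def fixed_point_updates(x, y, num_iters=100):
    """Alternate the fixed point updates for c and m given above."""
    n = len(x)
    m_star, c_star = 0.0, 0.0  # arbitrary initial guesses
    for _ in range(num_iters):
        c_star = (y - m_star * x).sum() / n
        m_star = (x * (y - c_star)).sum() / (x * x).sum()
    sigma2_star = ((y - m_star * x - c_star) ** 2).sum() / n
    return m_star, c_star, sigma2_star
```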

Important Concepts Not Covered

  • Other optimization methods:
    • Second order methods, conjugate gradient, quasi-Newton and Newton.
  • Effective heuristics such as momentum.
  • Local vs global solutions.

Multi-dimensional Inputs

  • Multivariate functions involve more than one input.
  • Height might be a function of weight and gender.
  • There could be other contributory factors.
  • Place these factors in a feature vector \(\mathbf{ x}_i\).
  • Linear function is now defined as \[f(\mathbf{ x}_i) = \sum_{j=1}^p w_j x_{i, j} + c\]

Vector Notation

  • Write in vector notation, \[f(\mathbf{ x}_i) = \mathbf{ w}^\top \mathbf{ x}_i + c\]
  • Can absorb \(c\) into \(\mathbf{ w}\) by assuming extra input \(x_0\) which is always 1. \[f(\mathbf{ x}_i) = \mathbf{ w}^\top \mathbf{ x}_i\]

Objective Functions and Regression

  • Classification: map feature to class label.

  • Regression: map feature to real value. Our prediction function is

    \[f(x_i) = mx_i + c\]

  • Need an algorithm to fit it.

  • Least squares: minimize an error.

\[E(m, c) = \sum_{i=1}^n(y_i - f(x_i))^2\]

Regression

  • Create an artificial data set.

We now need to decide on a true value for \(m\) and a true value for \(c\) to use for generating the data.

We can use these values to create our artificial data. The formula \[y_i = mx_i + c\] is translated to code as follows:
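A minimal sketch; the true values and input locations below are our own choices, so the course notebook may use different numbers:

```python
import numpy as np

# True parameters for generating the artificial data (our choice).
m_true = 1.4
c_true = -3.1

x = np.linspace(0, 1, 20)  # input locations
y = m_true * x + c_true    # y_i = m * x_i + c, exactly on the line
```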

Plot of Data

We can now plot the artificial data we’ve created.

These points lie exactly on a straight line; that’s not very realistic, so let’s corrupt them with a bit of Gaussian ‘noise’, as sketched below.

Noise Corrupted Plot
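Continuing the sketch above, we add zero-mean Gaussian noise (the noise level is our own choice) and plot the corrupted data:

```python
import matplotlib.pyplot as plt

noise_std = 0.5  # standard deviation of the corrupting noise (our choice)
y = m_true * x + c_true + np.random.normal(scale=noise_std, size=x.shape)

plt.plot(x, y, 'rx')  # the points no longer lie exactly on the line
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.show()
```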

Contour Plot of Error Function

  • To visualize the error function surface, create vectors of values.

  • Create a grid of values to evaluate the error function in 2D.

  • Compute the error function at each combination of \(c\) and \(m\), as sketched below.
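A sketch of those three steps (the grid ranges are our own choice), reusing the x and y generated above:

```python
import numpy as np

# Vectors of candidate values for the two parameters.
m_vals = np.linspace(-3, 3, 100)
c_vals = np.linspace(-6, 0, 100)

# Grid of (m, c) combinations for evaluating the error in 2D.
m_grid, c_grid = np.meshgrid(m_vals, c_vals)

# Sum of squares error at every point on the grid.
E_grid = np.zeros_like(m_grid)
for i in range(len(x)):
    E_grid += (y[i] - m_grid * x[i] - c_grid) ** 2
```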

Contour Plot of Error

  • We can now make a contour plot.
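For example, with matplotlib:

```python
import matplotlib.pyplot as plt

plt.contour(m_grid, c_grid, E_grid, levels=20)
plt.xlabel('$m$')
plt.ylabel('$c$')
plt.show()
```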

Steepest Descent

  • Minimize the sum of squares error function.
  • One way of doing that is gradient descent.
  • Initialize with a guess for \(m\) and \(c\).
  • Update that guess by subtracting a portion of the gradient from the guess.
  • Like walking down a hill in the steepest direction of the hill to get to the bottom.

Algorithm

  • We start with a guess for \(m\) and \(c\).
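For instance (the starting values are arbitrary):

```python
m_star = 0.0   # initial guess for the slope
c_star = -5.0  # initial guess for the offset
```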

Offset Gradient

  • Now we need to compute the gradient of the error function, firstly with respect to \(c\), \[ \frac{\text{d}E(m, c)}{\text{d} c} = -2\sum_{i=1}^n(y_i - mx_i - c) \]

  • This is computed in python as follows
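A sketch of what that computation looks like, using the x, y and initial guesses above:

```python
# Gradient of E(m, c) with respect to c at the current guess.
c_grad = -2 * (y - m_star * x - c_star).sum()
```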

Deriving the Gradient

To see how the gradient was derived, first note that \(c\) appears in every term in the sum. So we are just differentiating \((y_i - mx_i - c)^2\) for each term in the sum. The gradient of this term with respect to \(c\) is the gradient of the outer quadratic multiplied by the gradient, with respect to \(c\), of the part inside the quadratic. The gradient of a quadratic is two times its argument, and the gradient of the inside linear term is just minus one, giving \(-2(y_i - mx_i - c)\) for each term. This is true for all terms in the sum, so we are left with the sum in the gradient.

Slope Gradient

The gradient with respect to \(m\) is similar, but now the gradient of the quadratic’s argument is \(-x_i\), so the gradient with respect to \(m\) is

\[\frac{\text{d}E(m, c)}{\text{d} m} = -2\sum_{i=1}^nx_i(y_i - mx_i - c)\]

which can be implemented in python (numpy) as
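For example (a sketch consistent with the gradient above):

```python
# Gradient of E(m, c) with respect to m at the current guess.
m_grad = -2 * (x * (y - m_star * x - c_star)).sum()
```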

Update Equations

  • Now we have gradients with respect to \(m\) and \(c\).
  • Can update our initial guesses for \(m\) and \(c\) using the gradient.
  • We don’t want to just subtract the gradient from \(m\) and \(c\);
  • we need to take a small step in the gradient direction.
  • Otherwise we might overshoot the minimum.
  • We want to follow the gradient to get to the minimum, but the gradient changes all the time.

Move in Direction of Gradient

Update Equations

  • The step size has already been introduced; it’s again known as the learning rate and is denoted by \(\eta\). \[ c_\text{new}\leftarrow c_{\text{old}} - \eta\frac{\text{d}E(m, c)}{\text{d}c} \]

  • gives us an update for our estimate of \(c\) (which in the code we’ve been calling c_star to represent a common way of writing a parameter estimate, \(c^*\)) and \[ m_\text{new} \leftarrow m_{\text{old}} - \eta\frac{\text{d}E(m, c)}{\text{d}m} \]

  • Giving us an update for \(m\).

Update Code

  • These updates can be coded as
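For example (the learning rate value is our own choice):

```python
learn_rate = 0.01  # the learning rate, eta
c_star = c_star - learn_rate * c_grad
m_star = m_star - learn_rate * m_grad
```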

Iterating Updates

  • Fit model by descending gradient.
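Putting the gradient computations and the updates in a loop gives a sketch of the full algorithm:

```python
num_iters = 1000  # number of descent steps (our choice)
for _ in range(num_iters):
    # Recompute both gradients at the current guess, then step downhill.
    c_grad = -2 * (y - m_star * x - c_star).sum()
    m_grad = -2 * (x * (y - m_star * x - c_star)).sum()
    c_star -= learn_rate * c_grad
    m_star -= learn_rate * m_grad
```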

Gradient Descent Algorithm

Stochastic Gradient Descent

  • If \(n\) is small, gradient descent is fine.
  • But sometimes (e.g. on the internet) \(n\) could be a billion.
  • Stochastic gradient descent is more similar to the perceptron.
  • Look at the gradient of one data point at a time rather than summing across all data points.
  • This gives a stochastic estimate of gradient.

Stochastic Gradient Descent

  • The real gradient with respect to \(m\) is given by

    \[\frac{\text{d}E(m, c)}{\text{d} m} = -2\sum_{i=1}^nx_i(y_i - mx_i - c)\]

    but it has \(n\) terms in the sum. Substituting in the gradient we can see that the full update is of the form

    \[m_\text{new} \leftarrow m_\text{old} + 2\eta\left[x_1 (y_1 - m_\text{old}x_1 - c_\text{old}) + x_2 (y_2 - m_\text{old}x_2 - c_\text{old}) + \dots + x_n (y_n - m_\text{old}x_n - c_\text{old})\right]\]

    This could be split up into lots of individual updates \[m_1 \leftarrow m_\text{old} + 2\eta\left[x_1 (y_1 - m_\text{old}x_1 - c_\text{old})\right]\] \[m_2 \leftarrow m_1 + 2\eta\left[x_2 (y_2 - m_\text{old}x_2 - c_\text{old})\right]\] \[m_3 \leftarrow m_2 + 2\eta \left[\dots\right]\] \[m_n \leftarrow m_{n-1} + 2\eta\left[x_n (y_n - m_\text{old}x_n - c_\text{old})\right]\]

which would lead to the same final update.

Updating \(c\) and \(m\)

  • In the sum we don’t change the \(m\) and \(c\) we use for computing the gradient term at each update.
  • In stochastic gradient descent we do change them.
  • This means it’s not quite the same as steepest descent.
  • But we can present each data point in a random order, like we did for the perceptron.
  • This makes the algorithm suitable for large scale web use (recently this domain has become known as ‘Big Data’) and algorithms like this are widely used by Google, Microsoft, Amazon, Twitter and Facebook.

Stochastic Gradient Descent

  • Or more accurately, since the data are normally presented in a random order, we can just write \[ m_\text{new} = m_\text{old} + 2\eta\left[x_i (y_i - m_\text{old}x_i - c_\text{old})\right] \]

SGD for Linear Regression

Putting it all together in an algorithm, we can do stochastic gradient descent for our regression data.
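A minimal sketch of those stochastic updates (the number of passes and the learning rate are our own choices), presenting the data points in a random order on each pass:

```python
import numpy as np

num_passes = 100
for _ in range(num_passes):
    for i in np.random.permutation(len(x)):
        # Update using the gradient of a single data point only.
        c_star += 2 * learn_rate * (y[i] - m_star * x[i] - c_star)
        m_star += 2 * learn_rate * x[i] * (y[i] - m_star * x[i] - c_star)
```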

Reflection on Linear Regression and Supervised Learning

Think about:

  1. What effect does the learning rate have in the optimization? What’s the effect of making it too small, what’s the effect of making it too big? Do you get the same result for both stochastic and steepest gradient descent?

  2. The stochastic gradient descent doesn’t help very much for such a small data set. Its real advantage comes when there are many data points; you’ll see this in the lab.

Log Likelihood for Multivariate Regression

The likelihood of a single data point is

\[p\left(y_i|x_i\right)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{\left(y_i-\mathbf{ w}^{\top}\mathbf{ x}_i\right)^{2}}{2\sigma^2}\right).\]

Leading to a log likelihood for the data set of

\[L(\mathbf{ w},\sigma^2)= -\frac{n}{2}\log \sigma^2-\frac{n}{2}\log 2\pi -\frac{\sum_{i=1}^{n}\left(y_i-\mathbf{ w}^{\top}\mathbf{ x}_i\right)^{2}}{2\sigma^2}.\]

Error Function

And a corresponding error function of \[E(\mathbf{ w},\sigma^2)=\frac{n}{2}\log\sigma^2 + \frac{\sum_{i=1}^{n}\left(y_i-\mathbf{ w}^{\top}\mathbf{ x}_i\right)^{2}}{2\sigma^2}.\]

Quadratic Loss

\[ E(\mathbf{ w}) = \sum_{i=1}^n\left(y_i - f(\mathbf{ x}_i, \mathbf{ w})\right)^2 \]

Linear Model

\[ E(\mathbf{ w}) = \sum_{i=1}^n\left(y_i - \mathbf{ w}^\top \mathbf{ x}_i\right)^2 \]

Bracket Expansion

\[ \begin{align*} E(\mathbf{ w},\sigma^2) = & \frac{n}{2}\log \sigma^2 + \frac{1}{2\sigma^2}\sum _{i=1}^{n}y_i^{2}-\frac{1}{\sigma^2}\sum _{i=1}^{n}y_i\mathbf{ w}^{\top}\mathbf{ x}_i\\&+\frac{1}{2\sigma^2}\sum _{i=1}^{n}\mathbf{ w}^{\top}\mathbf{ x}_i\mathbf{ x}_i^{\top}\mathbf{ w} +\text{const}.\\ = & \frac{n}{2}\log \sigma^2 + \frac{1}{2\sigma^2}\sum _{i=1}^{n}y_i^{2}-\frac{1}{\sigma^2} \mathbf{ w}^\top\sum_{i=1}^{n}\mathbf{ x}_iy_i\\&+\frac{1}{2\sigma^2} \mathbf{ w}^{\top}\left[\sum _{i=1}^{n}\mathbf{ x}_i\mathbf{ x}_i^{\top}\right]\mathbf{ w}+\text{const}. \end{align*} \]

Design Matrix

\[ \mathbf{X} = \begin{bmatrix} \mathbf{ x}_1^\top \\ \mathbf{ x}_2^\top \\ \vdots \\ \mathbf{ x}_n^\top \end{bmatrix} = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{bmatrix} \]

Writing the Objective with Linear Algebra

\[ E(\mathbf{ w}) = \sum_{i=1}^n(y_i - f(\mathbf{ x}_i; \mathbf{ w}))^2, \]

\[ f(\mathbf{ x}_i; \mathbf{ w}) = \mathbf{ x}_i^\top \mathbf{ w}. \]

Stacking Vectors

\[ \mathbf{ f}= \begin{bmatrix}f_1\\f_2\\ \vdots \\ f_n\end{bmatrix}. \]

\[ E(\mathbf{ w}) = (\mathbf{ y}- \mathbf{ f})^\top(\mathbf{ y}- \mathbf{ f}) \]

\[ \mathbf{ f}= \mathbf{X}\mathbf{ w}. \]

\[ E(\mathbf{ w}) = (\mathbf{ y}- \mathbf{X}\mathbf{ w})^\top(\mathbf{ y}- \mathbf{X}\mathbf{ w}) \]

Objective Optimization

Multivariate Derivatives

  • We will need some multivariate calculus.
  • For now some simple multivariate differentiation: \[\frac{\text{d}{\mathbf{a}^{\top}}{\mathbf{ w}}}{\text{d}\mathbf{ w}}=\mathbf{a}\] and \[\frac{\text{d}\mathbf{ w}^{\top}\mathbf{A}\mathbf{ w}}{\text{d}\mathbf{ w}}=\left(\mathbf{A}+\mathbf{A}^{\top}\right)\mathbf{ w}\] or if \(\mathbf{A}\) is symmetric (i.e. \(\mathbf{A}=\mathbf{A}^{\top}\)) \[\frac{\text{d}\mathbf{ w}^{\top}\mathbf{A}\mathbf{ w}}{\text{d}\mathbf{ w}}=2\mathbf{A}\mathbf{ w}.\]

Differentiate the Objective

Differentiating with respect to the vector \(\mathbf{ w}\) we obtain

\[ \frac{\partial L\left(\mathbf{ w},\sigma^2 \right)}{\partial \mathbf{ w}}=\frac{1}{\sigma^2} \sum _{i=1}^{n}\mathbf{ x}_i y_i-\frac{1}{\sigma^2} \left[\sum _{i=1}^{n}\mathbf{ x}_i\mathbf{ x}_i^{\top}\right]\mathbf{ w} \] Setting this to zero and solving leads to \[ \mathbf{ w}^{*}=\left[\sum _{i=1}^{n}\mathbf{ x}_i\mathbf{ x}_i^{\top}\right]^{-1}\sum _{i=1}^{n}\mathbf{ x}_iy_i. \]

Differentiate the Objective

Rewrite in matrix notation: \[ \sum_{i=1}^{n}\mathbf{ x}_i\mathbf{ x}_i^\top = \mathbf{X}^\top \mathbf{X} \] \[ \sum_{i=1}^{n}\mathbf{ x}_iy_i = \mathbf{X}^\top \mathbf{ y} \]

Update Equation for Global Optimum

Update Equations

  • Solve the matrix equation for \(\mathbf{ w}\). \[\mathbf{X}^\top \mathbf{X}\mathbf{ w}= \mathbf{X}^\top \mathbf{ y}\]

  • The equation for \(\left.\sigma^2\right.^{*}\) may also be found \[\left.\sigma^2\right.^{{*}}=\frac{\sum_{i=1}^{n}\left(y_i-\left.\mathbf{ w}^{*}\right.^{\top}\mathbf{ x}_i\right)^{2}}{n}.\]
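A sketch of the multivariate solution in numpy, assuming a design matrix X (with a column of ones absorbing \(c\)) and a target vector y:

```python
import numpy as np

# Solve the normal equations X^T X w = X^T y without forming the inverse.
w_star = np.linalg.solve(X.T @ X, X.T @ y)

# Maximum likelihood estimate of the noise variance from the residuals.
sigma2_star = ((y - X @ w_star) ** 2).sum() / len(y)
```

Using `solve` rather than explicitly inverting \(\mathbf{X}^\top\mathbf{X}\) is both cheaper and more numerically stable.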

Movie Body Count Data

  • Data containing movie information (year, length, rating, genre, IMDB Rating).

Multivariate Regression on Movie Body Count Data

  • Regress from features Year, Body_Count, Length_Minutes to IMDB_Rating.

Residuals

Solution with QR Decomposition

\[ \mathbf{X}^\top \mathbf{X}\boldsymbol{\beta} = \mathbf{X}^\top \mathbf{ y} \] substitute \(\mathbf{X}= \mathbf{Q}\mathbf{R}\) \[ (\mathbf{Q}\mathbf{R})^\top (\mathbf{Q}\mathbf{R})\boldsymbol{\beta} = (\mathbf{Q}\mathbf{R})^\top \mathbf{ y} \] \[ \mathbf{R}^\top (\mathbf{Q}^\top \mathbf{Q}) \mathbf{R} \boldsymbol{\beta} = \mathbf{R}^\top \mathbf{Q}^\top \mathbf{ y} \]

\[ \mathbf{R}^\top \mathbf{R} \boldsymbol{\beta} = \mathbf{R}^\top \mathbf{Q}^\top \mathbf{ y} \] \[ \mathbf{R} \boldsymbol{\beta} = \mathbf{Q}^\top \mathbf{ y} \]

  • More numerically stable.
  • Avoids the intermediate computation of \(\mathbf{X}^\top\mathbf{X}\).
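A sketch with numpy and scipy; `np.linalg.qr` returns an upper triangular \(\mathbf{R}\), so the final system can be solved by back substitution:

```python
import numpy as np
import scipy.linalg as sla

# Thin QR factorization of the design matrix: X = Q R, with Q^T Q = I.
Q, R = np.linalg.qr(X)

# Solve R beta = Q^T y by back substitution (R is upper triangular).
beta = sla.solve_triangular(R, Q.T @ y)
```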

Thanks!

Further Reading

  • For fitting linear models: Section 1.1-1.2 of Rogers and Girolami (2011)

  • Section 1.2.5 up to equation 1.65 of Bishop (2006)

  • Section 1.3 for Matrix & Vector Review of Rogers and Girolami (2011)

References

Bishop, C.M., 2006. Pattern recognition and machine learning. Springer.
Rogers, S., Girolami, M., 2011. A first course in machine learning. CRC Press.
Wasserman, L.A., 2003. All of statistics. Springer, New York.