Least Squares with Linear Algebra

In brief, linear regression is about finding the line of best fit to a data set. If you’re looking for the linear algebra way of doing this, you will most likely find it by searching for the term least squares.

In the basic scenario, you’ve got some two-dimensional data, \(\mathtt{(x, y)}\) coordinates, and you want to find the equation for a straight line that is as close as possible to each point. Such a scenario is shown below, though here the line has already been graphed and the equation for the line of best fit displayed. (But feel free to change the data in the table or move the points around to see how the line of best fit changes.)

x     y
10    8.04
8     6.95
13    7.58
9     8.81
11    8.33
14    9.96
6     7.24
4     4.26
12    10.84
7     4.82
5     5.68

So, let’s imagine that we haven’t found this line yet. We know what we are looking for is a line of the form \(\mathtt{y=mx+b}\). In linear algebra terms, we want the vector \(\mathtt{y}\) (11 rows, 1 column) to equal the vector \(\mathtt{x}\) (11 rows, 1 column) times an unknown slope \(\mathtt{m}\), plus an intercept vector \(\mathtt{b}\) (11 rows, 1 column, with every entry equal to the unknown intercept).

The first problem we have to fix is that the 11 × 1 vector \(\mathtt{x}\) needs to become an 11 × 2 matrix \(\mathtt{X}\) so that our equations come out right. So we’ll pad \(\mathtt{x}\) with a column of 1s and then we’ll be able to call it \(\mathtt{X}\). (We can pad on either side, left or right, as long as we remember to interchange \(\mathtt{b}\) and \(\mathtt{m}\) to keep the equations straight.) The second problem we can fix is that we don’t need a separate intercept vector and a separate slope. We can combine them into a single unknown vector and form an equivalent matrix equation that means the same thing as \(\mathtt{y=mx+b}\). We need the equation to look like this: \[\begin{bmatrix}\mathtt{8.04}\\\mathtt{6.95}\\\mathtt{7.58}\\\mathtt{8.81}\\\mathtt{8.33}\\\mathtt{9.96}\\\mathtt{7.24}\\\mathtt{4.26}\\\mathtt{10.84}\\\mathtt{4.82}\\\mathtt{5.68}\end{bmatrix} = \begin{bmatrix}\mathtt{1}&\mathtt{10}\\\mathtt{1}&\mathtt{8}\\\mathtt{1}&\mathtt{13}\\\mathtt{1}&\mathtt{9}\\\mathtt{1}&\mathtt{11}\\\mathtt{1}&\mathtt{14}\\\mathtt{1}&\mathtt{6}\\\mathtt{1}&\mathtt{4}\\\mathtt{1}&\mathtt{12}\\\mathtt{1}&\mathtt{7}\\\mathtt{1}&\mathtt{5}\end{bmatrix}\begin{bmatrix}\mathtt{b}\\\mathtt{m}\end{bmatrix}\]

Multiply the matrix and vector on the right side of that equation, and you get, for the first equation, \(\mathtt{8.04=b+(m)(10)}\). That’s equivalent to \(\mathtt{y=mx+b}\) for the first \(\mathtt{(x, y)}\) data point \(\mathtt{(10, 8.04)}\). The matrix and vector setup ensures that all the equations for all the points are of the correct form.

In linear algebra terms, we have rewritten the equation as \(\mathtt{y=Xv}\), where \(\mathtt{y}\) is an 11 × 1 vector, \(\mathtt{X}\) is an 11 × 2 matrix (padded with 1s), and \(\mathtt{v}\) is a 2 × 1 vector containing the unknown intercept \(\mathtt{b}\) and the unknown slope \(\mathtt{m}\) of the best fit line.
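
To make that setup concrete, here is a minimal sketch in Python with NumPy (the arrays just hold the data from the table above) that builds the padded matrix \(\mathtt{X}\):

```python
import numpy as np

# The (x, y) data from the table above
x = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], dtype=float)
y = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96,
              7.24, 4.26, 10.84, 4.82, 5.68])

# Pad x with a column of 1s on the left so that each row of X is [1, x_i];
# then X @ [b, m] produces b + m * x_i for every data point at once.
X = np.column_stack([np.ones_like(x), x])

print(X.shape)  # (11, 2)
print(X[0])     # [ 1. 10.]
```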

It’s an Approximation

At this point in the explanation, it’s worth noticing that we’re about to make another shift. The first was from the real data to the matrix algebra setup. Now we shift again, away from that setup itself and toward just figuring out what that unknown \(\mathtt{v}\) is.

As an analogy, the system shown below features a 3D vector (2, 0, –2) that does not live in the plane spanned by the two column vectors of the matrix. Thus, there is no solution \(\mathtt{(j, k)}\). \[\begin{bmatrix}\mathtt{1}&\mathtt{\,\,\,\,1}\\\mathtt{1}&\mathtt{-3}\\\mathtt{1}&\mathtt{\,\,\,\,1}\end{bmatrix}\begin{bmatrix}\mathtt{j}\\\mathtt{k}\end{bmatrix}=\begin{bmatrix}\mathtt{\,\,\,\,2}\\\mathtt{\,\,\,\,0}\\\mathtt{-2}\end{bmatrix}\longleftarrow\text{no solutions}\]

Similarly, the two columns of our \(\mathtt{X}\) matrix span a plane in 11-dimensional space. In order for \(\mathtt{v}\) to be a solution to our original matrix equation above, \(\mathtt{y}\) would have to live in that plane too. But we already know that it doesn’t: if it did, the data points would all lie exactly on a straight line. It’s true that \(\mathtt{y}\) is an 11-dimensional vector, just like each column of \(\mathtt{X}\), but it doesn’t live in the same plane.
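
You can check that small 3D system numerically; a quick sketch (again with NumPy) shows that the best available \(\mathtt{(j, k)}\) still leaves a nonzero residual, which is exactly what “no solutions” means here:

```python
import numpy as np

# The 3D example: the columns of A span a plane,
# but the target vector (2, 0, -2) does not lie in that plane.
A = np.array([[1.0,  1.0],
              [1.0, -3.0],
              [1.0,  1.0]])
target = np.array([2.0, 0.0, -2.0])

# lstsq finds the (j, k) that makes A @ [j, k] as close to the target as possible.
jk, residual, rank, _ = np.linalg.lstsq(A, target, rcond=None)
print(jk)        # the best (j, k) we can do
print(residual)  # nonzero, so no exact solution exists
```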

The closest approximation we can get to \(\mathtt{y}\) in the plane of \(\mathtt{X}\) is \(\mathtt{Xv=p}\), where \(\mathtt{p}\) is the projection of \(\mathtt{y}\) onto the plane (another thing we’ll have to come back to). The vector \(\mathtt{q}\) connecting \(\mathtt{y}\) with its projection \(\mathtt{p}\) is perpendicular to the plane (and, thus, to \(\mathtt{p}\)).

Now we can write some more equations. For example, we know we want \(\mathtt{Xv=p}\), but it’s also true that \(\mathtt{p+q=y}\). And we can do some manipulation to show that all the column vectors of \(\mathtt{X}\) are perpendicular to \(\mathtt{q}\). Assume for a second that our \(\mathtt{X}\) is a 2 × 2 matrix whose columns span the plane of the first two coordinates. Append a row of 0s to make it a 3 × 2 matrix, transpose it (so that the multiplication is defined), and multiply it by \(\mathtt{q}\), which in this picture points straight along the third axis. The same basic idea applies to our original 11 × 2 matrix \(\mathtt{X}\). \[\begin{bmatrix}\mathtt{x_{11}}&\mathtt{x_{12}}\\\mathtt{x_{21}}&\mathtt{x_{22}}\end{bmatrix}\rightarrow\begin{bmatrix}\mathtt{x_{11}}&\mathtt{x_{12}}\\\mathtt{x_{21}}&\mathtt{x_{22}}\\\mathtt{0}&\mathtt{0}\end{bmatrix}\rightarrow\begin{bmatrix}\mathtt{x_{11}}&\mathtt{x_{21}}&\mathtt{0}\\\mathtt{x_{12}}&\mathtt{x_{22}}&\mathtt{0}\end{bmatrix}\] \[\begin{bmatrix}\mathtt{x_{11}}&\mathtt{x_{21}}&\mathtt{0}\\\mathtt{x_{12}}&\mathtt{x_{22}}&\mathtt{0}\end{bmatrix}\begin{bmatrix}\mathtt{0}\\\mathtt{0}\\\mathtt{q_3}\end{bmatrix}=\begin{bmatrix}\mathtt{0}\\\mathtt{0}\end{bmatrix}\] This gives us a key equation: \(\mathtt{X^{T}q=0}\).

Since \(\mathtt{q=y-p}\), substituting for \(\mathtt{q}\) gets us \(\mathtt{X^{T}(y-p)=0}\). Then since \(\mathtt{Xv=p}\), substituting for \(\mathtt{p}\) brings us to \(\mathtt{X^{T}(y-Xv)=0}\). Distribute to get \(\mathtt{X^{T}y-X^{T}Xv=0}\), which means \(\mathtt{X^{T}y=X^{T}Xv}\). Multiply both sides on the left by the inverse of \(\mathtt{X^{T}X}\), and the vector \(\mathtt{v}\) that we’re after, then, is \[\mathtt{v=(X^{T}X)^{-1}X^{T}y}\]
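
Here is a small sketch (NumPy again, same data as before) that checks these relationships: solve the normal equations \(\mathtt{X^{T}Xv=X^{T}y}\) for \(\mathtt{v}\), form the projection \(\mathtt{p=Xv}\) and the error vector \(\mathtt{q=y-p}\), and confirm that \(\mathtt{X^{T}q}\) is (numerically) zero:

```python
import numpy as np

x = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], dtype=float)
y = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96,
              7.24, 4.26, 10.84, 4.82, 5.68])
X = np.column_stack([np.ones_like(x), x])

# Normal equations: X^T X v = X^T y
v = np.linalg.solve(X.T @ X, X.T @ y)

p = X @ v   # projection of y onto the plane spanned by X's columns
q = y - p   # the error vector connecting y to its projection

print(X.T @ q)  # ~[0, 0]: q is perpendicular to both columns of X
```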

That formula gives us the slope and intercept of our best fit line. Below is one way the least squares solution can be calculated with a little Python. The calculation I used for the above interactive to get the best fit line is way more complicated. If I had known about the linear algebra way when I made it, I would definitely have gone with that instead.
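
Something along these lines works (a sketch with NumPy, using the data from the table above):

```python
import numpy as np

# Data from the table above
x = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], dtype=float)
y = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96,
              7.24, 4.26, 10.84, 4.82, 5.68])

# Pad x with 1s so that X @ [b, m] means b + m * x
X = np.column_stack([np.ones_like(x), x])

# v = (X^T X)^(-1) X^T y
b, m = np.linalg.inv(X.T @ X) @ X.T @ y
print(f"best fit line: y = {m:.3f}x + {b:.3f}")  # roughly y = 0.500x + 3.000

# In practice, np.linalg.lstsq does the same job with better numerics:
v, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Forming \(\mathtt{(X^{T}X)^{-1}}\) explicitly mirrors the formula above; for larger or less well-behaved data sets, the lstsq (or solve) route is the safer choice.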
