Eigenvalues and Eigenvectors

We can do all kinds of weird scalings with matrices, which we saw first here. For example, stretch the ‘horizontal’ vector (1, 0) to, say, (2, 0) and then stretch and move the ‘vertical’ vector (0, 1) to, say, (-3, 5). Our transformation matrix, then, is

\(\begin{bmatrix}\mathtt{2} & \mathtt{-3}\\\mathtt{0} & \mathtt{\,\,\,\,5}\end{bmatrix}\)

What will this do to a position vector (a point) at, say, (1, 1)? We multiply the matrix and the vector to find out:

\(\begin{bmatrix}\mathtt{2} & \mathtt{-3}\\\mathtt{0} & \mathtt{\,\,\,\,5}\end{bmatrix}\begin{bmatrix}\mathtt{1}\\\mathtt{1}\end{bmatrix} = 1\begin{bmatrix}\mathtt{2}\\\mathtt{0}\end{bmatrix} + 1\begin{bmatrix}\mathtt{-3}\\\mathtt{\,\,\,\,5}\end{bmatrix} = \begin{bmatrix}\mathtt{-1}\\\mathtt{\,\,\,\,5}\end{bmatrix}\)
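(If you'd rather let a computer do the multiplying, here's a quick sketch in Python, assuming you have numpy installed; A and v are just labels for the matrix and point above.)

```python
import numpy as np

# The transformation matrix: (1, 0) goes to (2, 0), and (0, 1) goes to (-3, 5).
A = np.array([[2, -3],
              [0,  5]])

# The position vector for the point (1, 1).
v = np.array([1, 1])

print(A @ v)  # [-1  5]
```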

The vector representing our point at (1, 1) clearly changed direction as a result of the transformation, in addition to getting stretched. However, a question that doesn’t seem worth asking now but will later is whether there are any vectors that don’t change direction as a result of the transformation, either staying the same or just getting scaled. That is, are there vectors \(\mathtt{(r_1, r_2)}\) such that (using lambda, \(\mathtt{\lambda}\), as a constant to be cool again):

\(\begin{bmatrix}\mathtt{2} & \mathtt{-3}\\\mathtt{0} & \mathtt{\,\,\,\,5}\end{bmatrix}\begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix} = \mathtt{\lambda}\begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix}\)?

A good guess would be that any ‘horizontal’ vector would not change direction, since the original (1, 0) was only scaled to (2, 0). Anyway, remembering that the identity matrix represents the do-nothing transformation, we can also write the above equation like this:

\(\begin{bmatrix}\mathtt{2} & \mathtt{-3}\\\mathtt{0} & \mathtt{\,\,\,\,5}\end{bmatrix}\begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix} = \mathtt{\lambda}\begin{bmatrix}\mathtt{1} & \mathtt{0}\\\mathtt{0} & \mathtt{1}\end{bmatrix}\begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix} = \begin{bmatrix}\mathtt{\lambda} & \mathtt{0}\\\mathtt{0} & \mathtt{\lambda}\end{bmatrix}\begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix}\)

And although we haven’t yet talked about the idea that you can combine transformation matrices (add and subtract them), let me just say now that you can. So, we can subtract the right side of the equation above from both sides and then factor out the vector, using the Distributive Property in reverse, to get:

\(\left(\begin{bmatrix}\mathtt{2} & \mathtt{-3}\\\mathtt{0} & \mathtt{\,\,\,\,5}\end{bmatrix} - \begin{bmatrix}\mathtt{\lambda} & \mathtt{0}\\\mathtt{0} & \mathtt{\lambda}\end{bmatrix}\right)\begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix} = \mathtt{0} \rightarrow \begin{bmatrix}\mathtt{2\,-\,\lambda} & \mathtt{-3}\\\mathtt{0} & \mathtt{5\,-\,\lambda}\end{bmatrix}\begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix} = \mathtt{0}\)

The vector \(\mathtt{(r_1, r_2)}\) could, of course, always be the zero vector. But we ignore that solution and assume that it represents some non-zero vector. Given this assumption, the matrix above, with the lambdas subtracted along its diagonal, must have a determinant of 0. We haven’t talked about that last point yet either, but it should make some sense even now: the determinant measures how a transformation scales areas, and if a transformation matrix takes a non-zero vector (a one-dimensional ray, so to speak) to zero, no positive areas will survive. If you take a square and reduce one of its dimensions to zero, it becomes a one-dimensional object with no area.
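Here’s a quick numpy sketch of that determinant idea, using the same matrix: subtract lambda down the diagonal, and the determinant collapses to 0 exactly when lambda is one of the special values we’re hunting for.

```python
import numpy as np

A = np.array([[2, -3],
              [0,  5]])
I = np.eye(2)  # the do-nothing (identity) matrix

# lambda = 2 squashes some non-zero vector to zero, so the determinant is 0.
print(np.linalg.det(A - 2 * I))  # 0.0

# lambda = 3 does not, so the determinant is not 0: (2 - 3)(5 - 3) = -2.
print(np.linalg.det(A - 3 * I))  # -2.0
```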

Getting the Eigenvalues and Eigenvectors

Moving on, we know how to calculate the determinant, and we know that the determinant must be 0. Since the lower-left entry of our matrix is 0, the off-diagonal product vanishes and the determinant is just the product of the diagonal entries: \(\mathtt{(2 - \lambda)(5 - \lambda) = 0}\). The solutions here are \(\mathtt{\lambda = 2}\) and \(\mathtt{\lambda = 5}\). These two numbers are the eigenvalues. To get the eigenvectors, plug each eigenvalue into that transformation matrix above and solve for the vector. Starting with \(\mathtt{\lambda = 2}\): \[\begin{bmatrix}\mathtt{2\,-\,2} & \mathtt{-3}\\\mathtt{0} & \mathtt{5\,-\,2}\end{bmatrix}\begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix} = \mathtt{0}\]
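For what it’s worth, numpy will do all of this for you in one shot with np.linalg.eig. The eigenvectors come back as the columns of the second result, scaled to unit length, so they’ll look a little different from the versions we work out by hand below.

```python
import numpy as np

A = np.array([[2, -3],
              [0,  5]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # [2. 5.]
print(eigenvectors)  # columns: (1, 0) and (1, -1) scaled to unit length
```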

We have to kind of fudge a solution to that system of equations: both rows tell us only that \(\mathtt{r_2 = 0}\) (\(\mathtt{-3r_2 = 0}\) and \(\mathtt{3r_2 = 0}\)), while \(\mathtt{r_1}\) never appears at all, so it is free to be anything. In the end we wind up with the result that one of the eigenvectors will be any vector of the form \(\mathtt{(c, 0)}\), where c represents any number. This confirms our earlier intuition that the vectors that will not change direction include any ‘horizontal’ vector. The eigenvalue tells us that any vector of this form will be stretched by a factor of 2 in the transformation.

A similar process with the eigenvalue of 5 results in an eigenvector of the form \(\mathtt{(c, -c)}\): this time the top row gives \(\mathtt{-3r_1 - 3r_2 = 0}\), or \(\mathtt{r_2 = -r_1}\). Any vector of this form will not change its direction as a result of the transformation, but will be scaled by a factor of 5.
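To close the loop, here’s one last little sketch checking that both families of eigenvectors behave as advertised, taking c = 1 in each case:

```python
import numpy as np

A = np.array([[2, -3],
              [0,  5]])

# An eigenvector of the form (c, 0), with c = 1: stretched by a factor of 2.
print(A @ np.array([1, 0]))   # [2 0], which is 2 * (1, 0)

# An eigenvector of the form (c, -c), with c = 1: stretched by a factor of 5.
print(A @ np.array([1, -1]))  # [ 5 -5], which is 5 * (1, -1)
```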

Check out and play with this interactive to watch how the transformation matrix works and to watch how the eigenvectors appear in the transformation. Be sure to check out the video linked at the top too!
