Combining Matrix Transformations

Something that stands out in my mind as I have learned more linear algebra recently is how much saner it feels to do a lot of forward thinking before getting into the backward “solving” thinking—to, for example, create a bunch of linear transformations and strengthen my ability to do stuff with the mathematics before throwing a wrench in the works and wondering what would happen if I didn’t know the starting vectors.

So, we’ll continue that forward thinking here by looking at the effect of combining transformations. Or, if we think about a 2 × 2 matrix as representing a linear transformation, then we’ll look at combining matrices.

How about this one, then? This is a transformation in which the (1, 0) basis vector goes to (1, 1/3) and the (0, 1) basis vector goes to (–2, 1). You can see the effect this transformation has on the unshaded triangle (producing the shaded triangle).

Before we combine this with another transformation, notice that the horizontal base of the original triangle, which was parallel to the horizontal basis vector, appears to be, in its transformed form, now parallel to the transformed horizontal basis vector. Let’s test this. \[\begin{bmatrix}\mathtt{1} & \mathtt{-2}\\\mathtt{\frac{1}{3}} & \mathtt{\,\,\,\,1}\end{bmatrix}\begin{bmatrix}\mathtt{2}\\\mathtt{2}\end{bmatrix} = \begin{bmatrix}\mathtt{-2}\\\mathtt{2\frac{2}{3}}\end{bmatrix} \quad\text{and}\quad\begin{bmatrix}\mathtt{1} & \mathtt{-2}\\\mathtt{\frac{1}{3}} & \mathtt{\,\,\,\,1}\end{bmatrix}\begin{bmatrix}\mathtt{4}\\\mathtt{2}\end{bmatrix} = \begin{bmatrix}\mathtt{0}\\\mathtt{3\frac{1}{3}}\end{bmatrix}\]

The slope of the originally horizontal but now transformed base is, then, \(\mathtt{\frac{3\frac{1}{3}\, – \,2\frac{2}{3}}{0\,-\,(-2)} = \frac{\frac{2}{3}}{2} = \frac{1}{3}}\), which is the same slope as the transformed horizontal basis vector (1, 1/3).
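We can check this in a few lines of Python, using exact fractions so no rounding sneaks in (the helper name `apply` is my own):

```python
from fractions import Fraction as F

def apply(M, v):
    """Multiply a 2x2 matrix M by a 2D column vector v."""
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

A = [[F(1), F(-2)], [F(1, 3), F(1)]]   # the transformation above

p1 = apply(A, [F(2), F(2)])   # the point (-2, 8/3)
p2 = apply(A, [F(4), F(2)])   # the point (0, 10/3)

slope = (p2[1] - p1[1]) / (p2[0] - p1[0])
print(slope)   # 1/3, matching the transformed basis vector (1, 1/3)
```

The originally horizontal segment comes out with slope 1/3, parallel to where the first basis vector landed.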

Transform the Transformation

Okay, so let’s transform the transformation, as shown at the right, under this matrix: \[\begin{bmatrix}\mathtt{-1} & \mathtt{0}\\\mathtt{\,\,\,\,0} & \mathtt{\frac{1}{2}}\end{bmatrix}\]

Is it possible to multiply the two matrices to get our final (purple) transformation? Here’s how to multiply the two matrices and the result: \[\begin{bmatrix}\mathtt{-1} & \mathtt{0}\\\mathtt{\,\,\,\,0} & \mathtt{\frac{1}{2}}\end{bmatrix}\begin{bmatrix}\mathtt{1} & \mathtt{-2}\\\mathtt{\frac{1}{3}} & \mathtt{\,\,\,\,1}\end{bmatrix} = \begin{bmatrix}\mathtt{-1} & \mathtt{\,\,\,\,2}\\\mathtt{\,\,\,\,\frac{1}{6}} & \mathtt{\,\,\,\,\frac{1}{2}}\end{bmatrix}\]

You should be able to check that, yes indeed, the last matrix takes the original triangle to the purple triangle. You should also be able to test that reversing the order of the multiplication of the two matrices changes the answer completely, so matrix multiplication is not commutative. Notice also that the determinant is \(\mathtt{-0.8333…}\), or –5 sixths. This tells us that the area of the new triangle is 5 sixths that of the original. And the negative indicates the reflection the triangle underwent. The determinant of the first matrix is –0.5, and that of the second is 5 thirds. Multiply those together and you get the determinant of the combined transformations matrix.
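Here’s a short Python sketch (the helper names `matmul` and `det` are my own) that carries out the multiplication with exact fractions, confirms non-commutativity, and confirms that the determinants multiply:

```python
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two 2x2 matrices: the product applies B first, then A."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def det(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

A = [[F(1), F(-2)], [F(1, 3), F(1)]]    # the original transformation
S = [[F(-1), F(0)], [F(0), F(1, 2)]]    # the reflect-and-halve transformation

combined = matmul(S, A)                  # entries -1, 2, 1/6, 1/2
print(det(combined))                     # -5/6
print(det(S) * det(A))                   # -5/6 as well
print(matmul(A, S) == combined)          # False: order matters
```

The combined determinant, −5/6, is exactly the product of −1/2 and 5/3, and swapping the multiplication order produces a different matrix entirely.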

Inverse of a Scaling Matrix

Well, we should be pretty comfortable moving things around with vectors and matrices. We’re good on some of the forward thinking. We can think of a matrix \(\mathtt{A}\) as a mapping of one vector (or an entire set of vectors) to another vector (or to another set of vectors). Then we can think of \(\mathtt{B}\) as the matrix which undoes the mapping of \(\mathtt{A}\). So, \(\mathtt{B}\) is the inverse of \(\mathtt{A}\).

How do we figure out what \(\mathtt{A}\) and \(\mathtt{B}\) are?

\[\mathtt{A}\color{green}{\begin{bmatrix}\mathtt{\,\,3\,\,} \\\mathtt{\,\,3\,\,} \end{bmatrix}} \,= \color{green}{\begin{bmatrix}\mathtt{-4} \\\mathtt{\,\,\,\,1} \end{bmatrix}}\]
\[\mathtt{B}\color{green}{\begin{bmatrix}\mathtt{-4} \\\mathtt{\,\,\,\,\,1} \end{bmatrix}} = \color{green}{\begin{bmatrix}\mathtt{\,\,3\,\,} \\\mathtt{\,\,3\,\,} \end{bmatrix}}\]

Eyeballing Is a Lost Art in Mathematics Education

It is! We can figure out the matrix \(\mathtt{A}\) without doing any calculations. Break down the movement of the green point into horizontal and vertical components. Horizontally, the green point is reflected across the “y-axis” and then stretched another third of its distance from the y-axis. This corresponds to multiplying the horizontal component of the green point by –1.333…. For the vertical component, the green point starts at 3 and ends at 1, so the vertical component is dilated by a factor of 0.333…. We can see both of these transformations shown in the change in the sizes and directions of the blue and orange basis vectors. So, our transformation matrix \(\mathtt{A}\) is shown below. When we multiply the vector (3, 3) by this transformation matrix, we get the point, or position vector, (–4, 1). \[\begin{bmatrix}\mathtt{-\frac{4}{3}} & \mathtt{0}\\\mathtt{\,\,\,\,0} & \mathtt{\frac{1}{3}}\end{bmatrix}\begin{bmatrix}\mathtt{3}\\\mathtt{3}\end{bmatrix} = \begin{bmatrix}\mathtt{-4}\\\mathtt{\,\,\,\,1}\end{bmatrix}\]

You can see that \(\mathtt{A}\) is a scaling matrix, which is why it can be eyeballed, more or less. And what is the inverse matrix? We can use similar reasoning and work backward from (–4, 1) to (3, 3). For the horizontal component, reflect across the y-axis and scale down by three fourths. For the vertical component, multiply by 3. So, the inverse matrix, \(\mathtt{B}\), when multiplied to the vector, produces the correct starting vector: \[\begin{bmatrix}\mathtt{-\frac{3}{4}} & \mathtt{0}\\\mathtt{\,\,\,\,0} & \mathtt{3}\end{bmatrix}\begin{bmatrix}\mathtt{-4}\\\mathtt{\,\,\,\,1}\end{bmatrix} = \begin{bmatrix}\mathtt{3}\\\mathtt{3}\end{bmatrix}\]

You’ll notice that we use the reciprocals of the non-zero scaling numbers in the original matrix to produce the inverse matrix. You can do the calculations with the other points on the animation above to test it out.
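A quick round-trip check in Python (exact fractions; the helper name `apply` is my own) shows \(\mathtt{A}\) carrying (3, 3) to (–4, 1) and \(\mathtt{B}\), built from the reciprocals, carrying it back:

```python
from fractions import Fraction as F

def apply(M, v):
    """Multiply a 2x2 matrix M by a 2D vector v."""
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

A = [[F(-4, 3), F(0)], [F(0), F(1, 3)]]   # the scaling matrix
B = [[F(-3, 4), F(0)], [F(0), F(3)]]      # reciprocals on the diagonal

out = apply(A, [F(3), F(3)])    # the point (-4, 1)
back = apply(B, out)            # back to (3, 3)
print(out, back)
```

Applying \(\mathtt{B}\) after \(\mathtt{A}\) lands exactly on the starting vector, which is what "inverse" means here.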

Incidentally, we can also eyeball the eigenvectors—those vectors which don’t change direction but are merely scaled as a result of the transformations—and even the eigenvalues (the scale factor of each transformed eigenvector). The vector (1, 0) is an eigenvector, with an eigenvalue of \(\mathtt{-\frac{4}{3}}\) for the original transformation and an eigenvalue of –0.75 for the inverse, and the vector (0, 1) is an eigenvector, with an eigenvalue of \(\mathtt{\frac{1}{3}}\) for the original transformation and an eigenvalue of 3 for the inverse.

Rotations, Reflections, Scalings

I just wanted to pause briefly to showcase how some of the linear transformations we have been looking into can be represented in computerese (or at least one version of computerese). You can click on the pencil icon and then on the matrix_transform.js file in the trinket below and look for the word matrix. Change the numbers in those lines to check the effects on the transformations. You can get some fairly wild stuff.

By the way, trinket is an incredibly beautiful product if you like tinkering with all kinds of code. Grab a free account and share your work!

For this demo, I stuck with simple transformations centered at the origin of a coordinate system (so to speak). As you can imagine, there are much more elaborate things you can do when you combine transformations and move the center point around.

Reflections and Foot of a Point

So, we did rotations with matrices. Now what about reflections? The basic reflections—those given by simply flipping signs in the identity matrix, say—aren’t worth mentioning at the moment. The more puzzling reflections—those across a line that is neither horizontal nor vertical—are worth looking at.

We’ll save the more complicated way to do this for another time. The simpler way involves something called the foot of the point. Back when we were working out the distance of a point to a line, naturally we were thinking about the perpendicular distance of that point from the line. The point where that perpendicular segment intersects the line is called the foot of the point.

This foot is also the midpoint of \(\mathtt{\overline{rr’}}\), the line segment connecting the point \(\mathtt{r}\) with its reflection across the line (the line itself is the perpendicular bisector of \(\mathtt{\overline{rr’}}\)). So, if we can get the foot of the point we are reflecting, we can get the reflected point.

Determining the Foot of the Point

Let’s start with a different diagram. The line shown here can be represented by the following vector equation: \[\mathtt{p +\, α}\begin{bmatrix}\mathtt{2}\\\mathtt{1}\end{bmatrix}\] What is the ordered pair for point r’, the reflection of point r across the line?

Let’s start by finding the location of q, the foot of the point. We know p (it’s (0, 4)), and we know the line’s direction vector, \(\mathtt{d = (2, 1)}\), from the vector equation above. What we need to know is the scalar that scales d to carry us from p to q. We’ll call that scalar t.

To get at the scalar t, we can equate two cosine expressions. The equation on the left shows the cosine of β that we learned when we looked at the dot product. And the equation on the right shows the cosine of β as the simple adjacent over hypotenuse ratio, where the adjacent side is \(\mathtt{q\,-\,p = td}\): \[\mathtt{\text{cos(β)} = \frac{d \cdot (r\,-\,p)}{|d||r\,-\,p|} \quad\quad\quad\text{cos(β)} = \frac{|td|}{|r\,-\,p|}}\]

When we set the two right-hand expressions equal to each other and solve for t, we get the scalar t. (Here d is the vector [2, 1], and r – p is the vector [5, –1].) \[\mathtt{t = \frac{d \cdot (r\,-\,p)}{|d|^2}} \,\,\longrightarrow\,\, \mathtt{t =} \frac{\begin{bmatrix}\mathtt{2}\\\mathtt{1}\end{bmatrix} \cdot \begin{bmatrix}\mathtt{\,\,\,\,5}\\\mathtt{-1}\end{bmatrix}}{5} \,\,\longrightarrow\,\,\mathtt{t = 1.8}\]

Using the equation for the line at the start of this section, we see that we can set \(\mathtt{α}\) equal to t to determine the location of point q. So, point q is at \[\begin{bmatrix}\mathtt{0}\\\mathtt{4}\end{bmatrix} \mathtt{+\,\,\, 1.8}\begin{bmatrix}\mathtt{2}\\\mathtt{1}\end{bmatrix} = \begin{bmatrix}\mathtt{3.6}\\\mathtt{5.8}\end{bmatrix}\]

The Midpoint and the Reflection

Now that we have found the location of point q, we can treat it as the midpoint of \(\mathtt{\overline{rr’}}\), or the line segment connecting the point \(\mathtt{r}\) with its reflection across the line.

This is yet another thing we haven’t covered, but the midpoint between \(\mathtt{r}\) and \(\mathtt{r’}\) is \(\mathtt{q = \frac{1}{2}(r + r’)}\). Thus, once we have figured out the foot of the point, q, the reflection \(\mathtt{r’}\) of \(\mathtt{r}\) across the given line is \[\mathtt{r’ = 2q\,-\,r}\]
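The whole procedure fits in one small Python function (a sketch; the function name is my own). Since r − p = (5, −1) and p = (0, 4), the point being reflected in the running example is r = (5, 3):

```python
def reflect_across_line(p, d, r):
    """Reflect point r across the line p + t*d (p, d, r are 2D tuples)."""
    rp = (r[0] - p[0], r[1] - p[1])
    # the scalar t = (d . (r - p)) / |d|^2, derived from the cosine equations
    t = (d[0]*rp[0] + d[1]*rp[1]) / (d[0]**2 + d[1]**2)
    q = (p[0] + t*d[0], p[1] + t*d[1])      # q: the foot of the point
    return (2*q[0] - r[0], 2*q[1] - r[1])   # r' = 2q - r

r_prime = reflect_across_line((0, 4), (2, 1), (5, 3))
print(r_prime)   # approximately (2.2, 8.6)
```

Since q = (3.6, 5.8), the reflection r′ = 2q − r works out to (2.2, 8.6), up to float rounding.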

I have to say, this makes reflections seem like a lot of work anyway.

Eigenvalues and Eigenvectors

We can do all kinds of weird scalings with matrices, which we saw first here. For example, stretch the ‘horizontal’ vector (1, 0) to, say, (2, 0) and then stretch and move the ‘vertical’ vector (0, 1) to, say, (–3, 5). Our transformation matrix, then, is

\(\begin{bmatrix}\mathtt{2} & \mathtt{-3}\\\mathtt{0} & \mathtt{\,\,\,\,5}\end{bmatrix}\)

What will this do to a position vector (a point) at, say, (1, 1)? We multiply the matrix and the vector to find out:

\(\begin{bmatrix}\mathtt{2} & \mathtt{-3}\\\mathtt{0} & \mathtt{\,\,\,\,5}\end{bmatrix}\begin{bmatrix}\mathtt{1}\\\mathtt{1}\end{bmatrix} = 1\begin{bmatrix}\mathtt{2}\\\mathtt{0}\end{bmatrix} + 1\begin{bmatrix}\mathtt{-3}\\\mathtt{\,\,\,\,5}\end{bmatrix} = \begin{bmatrix}\mathtt{-1}\\\mathtt{\,\,\,\,5}\end{bmatrix}\)

The vector representing point A in this case clearly changed directions as a result of the transformation, in addition to getting stretched. However, a question that doesn’t seem worth asking now but will later is whether there are any vectors that don’t change direction as a result of the transformation—either staying the same or just getting scaled. That is, are there vectors (\(\mathtt{r_1, r_2}\)), such that (using lambda, \(\mathtt{\lambda}\), as a constant to be cool again):

\(\begin{bmatrix}\mathtt{2} & \mathtt{-3}\\\mathtt{0} & \mathtt{\,\,\,\,5}\end{bmatrix}\begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix} = \mathtt{\lambda}\begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix}\)?

A good guess would be that any ‘horizontal’ vector would not change direction, since the original (1, 0) was only scaled to (2, 0). Anyway, remembering that the identity matrix represents the do-nothing transformation, we can also write the above equation like this:

\(\begin{bmatrix}\mathtt{2} & \mathtt{-3}\\\mathtt{0} & \mathtt{\,\,\,\,5}\end{bmatrix}\begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix} = \mathtt{\lambda}\begin{bmatrix}\mathtt{1} & \mathtt{0}\\\mathtt{0} & \mathtt{1}\end{bmatrix}\begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix} = \begin{bmatrix}\mathtt{\lambda} & \mathtt{0}\\\mathtt{0} & \mathtt{\lambda}\end{bmatrix}\begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix}\)

And although we haven’t yet talked about the idea that you can combine transformation matrices (add and subtract them), let me just say now that you can do this. So, we can manipulate the sides of the equation above (the far left and far right) and rewrite using the Distributive Property in reverse to get:

\(\left(\begin{bmatrix}\mathtt{2} & \mathtt{-3}\\\mathtt{0} & \mathtt{\,\,\,\,5}\end{bmatrix} – \begin{bmatrix}\mathtt{\lambda} & \mathtt{0}\\\mathtt{0} & \mathtt{\lambda}\end{bmatrix}\right)\begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix} = \mathtt{0} \rightarrow \begin{bmatrix}\mathtt{2\,-\,\lambda} & \mathtt{-3}\\\mathtt{0} & \mathtt{5\,-\,\lambda}\end{bmatrix}\begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix} = \mathtt{0}\)

The vector (\(\mathtt{r_1}, \mathtt{r_2}\)) could, of course, always be the zero vector. But we ignore that solution and assume that it represents some non-zero vector. Given this assumption, the matrix above (the one with the lambdas subtracted along the diagonal) must have a determinant of 0. We haven’t talked about that last point yet either, but it should make some sense even now. If a transformation matrix takes a non-zero vector (a one-dimensional ray, so to speak) to zero, it collapses an entire direction of the plane, so no positive areas will survive. Take a square and reduce one of its dimensions to zero, and it becomes a one-dimensional object with no area.

Getting the Eigenvalues and Eigenvectors

Moving on, we know how to calculate the determinant, and we know that the determinant must be 0. So, \(\mathtt{(2 – \lambda)(5 – \lambda) = 0}\). The solutions here are \(\mathtt{\lambda = 2}\) and \(\mathtt{\lambda = 5}\). These two numbers are the eigenvalues. To get the eigenvectors, plug in each of the eigenvalues into that transformation matrix above and solve for the vector: \[\begin{bmatrix}\mathtt{2\,-\,2} & \mathtt{-3}\\\mathtt{0} & \mathtt{5\,-\,2}\end{bmatrix}\begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix} = \mathtt{0}\]

We have to kind of fudge a solution to that system of equations, but in the end we wind up with the result that one of the eigenvectors will be any vector of the form \(\mathtt{(c, 0)}\), where c represents any number. This confirms our earlier intuition that one of the vectors that will not change directions will be any ‘horizontal’ vector. The eigenvalue tells us that any vector of this form will be stretched by a factor of 2 in the transformation.

A similar process with the eigenvalue of 5 results in an eigenvector of the form \(\mathtt{(c, -c)}\). Any vector of this form will not change its direction as a result of the transformation, but will be scaled by a factor of 5.
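Both eigenpairs can be verified with a short sketch (helper names are my own): the eigenvalues come from the characteristic polynomial \(\mathtt{\lambda^2 - (trace)\lambda + det = 0}\), and each eigenvector should come out merely scaled by its eigenvalue:

```python
from fractions import Fraction as F

def apply(M, v):
    """Multiply a 2x2 matrix M by a 2D vector v."""
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

A = [[F(2), F(-3)], [F(0), F(5)]]

# characteristic polynomial: lambda^2 - (trace)lambda + det = 0
trace = A[0][0] + A[1][1]                    # 7
detA = A[0][0]*A[1][1] - A[0][1]*A[1][0]     # 10
disc = trace*trace - 4*detA                  # discriminant: 9
root = F(3)                                  # sqrt of the discriminant
assert root * root == disc
eigenvalues = [(trace - root) / 2, (trace + root) / 2]   # 2 and 5

# each eigenvector is merely scaled by its eigenvalue: A v = lambda v
for lam, v in [(F(2), [F(1), F(0)]), (F(5), [F(1), F(-1)])]:
    assert apply(A, v) == [lam * v[0], lam * v[1]]
```

The quadratic formula gives λ = 2 and λ = 5, and multiplying each eigenvector by the matrix just scales it, exactly as claimed.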

Check out and play with this interactive to watch how the transformation matrix works and to watch how the eigenvectors appear in the transformation. Be sure to check out the video linked at the top too!

Sicklied O’er


My grandfather used to tell me a story about a young boy who was stuck in traffic with his family for hours because an 18-wheeler had got itself pinned under an overpass bridge ahead of them. The huge truck was wedged in so strongly and strangely that a flock of engineers had descended on the scene. They argued back and forth about their favorite physical and mathematical models that would unpin the trapped vehicle and release the miles-long stream of cars idling behind it on the freeway. This bickering went on for hours—until the boy got out of his car, walked up to the group of engineers, and shouted, “Why don’t you just let the air out the tires!”

It’s a nice story, precisely because it’s so rare and noticeable. We don’t notice unbroken strings of solved problems from experts, because that’s what we expect of experts—and, for the most part, what we get from them. We notice when they fail. And, because these failures are more noticeable than the far more boring and numerous successes, we fall prey to availability bias, and assume that expert failure occurs with much more regularity than it actually does. (In turn, we start to think that it’s maybe a good idea to keep students naive and, therefore, creative and open-minded rather than have them study things that other people have already figured out.) As Tom Nichols writes in The Death of Expertise:

At the root of all this is an inability among laypeople to understand that experts being wrong on occasion about certain issues is not the same thing as experts being wrong consistently on everything. The fact of the matter is that experts are more often right than wrong, especially on essential matters of fact. And yet the public constantly searches for the loopholes in expert knowledge that will allow them to disregard all expert advice they don’t like.

A 2008 study which put this folk notion of expert inflexibility to the test compared chess experts and novices, and measured the famous Einstellung effect in both groups across three experiments.

In the first experiment, the experts were given the board on the left and were instructed to find the shortest solution. The board on the left is designed to activate a motif familiar to chess experts (and thus activate Einstellung)—the smothered mate motif—which can be carried out using 5 moves. A shorter solution (3 moves) also exists, however.

If the experts failed to find the three-move solution, they were then given the board on the right. This board can be solved by the shorter three-move solution but not by the Einstellung motif of the smothered mate. The group of novices in the experiment were all given this second board (the one on the right) featuring the three-move mate solution without the Einstellung motif as well.


If knowledge corrupts insight, as it were, then the experts would, by and large, be fixated by the smothered mate sequence and miss the three-move solution. And this is indeed what happened—sort of. What the researchers found was that level of expertise correlated strongly with the results. Grandmasters (those with the highest levels of chess expertise) were not taken in by the Einstellung motif at all. Every one of them found the optimal three-move solution. However, experts with lower ratings, such as International Masters, Masters, and Candidate Masters, all experienced the Einstellung effect, with 50%, 18%, and 0%, respectively, finding the shorter solution on the first board, even though all of them found the optimal solution when it was presented on the second board, in the absence of the smothered mate motif.

The novices’ performance also correlated positively with rating. Sixty-three percent of the highest rated (Class A) players in the novices group found the optimal solution on the right board, while 13% of Class B players and 0% of Class C players found the three-move solution. Thus, the Einstellung effect made International Masters perform like Class A players, Masters perform like Class B players, and Candidate Masters perform like Class C players.

Experiment 2 replicated the above finding in a slightly more naturalistic setting, and Experiment 3 did so with strategic Einstellungs instead of tactical ones.

Knowledge Is Essential for Cognitive Flexibility

While this study shows that Einstellung effects are powerful and observable in expert performance, it also demonstrates that the notion that expertise causes cognitive inflexibility is probably wrong.

The failure of the ordinary experts to find a better solution when they had already found a good one supports the view that experts can be vulnerable to inflexible thought patterns. But the performance of the super experts shows that ‘experts are inflexible’ would be the wrong conclusion to draw from this failure. The Einstellung effect is very powerful—the problem solving capability of our ordinary experts was reduced by about three SDs when a well-known solution was apparent to them. But the super experts, at least with the range of difficulty of problems used here, were less susceptible to the effect. Greater expertise led to greater flexibility, not less.

Knowledge, and the expertise inevitably linked to it, were also responsible for both forms of expert flexibility demonstrated in the experiments. The optimal solution was more likely to be noticed immediately, even before the nominally more familiar solution, among some super experts. Hence, expertise helped super experts avoid an Einstellung situation in the first place because they immediately found the optimal solution. Even when experts did not find the optimal solution immediately, expertise and knowledge were positively associated with the probability of finding the optimal solution after the non-optimal solution had been generated first. Finally, when knowledge discrepancy was minimized, as in the third experiment, super experts had sufficient resources to outperform their slightly weaker colleagues. In all three instances, knowledge was inextricably and positively related to expert flexibility. . . .

The training required to produce experts should not be seen as a source of potential problems but as a way to acquire the skill to deal effectively and flexibly with all the situations that can arise in the domain. Creativity is a consequence of expertise rather than expertise being a hindrance to creativity. To produce something novel and useful it is necessary first to master the previous knowledge in the domain. More knowledge empowers creativity rather than hurting it (e.g., Kulkarni & Simon, 1988; Simonton, 1997; Weisberg, 1993, 1999).

Rotations with Matrices

Okay, now let’s move stuff around with linear algebra. We’ll eventually do rotations, reflections, and maybe translations too, while mixing that up with stretchings and skewings and other things that matrices can do for us.

We learned here that a matrix gives us information about two arrows—the x-axis arrow and the y-axis arrow. What we really mean is that a 2 × 2 matrix represents a transformation of 2D space. This transformation is given by 2 column vectors—the 2 columns of the matrix. The identity matrix, as we saw previously, represents the do-nothing transformation:

\[\begin{bmatrix}\mathtt{\color{blue}{1}} & \mathtt{\color{orange}{0}}\\\mathtt{\color{blue}{0}} & \mathtt{\color{orange}{1}}\end{bmatrix} \leftarrow \begin{bmatrix}\mathtt{\color{blue}{1}}\\\mathtt{\color{blue}{0}}\end{bmatrix} \text{ and } \begin{bmatrix}\mathtt{\color{orange}{0}}\\\mathtt{\color{orange}{1}}\end{bmatrix}\]

Another way to look at this matrix is that it tells us about the 2D space we’re looking at and how to interpret ANY vector in that space. So, what does the vector (1, 2) mean here? It means take 1 of the (1, 0) vectors and add 2 of the (0, 1) vectors.

\[\begin{bmatrix}\mathtt{1} & \mathtt{0}\\\mathtt{0} & \mathtt{1}\end{bmatrix}\begin{bmatrix}\mathtt{1}\\\mathtt{2}\end{bmatrix} = \mathtt{1}\begin{bmatrix}\mathtt{1}\\\mathtt{0}\end{bmatrix} + \mathtt{2}\begin{bmatrix}\mathtt{0}\\\mathtt{1}\end{bmatrix} = \begin{bmatrix}\mathtt{(1)(1) + (2)(0)}\\\mathtt{(1)(0) + (2)(1)}\end{bmatrix}\]

But what if we reflect the entire coordinate plane across the y-axis? That’s a new system, and it’s a system given by where the blue and orange vectors would be under that reflection:

\[\begin{bmatrix}\mathtt{\color{blue}{-1}} & \mathtt{\color{orange}{0}}\\\mathtt{\color{blue}{\,\,\,\,0}} & \mathtt{\color{orange}{1}}\end{bmatrix} \leftarrow \begin{bmatrix}\mathtt{\color{blue}{-1}}\\\mathtt{\color{blue}{\,\,\,\,0}}\end{bmatrix} \text{ and } \begin{bmatrix}\mathtt{\color{orange}{0}}\\\mathtt{\color{orange}{1}}\end{bmatrix}\]

In that new system, we can guess where the vector (1, 2) will end up. It will just be reflected across the y-axis. But matrix-vector multiplication allows us to figure that out by just multiplying the vector and the matrix:

\[\begin{bmatrix}\mathtt{-1} & \mathtt{0}\\\mathtt{\,\,\,\,0} & \mathtt{1}\end{bmatrix}\begin{bmatrix}\mathtt{1}\\\mathtt{2}\end{bmatrix} = \mathtt{1}\begin{bmatrix}\mathtt{-1}\\\mathtt{\,\,\,\,0}\end{bmatrix} + \mathtt{2}\begin{bmatrix}\mathtt{0}\\\mathtt{1}\end{bmatrix} = \begin{bmatrix}\mathtt{-1}\\\mathtt{\,\,\,\,2}\end{bmatrix}\]
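This column-combination view of matrix-vector multiplication translates directly into a few lines of code (a minimal sketch; the helper name is my own):

```python
def apply(M, v):
    """Interpret v in the system M: v1 copies of column 1 plus v2 copies of column 2."""
    return [v[0]*M[0][0] + v[1]*M[0][1],
            v[0]*M[1][0] + v[1]*M[1][1]]

identity = [[1, 0], [0, 1]]
reflect_y = [[-1, 0], [0, 1]]   # reflection across the y-axis

print(apply(identity, [1, 2]))    # [1, 2]: the do-nothing transformation
print(apply(reflect_y, [1, 2]))   # [-1, 2]: reflected across the y-axis
```

Swap in any pair of column vectors for `reflect_y` and the same function carries out that transformation.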

This opens up a ton of possibilities for specifying different kinds of transformations. And it makes it pretty straightforward to specify transformations and play with them—just set the two column vectors of your matrix and see what happens! We can rotate and reflect the column vectors and scale them up together or separately.


Let’s start with rotations. And we’ll throw in some scaling too, just to make it more interesting. The image shows a coordinate system that has been rotated –135°, by rotating our column vectors from the identity matrix through that angle. The coordinate system has also been dilated by a factor of 0.5. This results in \(\mathtt{\triangle{ABC}}\) rotated –135° and scaled down by a half as shown.

What matrix represents this new rotated and scaled down system? The rotation of the first column vector, (1, 0), can be represented as (\(\mathtt{cos\,θ, sin\,θ}\)). And the second column vector, which is (0, 1) before the rotation, is perpendicular to the first column vector, so we just flip the components and make one of them the opposite of what it originally was:
(\(\mathtt{-sin\,θ, cos\,θ}\)). So, a general rotation matrix looks like the matrix on the left. The rotation matrix for a –135° rotation is on the right: \[\begin{bmatrix}\mathtt{cos \,θ} & \mathtt{-sin\,θ}\\\mathtt{sin\,θ} & \mathtt{\,\,\,\,\,cos\,θ}\end{bmatrix}\quad\quad\begin{bmatrix}\mathtt{-\frac{\sqrt{2}}{2}} & \mathtt{\,\,\,\,\frac{\sqrt{2}}{2}}\\\mathtt{-\frac{\sqrt{2}}{2}} & \mathtt{-\frac{\sqrt{2}}{2}}\end{bmatrix}\]
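A quick numerical check of the –135° rotation matrix (a sketch using Python's math module) confirms that every entry comes out to ±√2/2:

```python
import math

theta = math.radians(-135)
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

half_root2 = math.sqrt(2) / 2   # about 0.7071
expected = [[-half_root2,  half_root2],
            [-half_root2, -half_root2]]

# every entry of R should match the matrix from the post
for row, erow in zip(R, expected):
    for entry, e in zip(row, erow):
        assert abs(entry - e) < 1e-12
```

The signs match the matrix on the right above: both column vectors point down-left and down-right after the clockwise turn of 135°.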

You can eyeball that the rotation matrix is correct by interpreting the columns of the matrix as the new positions of the horizontal vector and vertical vector, respectively (the new coordinates they are pointing to). A –135° rotation is a clockwise rotation of 90° + 45°.

Now for the scaling, or dilation by a factor of 0.5. This is accomplished by the matrix on the left, which, when multiplied by the rotation matrix on the right, will give us the one combo transformation matrix: \[\begin{bmatrix}\mathtt{\frac{1}{2}} & \mathtt{0}\\\mathtt{0} & \mathtt{\frac{1}{2}}\end{bmatrix}\begin{bmatrix}\mathtt{-\frac{\sqrt{2}}{2}} & \mathtt{\,\,\,\,\frac{\sqrt{2}}{2}}\\\mathtt{-\frac{\sqrt{2}}{2}} & \mathtt{-\frac{\sqrt{2}}{2}}\end{bmatrix} = \begin{bmatrix}\mathtt{-\frac{\sqrt{2}}{4}} & \mathtt{\,\,\,\,\frac{\sqrt{2}}{4}}\\\mathtt{-\frac{\sqrt{2}}{4}} & \mathtt{-\frac{\sqrt{2}}{4}}\end{bmatrix}\]

The result is another 2 × 2 matrix, with two column vectors. The calculations below show how we find those two new column vectors: \[\mathtt{-\frac{\sqrt{2}}{2}}\begin{bmatrix}\mathtt{\frac{1}{2}}\\\mathtt{0}\end{bmatrix} - \mathtt{\frac{\sqrt{2}}{2}}\begin{bmatrix}\mathtt{0}\\\mathtt{\frac{1}{2}}\end{bmatrix} = \begin{bmatrix}\mathtt{-\frac{\sqrt{2}}{4}}\\\mathtt{-\frac{\sqrt{2}}{4}}\end{bmatrix}\quad\quad\mathtt{\frac{\sqrt{2}}{2}}\begin{bmatrix}\mathtt{\frac{1}{2}}\\\mathtt{0}\end{bmatrix} - \mathtt{\frac{\sqrt{2}}{2}}\begin{bmatrix}\mathtt{0}\\\mathtt{\frac{1}{2}}\end{bmatrix} = \begin{bmatrix}\mathtt{\,\,\,\,\frac{\sqrt{2}}{4}}\\\mathtt{-\frac{\sqrt{2}}{4}}\end{bmatrix}\]

Now for the Point of Rotation

We’ve got just one problem left. Our transformation matrix, let’s call it \(\mathtt{A}\), is perfect, but we don’t rotate around the origin. So, we have to do some adding to get our final expression. To rotate, for example, point B around point C, we don’t use point B’s position vector from the origin—we rewrite this vector as though point C were the origin. So, point B has a position vector of B – C = (1, 0) in the point C–centered system. Once we’re done rotating this new position vector for point B, we have to add the position vector for C back to the result. So, we get: \[\mathtt{B’} = \begin{bmatrix}\mathtt{-\frac{\sqrt{2}}{4}} & \mathtt{\,\,\,\,\frac{\sqrt{2}}{4}}\\\mathtt{-\frac{\sqrt{2}}{4}} & \mathtt{-\frac{\sqrt{2}}{4}}\end{bmatrix}\begin{bmatrix}\mathtt{1}\\\mathtt{0}\end{bmatrix} + \begin{bmatrix}\mathtt{2}\\\mathtt{2}\end{bmatrix} = \begin{bmatrix}\mathtt{2\,-\,\frac{\sqrt{2}}{4}}\\\mathtt{2\,-\,\frac{\sqrt{2}}{4}}\end{bmatrix}\]

Which gives us a result, for point B’, of approximately (1.65, 1.65). We can do the calculation for point A as well: \[\,\,\,\,\,\mathtt{A’} = \begin{bmatrix}\mathtt{-\frac{\sqrt{2}}{4}} & \mathtt{\,\,\,\,\frac{\sqrt{2}}{4}}\\\mathtt{-\frac{\sqrt{2}}{4}} & \mathtt{-\frac{\sqrt{2}}{4}}\end{bmatrix}\begin{bmatrix}\mathtt{-1}\\\mathtt{\,\,\,\,2}\end{bmatrix} + \begin{bmatrix}\mathtt{2}\\\mathtt{2}\end{bmatrix} = \begin{bmatrix}\mathtt{2\,+\,\frac{3\sqrt{2}}{4}}\\\mathtt{2\,-\,\frac{\sqrt{2}}{4}}\end{bmatrix}\]

This puts A’ at about (3.06, 1.65). Looks right! By the way, the determinant is \(\mathtt{\frac{1}{4}}\)—go calculate that for yourself. This is no surprise, of course, since a dilation by a factor of 0.5 will scale areas down by one fourth. The rotation has no effect on the determinant, because rotations do not affect areas.

Our general formula, then, for a rotation through \(\mathtt{θ}\) of some point \(\mathtt{x}\) (as represented by a position vector) about some point \(\mathtt{r}\) (also represented by a position vector) is: \[\mathtt{x’} = \begin{bmatrix}\mathtt{cos\,θ} & \mathtt{-sin\,θ}\\\mathtt{sin\,θ} & \mathtt{\,\,\,\,\,cos\,θ}\end{bmatrix}\begin{bmatrix}\mathtt{x_1\,-\,r_1}\\\mathtt{x_2\,-\,r_2}\end{bmatrix} + \begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix}\]
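The general formula drops straight into code. This sketch (the function name and the optional scale factor `k` are my own additions) reproduces the –135°, half-size example above:

```python
import math

def rotate_about(x, r, theta, k=1.0):
    """Rotate point x about point r by theta radians, scaling by factor k."""
    c, s = k * math.cos(theta), k * math.sin(theta)
    dx, dy = x[0] - r[0], x[1] - r[1]    # re-center on r
    # rotation matrix times (x - r), then add r back
    return (c*dx - s*dy + r[0], s*dx + c*dy + r[1])

theta = math.radians(-135)
print(rotate_about((3, 2), (2, 2), theta, k=0.5))   # B' ~ (1.65, 1.65)
print(rotate_about((1, 4), (2, 2), theta, k=0.5))   # A' ~ (3.06, 1.65)
```

With `k=1.0` this is the pure rotation-about-a-point formula from the post; setting `k=0.5` folds in the dilation as well.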

The Determinant, Briefly

I want to get to moving stuff around using vectors and matrices, but I’ll stop for a second and touch on the determinant, since linear algebra seems to think it’s important. And, to be honest, it is kind of interesting.

The determinant is the area of the parallelogram created by two vectors. Two vectors will always create a parallelogram like the one shown below, unless they are just scaled versions of each other—but we’ll get to that.

The two vectors shown here are \(\color{blue}{\mathtt{u} = \begin{bmatrix}\mathtt{u_1}\\\mathtt{u_2}\end{bmatrix}}\) and \(\color{red}{\mathtt{v} = \begin{bmatrix}\mathtt{v_1}\\\mathtt{v_2}\end{bmatrix}}\).

We can determine the area of the parallelogram by first determining the area of the large rectangle and then subtracting the triangle areas. Note, by the way, that there are two pairs of two congruent triangles.

So, the area of the large rectangle is \(\mathtt{(u_1 + -v_1)(u_2 + v_2)}\). The negative is interesting. We need it because we want to use positive values when calculating the area of the rectangle. If you play around with different pairs of vectors and different rectangles, you will notice that one of the vector components will always have to be negative in the area calculation, if a parallelogram is formed.

The two large congruent right triangles have a combined area of \(\mathtt{u_{1}u_{2}}\). And the two smaller congruent right triangles have a combined area of \(\mathtt{-v_{1}v_{2}}\). Thus, distributing and subtracting, we get \[\mathtt{u_{1}u_{2} + u_{1}v_{2} – v_{1}u_{2} – v_{1}v_{2} – u_{1}u_{2} – (-v_{1}v_{2})}\]

Then, after simplifying, we have \(\mathtt{u_{1}v_{2} – u_{2}v_{1}}\). If the two vectors u and v represented a linear transformation and were written as column vectors in a matrix, then we could say that there is a determinant of the matrix and show the determinant of the matrix in the way it is usually presented: \[\begin{vmatrix}\mathtt{u_1} & \mathtt{v_1}\\\mathtt{u_2} & \mathtt{v_2}\end{vmatrix} = \mathtt{u_{1}v_{2} – u_{2}v_{1}}\]

One thing to note is that this is a signed area. The sign records a change in orientation that we won’t go into at the moment. In fact, describing the determinant as an area is a little misleading. When you look at transformations, the determinant tells you the scale factor of the change in area. A determinant of 1 would mean that areas did not change, etc. Also, if we have vectors that are simply scaled versions of one another—the components of one vector are scaled versions of the other—then the determinant will be zero, which is pretty much what we want, since the area will be zero. Let’s use lambda (\(\mathtt{\lambda}\)) as our scalar to be cool. \[\begin{vmatrix}\mathtt{u_1} & \mathtt{\lambda u_1}\\\mathtt{u_2} & \mathtt{\lambda u_2}\end{vmatrix} = \mathtt{\lambda u_{1}u_{2} – \lambda u_{1}u_{2} = 0}\]
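As a quick check of the algebra above, here is a minimal Python sketch (my own, not from the post) of the 2 × 2 determinant, including the zero determinant for scaled vectors:

```python
def det2(u, v):
    """Signed area of the parallelogram spanned by 2D vectors u and v:
    u1*v2 - u2*v1."""
    return u[0] * v[1] - u[1] * v[0]

print(det2((3, 1), (-1, 2)))   # 3*2 - 1*(-1) = 7

# Scaled (parallel) vectors span no area, so the determinant is zero.
lam = 2.5
print(det2((3, 1), (lam * 3, lam * 1)))  # 0.0
```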

A Matrix and a Transformation

So, we’ve jumped around a bit in what is turning into an introduction to linear algebra. The previous posts show the ground we’ve covered so far—although, saying it that way implies that we’ve moved along continuous patches of ground, which is certainly not true. We skipped over adding and scaling vectors and have focused on concepts that have close analogs to current high school algebra and geometry topics.

Now we’ll jump to the concept of a matrix. A 2 × 2 matrix gives you information about two arrows—the x-axis arrow, if you will, and the y-axis arrow. The matrix below, for example, tells you that you are in the familiar xy coordinate plane, with the x arrow, or x vector, extending from the origin to (1, 0) and the y arrow, or y vector, going from the origin to (0, 1).

\[\begin{bmatrix}\mathtt{\color{blue}{1}} & \mathtt{\color{orange}{0}}\\\mathtt{\color{blue}{0}} & \mathtt{\color{orange}{1}}\end{bmatrix}\]

This is a kind of home-base matrix, and it is called the identity matrix. If we multiply a vector by this matrix, we’ll always get back the vector we put in. The equation below shows how this matrix-vector multiplication is done with the identity matrix and the vector (1, 2), as shown at the right.

\(\begin{bmatrix}\mathtt{1} & \mathtt{0}\\\mathtt{0} & \mathtt{1}\end{bmatrix}\begin{bmatrix}\mathtt{1}\\\mathtt{2}\end{bmatrix} = \mathtt{1}\begin{bmatrix}\mathtt{1}\\\mathtt{0}\end{bmatrix} + \mathtt{2}\begin{bmatrix}\mathtt{0}\\\mathtt{1}\end{bmatrix} = \begin{bmatrix}\mathtt{(1)(1) + (2)(0)}\\\mathtt{(1)(0) + (2)(1)}\end{bmatrix}\)

As you can see on the far right of the equation, the result is (1 + 0, 0 + 2), or (1, 2), the vector we started with.
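To make the column-by-column arithmetic concrete, here is a small Python sketch (my own illustration, with a hypothetical `apply` helper) of matrix–vector multiplication, with the matrix stored as its two column vectors:

```python
def apply(col_x, col_y, vec):
    """Multiply a 2x2 matrix (given as its two columns) by vec:
    vec[0] * col_x + vec[1] * col_y."""
    x, y = vec
    return (x * col_x[0] + y * col_y[0],
            x * col_x[1] + y * col_y[1])

# The identity matrix hands back whatever vector we put in.
print(apply((1, 0), (0, 1), (1, 2)))  # (1, 2)
```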

A Linear Transformation

Now let’s take the vector at (1, 2) and map it to (0, 2). We’re looking for a matrix that can accomplish this—a transformation of the coordinate system that will map (1, 2) to (0, 2). If we shrink the horizontal vector to (0, 0) and keep the vertical vector the same, that would seem to do the trick.

\(\begin{bmatrix}\mathtt{0} & \mathtt{0}\\\mathtt{0} & \mathtt{1}\end{bmatrix}\begin{bmatrix}\mathtt{1}\\\mathtt{2}\end{bmatrix} = \mathtt{1}\begin{bmatrix}\mathtt{0}\\\mathtt{0}\end{bmatrix} + \mathtt{2}\begin{bmatrix}\mathtt{0}\\\mathtt{1}\end{bmatrix} = \begin{bmatrix}\mathtt{(1)(0) + (2)(0)}\\\mathtt{(1)(0) + (2)(1)}\end{bmatrix}\)

And it does! This matrix is a projection matrix: it takes any vector and shmooshes it onto the y-axis. We could do the same for any vector and the x-axis by zeroing out the second column of the matrix and keeping the first column the same.

You can try out all kinds of different numbers to see their effects. You can do rotations, reflections, and scalings, among other things. The transformation shown at right, for example, where the two column vectors are taken to (1, 1) and (–1, 1), respectively, maps the vector (1, 2) to the vector (–1, 3).

\(\begin{bmatrix}\mathtt{1} & \mathtt{-1}\\\mathtt{1} & \mathtt{\,\,\,\,1}\end{bmatrix}\begin{bmatrix}\mathtt{1}\\\mathtt{2}\end{bmatrix} = \mathtt{1}\begin{bmatrix}\mathtt{1}\\\mathtt{1}\end{bmatrix} + \mathtt{2}\begin{bmatrix}\mathtt{-1}\\\mathtt{\,\,\,\,1}\end{bmatrix} = \begin{bmatrix}\mathtt{(1)(1) + (2)(-1)}\\\mathtt{(1)(1) + (2)(1)}\end{bmatrix}\)
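The two transformations above can be checked with a few lines of Python (my own sketch), combining the matrix columns directly as x times the first column plus y times the second:

```python
x, y = 1, 2  # the input vector (1, 2)

# Columns (0, 0) and (0, 1): the vector lands on the y-axis.
print((x * 0 + y * 0, x * 0 + y * 1))   # (0, 2)

# Columns (1, 1) and (-1, 1): the vector (1, 2) goes to (-1, 3).
print((x * 1 + y * -1, x * 1 + y * 1))  # (-1, 3)
```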

You may notice, by the way, that what we did with the matrix above was to first rotate the column vectors by 45° and then scale them up by a factor of \(\mathtt{\sqrt{2}}\). We can do each of these transformations with just one matrix. \[\begin{bmatrix}\mathtt{\frac{\sqrt{2}}{2}} & \mathtt{-\frac{\sqrt{2}}{2}}\\\mathtt{\frac{\sqrt{2}}{2}} & \mathtt{\frac{\sqrt{2}}{2}}\end{bmatrix} \leftarrow \textrm{Rotate by 45}^\circ \textrm{.} \quad \quad \begin{bmatrix}\mathtt{\sqrt{2}} & \mathtt{0}\\\mathtt{0} & \mathtt{\sqrt{2}}\end{bmatrix} \leftarrow \textrm{Scale up by }\sqrt{2}\textrm{.}\]

Then, we can combine these matrices by multiplying them to produce the transformation matrix we needed. Each column of the second matrix is transformed by the first matrix, and the results become the columns of the product. We’ll look at that more in the future.
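As a sketch of that combination (my own code, not the post's), multiplying the scaling matrix by the rotation matrix recovers the matrix with columns (1, 1) and (–1, 1):

```python
import math

def matmul(a, b):
    """2x2 matrix product; matrices stored as [[row1], [row2]]."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

s = math.sqrt(2)
rotate = [[s/2, -s/2], [s/2, s/2]]   # rotate by 45 degrees
scale = [[s, 0], [0, s]]             # scale up by sqrt(2)

combined = matmul(scale, rotate)
# Rounding cleans up floating-point noise from sqrt(2) * sqrt(2) / 2.
print([[round(e, 10) for e in row] for row in combined])  # [[1.0, -1.0], [1.0, 1.0]]
```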

Distance to a Line

I’d almost always prefer to solve a problem using what I already know—if that can be done—rather than learn something new in order to solve it. After that, I’m happy to see how the new learning relates to what I already know. That’s what I’ll do here. There is a way to use the dot product efficiently to determine the distance of a point to a line, but we already know enough to get at it another way, so let’s start there.

So, suppose we know this information about the diagram at the right: \[\mathtt{p=}\begin{bmatrix}\mathtt{4}\\\mathtt{2}\end{bmatrix}, \,\,\,\mathtt{x=}\begin{bmatrix}\mathtt{2}\\\mathtt{1}\end{bmatrix}, \,\,\,\mathtt{r=}\begin{bmatrix}\mathtt{-1}\\\mathtt{-3}\end{bmatrix}\] And we want to know the distance \(\mathtt{r}\) is from the line.

An equation for the distance of \(\mathtt{r}\) to the line, then—a symbolic way to identify this distance—might be given in words as follows: go to point \(\mathtt{p}\), then move along the line by some scalar multiple of a vector in the line’s direction. From that point, move along the vector perpendicular to the line until you get to point \(\mathtt{r}\). In symbols, that could be written as: \[\begin{bmatrix}\mathtt{4}\\\mathtt{2}\end{bmatrix}\mathtt{+\,\,\,\, j}\begin{bmatrix}\mathtt{2}\\\mathtt{1}\end{bmatrix}\mathtt{+\,\,\,\,k}\begin{bmatrix}\mathtt{-1}\\\mathtt{\,\,\,\,\,2}\end{bmatrix}\mathtt{\,\,=\,\,}\begin{bmatrix}\mathtt{-1}\\\mathtt{-3}\end{bmatrix}\] With the vector and scalar names, we could write this as \(\mathtt{p + j(p – x) + ka = r}\). The distance to the line depends on our figuring out what \(\mathtt{k}\) is. Once we have that, the distance is just \(\mathtt{\sqrt{(ka_1)^2 + (ka_2)^2}}\).

We can subtract vectors from both sides of an equation just like we do with scalar values. Subtracting the vector (4, 2) from both sides, we get an equation which can be rewritten as a system of two equations \[\mathtt{j}\begin{bmatrix}\mathtt{2}\\\mathtt{1}\end{bmatrix}\mathtt{+\,\,\,\,k}\begin{bmatrix}\mathtt{-1}\\\mathtt{\,\,\,\,\,2}\end{bmatrix}\mathtt{\,\,=\,\,}\begin{bmatrix}\mathtt{-5}\\\mathtt{-5}\end{bmatrix} \rightarrow \left\{\begin{aligned}\mathtt{2j – k} &= \mathtt{-5} \\ \mathtt{j + 2k} &= \mathtt{-5}\end{aligned}\right.\]

Solving that system gives us \(\mathtt{j = -3}\) and \(\mathtt{k = -1}\). So, the distance of \(\mathtt{r}\) to the line is \(\mathtt{\sqrt{(-1 \cdot -1)^2 + (-1 \cdot 2)^2} = \sqrt{5}.}\)
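A quick way to check those values (a sketch of my own) is Cramer's rule, which reuses the 2 × 2 determinant from earlier in this post:

```python
import math

def solve2(a11, a12, a21, a22, b1, b2):
    """Solve a11*j + a12*k = b1, a21*j + a22*k = b2 by Cramer's rule."""
    det = a11 * a22 - a21 * a12
    j = (b1 * a22 - b2 * a12) / det
    k = (a11 * b2 - a21 * b1) / det
    return j, k

j, k = solve2(2, -1, 1, 2, -5, -5)    # 2j - k = -5 and j + 2k = -5
distance = math.hypot(k * -1, k * 2)  # |k * a| with a = (-1, 2)
print(j, k, distance)  # j = -3.0, k = -1.0, distance = sqrt(5)
```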

Can We Get to the Dot Product?

Maybe we can get to the dot product. I’m not sure at this point. But there are some interesting things to point out about what we’ve already done. First, we can see that the vector \(\mathtt{j(p-x)}\) is a scaling of vector \(\mathtt{(p-x)}\) along the line, which, when added to \(\mathtt{p}\), brings us to the right point on the line where some scaling of the perpendicular \(\mathtt{a}\) can intersect to give us the distance. The scalar \(\mathtt{j=-3}\) tells us to reverse the vector (2, 1) and stretch it by a factor of 3. Adding to \(\mathtt{p}\) means that all of that happens starting at point \(\mathtt{p}\).

Then the scalar \(\mathtt{k=-1}\) reverses the direction of \(\mathtt{a}\) to take us to \(\mathtt{r}\).

We can then use this diagram to at least show how the dot product gets us there. We modify it a little to include the parts we will need and talk about.

Okay, here we go. Let’s consider the dot product \(\mathtt{-a \cdot (r – p)}\). We know that since \(\mathtt{-a}\) and \(\mathtt{x-p}\) are perpendicular, their dot product is 0, but this is \(\mathtt{r-p}\), not \(\mathtt{x-p}\). So, \(\mathtt{-a \cdot (r – p)}\) will likely have some nonzero value: \[\mathtt{-a \cdot (r – p) = |-a||r-p|\textrm{cos}(θ)}\] We got this by rearranging the dot product formula from an earlier post.

We also know, however, that we can use the cosine of the same angle in representing the distance, d: \[\mathtt{d=|r-p|\textrm{cos}(θ)}\]

Putting those two equations together, we get \(\mathtt{d = \frac{a \cdot (r – p)}{|a|}}\).

We can forget about the negative in front of \(\mathtt{a}\). But you may want to play around with it to convince yourself of that. A nice feature of determining the distance this way is that the distance is signed. It is negative below the line and positive above it.
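Here is a small Python sketch (mine, using the post's numbers) of that signed-distance formula; the negative result matches \(\mathtt{r}\) sitting below the line:

```python
import math

def signed_distance(a, p, r):
    """d = a . (r - p) / |a|, where a is perpendicular to the line through p."""
    rx, ry = r[0] - p[0], r[1] - p[1]
    return (a[0] * rx + a[1] * ry) / math.hypot(a[0], a[1])

d = signed_distance((-1, 2), (4, 2), (-1, -3))
print(d)  # -sqrt(5): negative because r is below the line
```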