Rotations with Matrices

Okay, now let’s move stuff around with linear algebra. We’ll eventually do rotations, reflections, and maybe translations too, while mixing that up with stretchings and skewings and other things that matrices can do for us.

We learned here that a matrix gives us information about two arrows—the x-axis arrow and the y-axis arrow. What we really mean is that a 2 × 2 matrix represents a transformation of 2D space. This transformation is given by 2 column vectors—the 2 columns of the matrix. The identity matrix, as we saw previously, represents the do-nothing transformation:

\[\begin{bmatrix}\mathtt{\color{blue}{1}} & \mathtt{\color{orange}{0}}\\\mathtt{\color{blue}{0}} & \mathtt{\color{orange}{1}}\end{bmatrix} \leftarrow \begin{bmatrix}\mathtt{\color{blue}{1}}\\\mathtt{\color{blue}{0}}\end{bmatrix} \text{and} \begin{bmatrix}\mathtt{\color{orange}{0}}\\\mathtt{\color{orange}{1}}\end{bmatrix}\]


Another way to look at this matrix is that it tells us about the 2D space we’re looking at and how to interpret ANY vector in that space. So, what does the vector (1, 2) mean here? It means take 1 of the (1, 0) vectors and add 2 of the (0, 1) vectors.

\[\begin{bmatrix}\mathtt{1} & \mathtt{0}\\\mathtt{0} & \mathtt{1}\end{bmatrix}\begin{bmatrix}\mathtt{1}\\\mathtt{2}\end{bmatrix} = \mathtt{1}\begin{bmatrix}\mathtt{1}\\\mathtt{0}\end{bmatrix} + \mathtt{2}\begin{bmatrix}\mathtt{0}\\\mathtt{1}\end{bmatrix} = \begin{bmatrix}\mathtt{(1)(1) + (2)(0)}\\\mathtt{(1)(0) + (2)(1)}\end{bmatrix}\]


But what if we reflect the entire coordinate plane across the y-axis? That’s a new system, and it’s a system given by where the blue and orange vectors would be under that reflection:

\[\begin{bmatrix}\mathtt{\color{blue}{-1}} & \mathtt{\color{orange}{0}}\\\mathtt{\color{blue}{\,\,\,\,0}} & \mathtt{\color{orange}{1}}\end{bmatrix} \leftarrow \begin{bmatrix}\mathtt{\color{blue}{-1}}\\\mathtt{\color{blue}{\,\,\,\,0}}\end{bmatrix} \text{and} \begin{bmatrix}\mathtt{\color{orange}{0}}\\\mathtt{\color{orange}{1}}\end{bmatrix}\]

In that new system, we can guess where the vector (1, 2) will end up. It will just be reflected across the y-axis. But matrix-vector multiplication allows us to figure that out by just multiplying the vector and the matrix:

\[\begin{bmatrix}\mathtt{-1} & \mathtt{0}\\\mathtt{\,\,\,\,0} & \mathtt{1}\end{bmatrix}\begin{bmatrix}\mathtt{1}\\\mathtt{2}\end{bmatrix} = \mathtt{1}\begin{bmatrix}\mathtt{-1}\\\mathtt{\,\,\,\,0}\end{bmatrix} + \mathtt{2}\begin{bmatrix}\mathtt{0}\\\mathtt{1}\end{bmatrix} = \begin{bmatrix}\mathtt{-1}\\\mathtt{\,\,\,\,2}\end{bmatrix}\]
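
If you want to play with this on a computer, here’s a minimal sketch (my own example, assuming NumPy is available, not part of the original discussion) that builds the reflection matrix from its two columns and applies it to (1, 2):

```python
import numpy as np

# Columns are where (1, 0) and (0, 1) land under the reflection across the y-axis.
reflect_y = np.column_stack([[-1, 0], [0, 1]])

v = np.array([1, 2])
print(reflect_y @ v)   # [-1  2], the reflection of (1, 2) across the y-axis
```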


This opens up a ton of possibilities for specifying different kinds of transformations. And it makes it pretty straightforward to specify transformations and play with them—just set the two column vectors of your matrix and see what happens! We can rotate and reflect the column vectors and scale them up together or separately.

Rotations

Let’s start with rotations. And we’ll throw in some scaling too, just to make it more interesting. The image shows a coordinate system that has been rotated –135°, which is to say, each column vector of the identity matrix has been rotated through that angle. The coordinate system has also been dilated by a factor of 0.5. This results in \(\mathtt{\triangle{ABC}}\) rotated –135° and scaled down by a half as shown.

What matrix represents this new rotated and scaled down system? The rotation of the first column vector, (1, 0), can be represented as (\(\mathtt{cos\,θ, sin\,θ}\)). And the second column vector, which is (0, 1) before the rotation, is perpendicular to the first column vector, so we just flip the components and make one of them the opposite of what it originally was:
(\(\mathtt{-sin\,θ, cos\,θ}\)). So, a general rotation matrix looks like the matrix on the left. The rotation matrix for a –135° rotation is on the right: \[\begin{bmatrix}\mathtt{cos \,θ} & \mathtt{-sin\,θ}\\\mathtt{sin\,θ} & \mathtt{\,\,\,\,\,cos\,θ}\end{bmatrix}\quad\quad\begin{bmatrix}\mathtt{-\frac{\sqrt{2}}{2}} & \mathtt{\,\,\,\,\frac{\sqrt{2}}{2}}\\\mathtt{-\frac{\sqrt{2}}{2}} & \mathtt{-\frac{\sqrt{2}}{2}}\end{bmatrix}\]

You can eyeball that the rotation matrix is correct by interpreting the columns of the matrix as the new positions of the horizontal vector and vertical vector, respectively (the new coordinates they are pointing to). A –135° rotation is a clockwise rotation of 90° + 45°.
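
As a quick check of that general form, here’s a small sketch (assuming NumPy) that builds the rotation matrix for any angle from its two columns, (cos θ, sin θ) and (–sin θ, cos θ), and prints the –135° case:

```python
import numpy as np

def rotation_matrix(theta_degrees):
    """Columns are the images of (1, 0) and (0, 1) under the rotation."""
    t = np.radians(theta_degrees)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

print(rotation_matrix(-135))
# [[-0.70710678  0.70710678]
#  [-0.70710678 -0.70710678]]   each entry is plus or minus sqrt(2)/2
```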

Now for the scaling, or dilation by a factor of 0.5. This is accomplished by the matrix on the left, which, when multiplied by the rotation matrix on the right, will give us the one combo transformation matrix: \[\begin{bmatrix}\mathtt{\frac{1}{2}} & \mathtt{0}\\\mathtt{0} & \mathtt{\frac{1}{2}}\end{bmatrix}\begin{bmatrix}\mathtt{-\frac{\sqrt{2}}{2}} & \mathtt{\,\,\,\,\frac{\sqrt{2}}{2}}\\\mathtt{-\frac{\sqrt{2}}{2}} & \mathtt{-\frac{\sqrt{2}}{2}}\end{bmatrix} = \begin{bmatrix}\mathtt{-\frac{\sqrt{2}}{4}} & \mathtt{\,\,\,\,\frac{\sqrt{2}}{4}}\\\mathtt{-\frac{\sqrt{2}}{4}} & \mathtt{-\frac{\sqrt{2}}{4}}\end{bmatrix}\]

The result is another 2 × 2 matrix, with two column vectors. The calculations below show how we find those two new column vectors: \[\mathtt{-\frac{\sqrt{2}}{2}}\begin{bmatrix}\mathtt{\frac{1}{2}}\\\mathtt{0}\end{bmatrix} + -\frac{\sqrt{2}}{2}\begin{bmatrix}\mathtt{0}\\\mathtt{\frac{1}{2}}\end{bmatrix} = \begin{bmatrix}\mathtt{-\frac{\sqrt{2}}{4}}\\\mathtt{-\frac{\sqrt{2}}{4}}\end{bmatrix}\quad\quad\mathtt{\frac{\sqrt{2}}{2}}\begin{bmatrix}\mathtt{\frac{1}{2}}\\\mathtt{0}\end{bmatrix} + -\frac{\sqrt{2}}{2}\begin{bmatrix}\mathtt{0}\\\mathtt{\frac{1}{2}}\end{bmatrix} = \begin{bmatrix}\mathtt{\,\,\,\,\frac{\sqrt{2}}{4}}\\\mathtt{-\frac{\sqrt{2}}{4}}\end{bmatrix}\]
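
Here’s the same composition done numerically, as a sketch (again assuming NumPy): multiplying the dilation matrix by the rotation matrix gives the combo matrix whose entries are ±√2/4.

```python
import numpy as np

t = np.radians(-135)
rotate = np.array([[np.cos(t), -np.sin(t)],
                   [np.sin(t),  np.cos(t)]])
scale = np.array([[0.5, 0.0],
                  [0.0, 0.5]])

combo = scale @ rotate    # dilate by 0.5 after rotating by -135 degrees
print(combo)
# [[-0.35355339  0.35355339]
#  [-0.35355339 -0.35355339]]   each entry is plus or minus sqrt(2)/4
```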

Now for the Point of Rotation

We’ve got just one problem left. Our transformation matrix, let’s call it \(\mathtt{A}\), is perfect, but it rotates and scales around the origin, and we want to rotate around point C. So, we have to do some adding to get our final expression. To rotate, for example, point B around point C, we don’t use point B’s position vector from the origin—we rewrite this vector as though point C were the origin. So, point B has a position vector of B – C = (1, 0) in the point C–centered system. Once we’re done rotating this new position vector for point B, we have to add the position vector for C back to the result. So, we get: \[\mathtt{B’} = \begin{bmatrix}\mathtt{-\frac{\sqrt{2}}{4}} & \mathtt{\,\,\,\,\frac{\sqrt{2}}{4}}\\\mathtt{-\frac{\sqrt{2}}{4}} & \mathtt{-\frac{\sqrt{2}}{4}}\end{bmatrix}\begin{bmatrix}\mathtt{1}\\\mathtt{0}\end{bmatrix} + \begin{bmatrix}\mathtt{2}\\\mathtt{2}\end{bmatrix} = \begin{bmatrix}\mathtt{2\,-\,\frac{\sqrt{2}}{4}}\\\mathtt{2\,-\,\frac{\sqrt{2}}{4}}\end{bmatrix}\]

Which gives us a result, for point B’, of approximately (1.65, 1.65). We can do the calculation for point A as well: \[\,\,\,\,\,\mathtt{A’} = \begin{bmatrix}\mathtt{-\frac{\sqrt{2}}{4}} & \mathtt{\,\,\,\,\frac{\sqrt{2}}{4}}\\\mathtt{-\frac{\sqrt{2}}{4}} & \mathtt{-\frac{\sqrt{2}}{4}}\end{bmatrix}\begin{bmatrix}\mathtt{-1}\\\mathtt{\,\,\,\,2}\end{bmatrix} + \begin{bmatrix}\mathtt{2}\\\mathtt{2}\end{bmatrix} = \begin{bmatrix}\mathtt{2\,+\,\frac{3\sqrt{2}}{4}}\\\mathtt{2\,-\,\frac{\sqrt{2}}{4}}\end{bmatrix}\]

This puts A’ at about (3.06, 1.65). Looks right! By the way, the determinant is \(\mathtt{\frac{1}{4}}\)—go calculate that for yourself. This is no surprise, of course, since a dilation by a factor of 0.5 will scale areas down by one fourth. The rotation has no effect on the determinant, because rotations do not affect areas.

Our general formula, then, for a rotation through \(\mathtt{θ}\) of some point \(\mathtt{x}\) (as represented by a position vector) about some point \(\mathtt{r}\) (also represented by a position vector) is: \[\mathtt{x’} = \begin{bmatrix}\mathtt{cos\,θ} & \mathtt{-sin\,θ}\\\mathtt{sin\,θ} & \mathtt{\,\,\,\,\,cos\,θ}\end{bmatrix}\begin{bmatrix}\mathtt{x_1\,-\,r_1}\\\mathtt{x_2\,-\,r_2}\end{bmatrix} + \begin{bmatrix}\mathtt{r_1}\\\mathtt{r_2}\end{bmatrix}\]
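
Wrapped up as a function, the formula looks like this. This is a sketch assuming NumPy, and the point coordinates C = (2, 2), B = (3, 2), and A = (1, 4) are the ones implied by the calculations above:

```python
import numpy as np

def rotate_about(x, r, theta_degrees, scale=1.0):
    """Rotate point x about point r by theta degrees, with an optional dilation."""
    t = np.radians(theta_degrees)
    A = scale * np.array([[np.cos(t), -np.sin(t)],
                          [np.sin(t),  np.cos(t)]])
    return A @ (np.asarray(x) - np.asarray(r)) + np.asarray(r)

C = (2, 2)
print(rotate_about((3, 2), C, -135, scale=0.5))   # B' is about (1.65, 1.65)
print(rotate_about((1, 4), C, -135, scale=0.5))   # A' is about (3.06, 1.65)
```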

The Determinant, Briefly

I want to get to moving stuff around using vectors and matrices, but I’ll stop for a second and touch on the determinant, since linear algebra seems to think it’s important. And, to be honest, it is kind of interesting.

The determinant is the area of the parallelogram created by two vectors. Two vectors will always create a parallelogram like the one shown below, unless they are just scaled versions of each other—but we’ll get to that.

The two vectors shown here are \(\color{blue}{\mathtt{u} = \begin{bmatrix}\mathtt{u_1}\\\mathtt{u_2}\end{bmatrix}}\) and \(\color{red}{\mathtt{v} = \begin{bmatrix}\mathtt{v_1}\\\mathtt{v_2}\end{bmatrix}}\).

We can determine the area of the parallelogram by first determining the area of the large rectangle and then subtracting the triangle areas. Note, by the way, that there are two pairs of two congruent triangles.

So, the area of the large rectangle is \(\mathtt{(u_1 + -v_1)(u_2 + v_2)}\). The negative is interesting. We need it because, in this diagram, \(\mathtt{v_1}\) is negative, so \(\mathtt{-v_1}\) is the positive horizontal width we want to use when calculating the area of the rectangle. If you play around with different pairs of vectors and different rectangles, you will notice that one of the vector components will always have to be negative in the area calculation, if a parallelogram is formed.

The two large congruent right triangles have a combined area of \(\mathtt{u_{1}u_{2}}\). And the two smaller congruent right triangles have a combined area of \(\mathtt{-v_{1}v_{2}}\). Thus, distributing and subtracting, we get \[\mathtt{u_{1}u_{2} + u_{1}v_{2} – v_{1}u_{2} – v_{1}v_{2} – u_{1}u_{2} – (-v_{1}v_{2})}\]

Then, after simplifying, we have \(\mathtt{u_{1}v_{2} – u_{2}v_{1}}\). If the two vectors u and v represented a linear transformation and were written as column vectors in a matrix, then we could say that there is a determinant of the matrix and show the determinant of the matrix in the way it is usually presented: \[\begin{vmatrix}\mathtt{u_1} & \mathtt{v_1}\\\mathtt{u_2} & \mathtt{v_2}\end{vmatrix} = \mathtt{u_{1}v_{2} – u_{2}v_{1}}\]

One thing to note is that this is a signed area. The sign records a change in orientation that we won’t go into at the moment. Also, if we have vectors that are simply scaled versions of one another—the components of one vector are scaled versions of the other—then the determinant will be zero, which is pretty much what we want, since the area will be zero. Let’s use lambda (\(\mathtt{\lambda}\)) as our scalar to be cool. \[\,\,\,\,\,\,\quad\,\,\,\,\,\begin{vmatrix}\mathtt{u_1} & \mathtt{\lambda u_1}\\\mathtt{u_2} & \mathtt{\lambda u_2}\end{vmatrix} = \mathtt{\lambda u_{1}u_{2} – \lambda u_{1}u_{2} = 0}\]
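
If you want to check the area interpretation numerically, here’s a short sketch (my own example vectors, assuming NumPy) comparing the \(\mathtt{u_{1}v_{2} – u_{2}v_{1}}\) formula with numpy.linalg.det, and confirming that a scaled copy of a vector gives a zero determinant:

```python
import numpy as np

u = np.array([3.0, 1.0])    # example vectors of my own, not from the post
v = np.array([1.0, 2.0])

print(u[0] * v[1] - u[1] * v[0])              # 5.0, from u1*v2 - u2*v1
print(np.linalg.det(np.column_stack([u, v]))) # same value, up to floating point

lam = 2.5
print(np.linalg.det(np.column_stack([u, lam * u])))  # ~0: a scaled copy makes no area
```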

A Matrix and a Transformation

So, we’ve jumped around a bit in what is turning into an introduction to linear algebra. The posts here, here, here, here, here, and here show the ground we’ve covered so far—although, saying it that way implies that we’ve moved along continuous patches of ground, which is certainly not true. We skipped over adding and scaling vectors and have focused on concepts which have close analogs to current high school algebra and geometry topics.

Now we’ll jump to the concept of a matrix. A matrix gives you information about two arrows—the x-axis arrow, if you will, and the y-axis arrow. The matrix below, for example, tells you that you are in the familiar xy coordinate plane, with the x arrow, or x vector, extending from the origin to (1, 0) and the y arrow, or y vector, going from the origin to (0, 1).

\[\begin{bmatrix}\mathtt{\color{blue}{1}} & \mathtt{\color{orange}{0}}\\\mathtt{\color{blue}{0}} & \mathtt{\color{orange}{1}}\end{bmatrix}\]

This is a kind of home-base matrix, and it is called the identity matrix. If we multiply a vector by this matrix, we’ll always get back the vector we put in. The equation below shows how this matrix-vector multiplication is done with the identity matrix and the vector (1, 2), as shown at the right.

\(\begin{bmatrix}\mathtt{1} & \mathtt{0}\\\mathtt{0} & \mathtt{1}\end{bmatrix}\begin{bmatrix}\mathtt{1}\\\mathtt{2}\end{bmatrix} = \mathtt{1}\begin{bmatrix}\mathtt{1}\\\mathtt{0}\end{bmatrix} + \mathtt{2}\begin{bmatrix}\mathtt{0}\\\mathtt{1}\end{bmatrix} = \begin{bmatrix}\mathtt{(1)(1) + (2)(0)}\\\mathtt{(1)(0) + (2)(1)}\end{bmatrix}\)

As you can see on the far right of the equation, the result is (1 + 0, 0 + 2), or (1, 2), the vector we started with.

A Linear Transformation

Now let’s take the vector at (1, 2) and map it to (0, 2). We’re looking for a matrix that can accomplish this—a transformation of the coordinate system that will map (1, 2) to (0, 2). If we shrink the horizontal vector to (0, 0) and keep the vertical vector the same, that would seem to do the trick.

\(\begin{bmatrix}\mathtt{0} & \mathtt{0}\\\mathtt{0} & \mathtt{1}\end{bmatrix}\begin{bmatrix}\mathtt{1}\\\mathtt{2}\end{bmatrix} = \mathtt{1}\begin{bmatrix}\mathtt{0}\\\mathtt{0}\end{bmatrix} + \mathtt{2}\begin{bmatrix}\mathtt{0}\\\mathtt{1}\end{bmatrix} = \begin{bmatrix}\mathtt{(1)(0) + (2)(0)}\\\mathtt{(1)(0) + (2)(1)}\end{bmatrix}\)

And it does! This matrix is a projection matrix, and it takes any vector and shmooshes it onto the y-axis. We could do the same for any vector and the x-axis by zeroing out the second column of the matrix and keeping the first column the same.

You can try out all kinds of different numbers to see their effects. You can do rotations, reflections, and scalings, among other things. The transformation shown at right, for example, where the two column vectors are taken to (1, 1) and (–1, 1), respectively, maps the vector (1, 2) to the vector (–1, 3).

\(\begin{bmatrix}\mathtt{1} & \mathtt{-1}\\\mathtt{1} & \mathtt{\,\,\,\,1}\end{bmatrix}\begin{bmatrix}\mathtt{1}\\\mathtt{2}\end{bmatrix} = \mathtt{1}\begin{bmatrix}\mathtt{1}\\\mathtt{1}\end{bmatrix} + \mathtt{2}\begin{bmatrix}\mathtt{-1}\\\mathtt{\,\,\,\,1}\end{bmatrix} = \begin{bmatrix}\mathtt{(1)(1) + (2)(-1)}\\\mathtt{(1)(1) + (2)(1)}\end{bmatrix}\)

You may notice, by the way, that what we did with the matrix above was to first rotate the column vectors by 45° and then scale them up by a factor of \(\mathtt{\sqrt{2}}\). We can do each of these transformations with just one matrix. \[\begin{bmatrix}\mathtt{\frac{\sqrt{2}}{\,\,2}} & \mathtt{\frac{-\sqrt{2}}{\,\,2}}\\\mathtt{\frac{\sqrt{2}}{\,\,2}} & \mathtt{\,\,\,\,\frac{\sqrt{2}}{2}}\end{bmatrix} \leftarrow \textrm{Rotate by 45}^\circ \textrm{.} \quad \quad \begin{bmatrix}\mathtt{\sqrt{2}} & \mathtt{0}\\\mathtt{0} & \mathtt{\sqrt{2}}\end{bmatrix} \leftarrow \textrm{Scale up by }\sqrt{2}\textrm{.}\]

Then, we can combine these matrices by multiplying them to produce the transformation matrix we needed. Each column of the right-hand matrix supplies the weights for a combination of the columns of the left-hand matrix, and that combination becomes the corresponding column of the product. We’ll look at that more in the future.
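
Here’s a sketch of that combination (assuming NumPy; not part of the original post). Multiplying the scaling matrix by the rotation matrix recovers the single transformation matrix we used above:

```python
import numpy as np

rotate_45 = np.array([[np.sqrt(2)/2, -np.sqrt(2)/2],
                      [np.sqrt(2)/2,  np.sqrt(2)/2]])
scale_root2 = np.array([[np.sqrt(2), 0],
                        [0, np.sqrt(2)]])

combined = scale_root2 @ rotate_45
print(np.round(combined))            # [[ 1. -1.]
                                     #  [ 1.  1.]]
print(combined @ np.array([1, 2]))   # [-1.  3.], matching the result above
```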

Distance to a Line

I’d almost always prefer to solve a problem using what I already know—if that can be done—than learn something I don’t know in order to solve it. After that, I’m happy to see how the new learning relates to what I already know. That’s what I’ll do here. There is a way to use the dot product efficiently to determine the distance of a point to a line, but we already know enough to get at it another way, so let’s start there.

So, suppose we know this information about the diagram at the right: \[\mathtt{p=}\begin{bmatrix}\mathtt{4}\\\mathtt{2}\end{bmatrix}, \,\,\,\mathtt{x=}\begin{bmatrix}\mathtt{2}\\\mathtt{1}\end{bmatrix}, \,\,\,\mathtt{r=}\begin{bmatrix}\mathtt{-1}\\\mathtt{-3}\end{bmatrix}\] And we want to know the distance \(\mathtt{r}\) is from the line.

An equation for the distance of \(\mathtt{r}\) to the line, then—a symbolic way to identify this distance—might be given in words as follows: go to point \(\mathtt{p}\), then travel along the line by some scaling of the slope vector. From that point, travel by some scaling of the vector that is perpendicular to the line until you get to point \(\mathtt{r}\). In symbols, that could be written as: \[\begin{bmatrix}\mathtt{4}\\\mathtt{2}\end{bmatrix}\mathtt{+\,\,\,\, j}\begin{bmatrix}\mathtt{2}\\\mathtt{1}\end{bmatrix}\mathtt{+\,\,\,\,k}\begin{bmatrix}\mathtt{-1}\\\mathtt{\,\,\,\,\,2}\end{bmatrix}\mathtt{\,\,=\,\,}\begin{bmatrix}\mathtt{-1}\\\mathtt{-3}\end{bmatrix}\] With the vector and scalar names, we could write this as \(\mathtt{p + j(p – x) + ka = r}\). The distance to the line depends on our figuring out what \(\mathtt{k}\) is. Once we have that, then the distance is just \(\mathtt{\sqrt{(ka_1)^2 + (ka_2)^2}}\).

We can subtract vectors from both sides of an equation just like we do with scalar values. Subtracting the vector (4, 2) from both sides, we get an equation which can be rewritten as a system of two equations \[\mathtt{j}\begin{bmatrix}\mathtt{2}\\\mathtt{1}\end{bmatrix}\mathtt{+\,\,\,\,k}\begin{bmatrix}\mathtt{-1}\\\mathtt{\,\,\,\,\,2}\end{bmatrix}\mathtt{\,\,=\,\,}\begin{bmatrix}\mathtt{-5}\\\mathtt{-5}\end{bmatrix} \rightarrow \left\{\begin{align*}\mathtt{2j – k = -5} \\ \mathtt{j + 2k = -5}\end{align*}\right.\]

Solving that system gives us \(\mathtt{j = -3}\) and \(\mathtt{k = -1}\). So, the distance of \(\mathtt{r}\) to the line is \(\mathtt{\sqrt{5}.}\)
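
For anyone who wants to check that arithmetic, here’s a small sketch (assuming NumPy) that solves the system for j and k and then computes the length of ka:

```python
import numpy as np

# Columns are the slope vector p - x = (2, 1) and the perpendicular vector a = (-1, 2).
M = np.column_stack([[2, 1], [-1, 2]])
j, k = np.linalg.solve(M, np.array([-5, -5]))   # right-hand side is r - p
print(j, k)                                      # -3.0 -1.0

a = np.array([-1, 2])
print(np.linalg.norm(k * a))                     # 2.236..., which is sqrt(5)
```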

Can We Get to the Dot Product?

Maybe we can get to the dot product. I’m not sure at this point. But there are some interesting things to point out about what we’ve already done. First, we can see that the vector \(\mathtt{j(p-x)}\) is a scaling of vector \(\mathtt{(p-x)}\) along the line, which, when added to \(\mathtt{p}\), brings us to the right point on the line where some scaling of the perpendicular \(\mathtt{a}\) can intersect to give us the distance. The scalar \(\mathtt{j=-3}\) tells us to reverse the vector (2, 1) and stretch it by a factor of 3. Adding to \(\mathtt{p}\) means that all of that happens starting at point \(\mathtt{p}\).

Then the scalar \(\mathtt{k=-1}\) reverses the direction of \(\mathtt{a}\) to take us to \(\mathtt{r}\).

We can then use this diagram to at least show how the dot product gets us there. We modify it a little to include the parts we will need and talk about.

Okay, here we go. Let’s consider the dot product \(\mathtt{-a \cdot (r – p)}\). We know that since \(\mathtt{-a}\) and \(\mathtt{x-p}\) are perpendicular, their dot product is 0, but this is \(\mathtt{r-p}\), not \(\mathtt{x-p}\). So, \(\mathtt{-a \cdot (r – p)}\) will likely have some nonzero value. Their dot product is this \[\mathtt{-a \cdot (r – p) = |-a||r-p|\textrm{cos}(θ)}\] We got this by rearranging the formula we saw here.

We also know, however, that we can use the cosine of the same angle in representing the distance, d: \[\mathtt{d=|r-p|\textrm{cos}(θ)}\]

Putting those two equations together, we get \(\mathtt{d = \frac{-a \cdot (r – p)}{|-a|}}\).

We can forget about the negative in front of \(\mathtt{a}\). But you may want to play around with it to convince yourself of that. A nice feature of determining the distance this way is that the distance is signed. It is negative below the line and positive above it.
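
Putting the whole dot-product shortcut in one place, here’s a sketch (assuming NumPy) that reproduces the signed distance for the numbers in this post:

```python
import numpy as np

p = np.array([4, 2])
x = np.array([2, 1])
r = np.array([-1, -3])

slope = p - x                          # (2, 1), along the line
a = np.array([-slope[1], slope[0]])    # (-1, 2), perpendicular to the line

d = a @ (r - p) / np.linalg.norm(a)
print(d)   # -2.236..., i.e. -sqrt(5): negative because r is below the line
```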

Dot Product Deep(ish) Dive

The dot product is helpful in finding the distance of a point to a line. The dot product, as we mentioned here, is the sum of the element-wise products of the vector components. Given two vectors \(\mathtt{v}\) and \(\mathtt{w}\), their dot product is \[\begin{bmatrix}\mathtt{v_1}\\\mathtt{v_2}\end{bmatrix} \cdot \begin{bmatrix}\mathtt{w_1}\\\mathtt{w_2}\end{bmatrix}\mathtt{= v_1w_1 + v_2w_2}\]

The result of this computation is not another vector, but just a number, a scalar quantity. And, given that the dot product of two perpendicular vectors is 0, it would be nice if the dot product were related to cosine in some way, since the cosine of 90° is also 0. So let’s take a look at some vector pairs and their dot products and think about any patterns we see.

\(\mathtt{v \cdot w=-4}\)     \(\mathtt{θ=180^{\circ}}\)     \(\mathtt{\textrm{cos}(θ)=-1}\)

\(\mathtt{v \cdot w=-2}\)     \(\mathtt{θ=120^{\circ}}\)     \(\mathtt{\textrm{cos}(θ)=-\frac{1}{2}}\)

\(\mathtt{v \cdot w=4}\)     \(\mathtt{θ=45^{\circ}}\)     \(\mathtt{\textrm{cos}(θ)=\frac{\sqrt{2}}{2}}\)

\(\mathtt{v \cdot w=2}\)     \(\mathtt{θ=60^{\circ}}\)     \(\mathtt{\textrm{cos}(θ)=\frac{1}{2}}\)

Well, so, the dot products have the same signs as the cosines. That’s a start. And in all but one case shown, we can divide the dot product by 4 to get the cosine. What makes the 45° case different?

Each of the vectors shown, with the exception of the vector (2, 2), has a length, a magnitude, of 2. To determine the magnitude, or length, of a vector, you treat the components of the vector as the legs of a right triangle and the vector itself as the hypotenuse. So, \[|\begin{bmatrix}\mathtt{-1}\\\mathtt{\sqrt{3}}\end{bmatrix}|=\sqrt{(-1)^2+(\sqrt{3})^2}=2\]

But the length of (2, 2) is \(\mathtt{\sqrt{8}}\). If we were to give that vector a length of 2, without changing the angle between v and w, then the vector would become (\(\mathtt{\sqrt{2}, \sqrt{2}}\)). And, lo, the dot product would become \(\mathtt{2\sqrt{2}}\), which, when divided by 4, would yield the cosine.

The 4 that we divide by isn’t random. It’s the product of the lengths of the vectors. If we leave the 45° angled vectors alone, the product of their lengths is \(\mathtt{2\sqrt{8}}\). Dividing 4 by this product does indeed yield the correct cosine. So, we have an initial conjecture that the dot product of two vectors v and w relates to cosine like this: \[\mathtt{\frac{v \cdot w}{|v||w|} = cos(θ)}\]

Perpendicular vectors will still have a dot product of 0 with this formula, so that’s good. And we can scale the vectors however we want and the cosine should remain the same—as it should be—though it may take a little manipulation to see that that’s true. But we are still left with the puzzle of proving this conjecture, more or less, or at least demonstrating to our satisfaction that the result is general.
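
A quick numerical check of the conjecture, as a sketch (assuming NumPy, and using vector pairs like the ones pictured above, with one vector taken along (2, 0) as an assumption):

```python
import numpy as np

def cos_between(v, w):
    return (v @ w) / (np.linalg.norm(v) * np.linalg.norm(w))

v = np.array([2, 0])
print(cos_between(v, np.array([-1, np.sqrt(3)])))   # about -0.5, a 120-degree angle
print(cos_between(v, np.array([2, 2])))             # about  0.707, a 45-degree angle
print(cos_between(v, np.array([1, np.sqrt(3)])))    # about  0.5, a 60-degree angle
```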

Although the derivation doesn’t go beyond the Pythagorean Theorem, really, it gets a little symbol heavy, so let’s start with something simpler. We can write the cosine of θ at the right as \[\mathtt{\textrm{cos}(θ)=\frac{|w|}{|v|}}\] If we think of w here as truly horizontal, its length is simply \(\mathtt{v_1}\), the length of the horizontal component of v. Combining this fact with the length of v, we can rewrite the cosine equation above as \[\mathtt{\textrm{cos}(θ)=\frac{v_1}{\sqrt{v_{1}^2+v_{2}^2}}}\]

Since w is horizontal with length \(\mathtt{v_1}\) (that is, \(\mathtt{w = (v_1, 0)}\)), the dot product \(\mathtt{v \cdot w}\) becomes simply \(\mathtt{v_1w_1 + v_2 \cdot 0 = v_{1}^2}\). Dividing this by the product of the lengths of the vectors v and w (where the length of w is just \(\mathtt{v_1}\)), we get this equation for cosine: \[\mathtt{\,\,\,\,\,\textrm{cos}(θ)=\frac{v_{1}^2}{(v_1)(\sqrt{v_{1}^2+v_{2}^2})}}\] And that’s clearly equal to the above. So, while it is by no means definitive, we can have a little more confidence at this point that we have the right equation for cosine using the dot product. We can get more formal and sure about it later. Next time we’ll look at how it can help us determine the distance from a point to a line.

Implicit Equations for Lines

So, I’ve covered parametric lines already. Another form in which we can write equations for lines using linear algebra is implicit form.

The parameter in the parametric form of a line was a scalar \(\mathtt{k}\). We built the parametric form using a position vector to get us to a starting point on the line. Then we added this to the product of the slope vector and the parameter \(\mathtt{k}\) to get all the other points on the line. The implicitness of the implicit form comes from the fact that we build the equation using the slope vector and a vector perpendicular to the slope vector.

I mentioned back here that perpendicular vectors always have a dot product of 0. So, thinking of \(\mathtt{x-p}\) as the (slope) vector of our line, then \(\mathtt{a \cdot (x-p) = 0}\). With the actual values shown here, we have \[\begin{bmatrix}\mathtt{-1}\\\mathtt{\,\,\,\,3}\end{bmatrix} \cdot \begin{bmatrix}\mathtt{3}\\\mathtt{1}\end{bmatrix}\mathtt{ = (-1)(3) + (3)(1) = 0}\] If we fix one point \(\mathtt{p}\) on the line, then the dot product equation is true for any point \(\mathtt{x}\) on the line and identifies a unique line. Let’s represent all the parts here as vectors, and more generally. \[\begin{bmatrix}\mathtt{a_1}\\\mathtt{a_2}\end{bmatrix} \cdot (\begin{bmatrix}\mathtt{x_1}\\\mathtt{x_2}\end{bmatrix} \mathtt{- }\begin{bmatrix}\mathtt{p_1}\\\mathtt{p_2}\end{bmatrix}) \mathtt{\,\,= 0 \rightarrow a_1x_1 + a_2x_2 + (-a_1p_1 – a_2p_2) = 0}\]

What’s cool about this equation is that we are all familiar with its form, \(\mathtt{ax_1 + bx_2 + c = 0}\), so long as we let \(\mathtt{a = a_1, b = a_2,}\) and \(\mathtt{c = -a_1p_1 – a_2p_2}\). This is what is called the general form or standard form of a linear equation. Even more interesting is that the coefficients in this form help to describe a vector perpendicular to the line.

Knowing the above, and letting \(\mathtt{x:=x_1}\) and \(\mathtt{y:=x_2}\), we can write the equation for the line at the right as \[\mathtt{4x+3y-(4)(-5)-(3)(6)=0}\] And then, working out that c-value, we get \(\mathtt{4x + 3y + 2 = 0}\). The vector a = (4, 3) is perpendicular to the line, and its components give the line’s slope as the ratio –4 : 3, that is, \(\mathtt{-\frac{a_1}{a_2}}\).
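
As a small sketch (plain Python, using the a = (4, 3) and p = (–5, 6) from this example), here is the conversion from a perpendicular vector and a point to general form:

```python
def general_form(a, p):
    """Coefficients (A, B, C) of Ax + By + C = 0 for the line through point p with perpendicular vector a."""
    c = -(a[0] * p[0] + a[1] * p[1])     # C = -a1*p1 - a2*p2
    return a[0], a[1], c

print(general_form((4, 3), (-5, 6)))     # (4, 3, 2)  ->  4x + 3y + 2 = 0
```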

Now we can easily slide back and forth between linear algebra and plain old current high school algebra with certain linear equations.


Line Segments with Linear Algebra

We saw last time that the parametric equation of a line is given by \(\mathtt{l(k) = p + kv}\), where p is a point on the line (written as a vector), v is a free vector indicating the slope of the line, and k is a scalar value called the parameter. Turning the knob to change k gives you different points on the line. At the right is the line

\[\mathtt{l(k) =} \begin{bmatrix}\mathtt{1}\\\mathtt{3}\end{bmatrix} + \begin{bmatrix}\mathtt{\,\,\,\,1}\\\mathtt{-1}\end{bmatrix}\mathtt{k}\]

Substituting different numbers for k gives us different points on the line. These resolve into position vectors.

This setup makes it fairly easy to make a line segment, and to partition that line segment into any ratio you want (this will be our ‘current high school connection’ for this post).

When \(\mathtt{k = 0}\), we get our position vector back: \(\begin{bmatrix}\mathtt{1}\\\mathtt{3}\end{bmatrix}\). This is the point (1, 3). When \(\mathtt{k = 1}\), we have … \[ \begin{bmatrix}\mathtt{1}\\\mathtt{3}\end{bmatrix} + \begin{bmatrix}\mathtt{\,\,\,\,1 \cdot 1}\\\mathtt{-1 \cdot 1}\end{bmatrix} = \begin{bmatrix}\mathtt{1 + 1}\\\mathtt{3 + (-1)}\end{bmatrix}\]

… which is the point (2, 2). And so on to generate all the points on the line. To generate a line segment from (1, 3) to, say, the point where the line crosses the x-axis, we first have to figure out where the line crosses the x-axis. We can do this by inspection to see that it crosses at (4, 0), but let’s set it up too. We start by setting the line equal to the point (x, 0) and solving the resulting system of equations: \[\begin{bmatrix}\mathtt{1}\\\mathtt{3}\end{bmatrix} + \begin{bmatrix}\mathtt{\,\,\,\,1}\\\mathtt{-1}\end{bmatrix}\mathtt{k} = \begin{bmatrix}\mathtt{x}\\\mathtt{0}\end{bmatrix} \rightarrow \left\{\begin{array}{c}\mathtt{1+k=x}\\\mathtt{3-k=0}\end{array}\right.\]

Adding the equations, we get x = 4, so (4, 0) is indeed where the line crosses the x-axis. To generate points on the line segment from (1, 3) to (4, 0), we use position vectors for both endpoints. Then we can use what’s called a convex combination of the two endpoints—which is just extremely fancy wording for a weighted sum whose coefficients are between 0 and 1 and add up to 1. We scale the second position vector, (4, 0), by some k and we scale the first position vector, (1, 3), by 1 – k. \[\mathtt{l(k) =} \begin{bmatrix}\mathtt{1}\\\mathtt{3}\end{bmatrix}\mathtt{(1-k)} + \begin{bmatrix}\mathtt{4}\\\mathtt{0}\end{bmatrix}\mathtt{k}\]

Want the line segment divided into fifths? Then just use k values from 0 to 1 in steps of one fifth, as in the sketch below. Swap the k and 1 – k coefficients to get a different “direction” of partitioning of the line segment.
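
Here’s a sketch of that partitioning (assuming NumPy; the endpoints are the ones from this post):

```python
import numpy as np

p = np.array([1, 3])     # first endpoint
q = np.array([4, 0])     # second endpoint, where the line crosses the x-axis

for k in np.linspace(0, 1, 6):        # k = 0, 1/5, 2/5, ..., 1
    print((1 - k) * p + k * q)        # points dividing the segment into fifths
```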


Lines the Linear Algebra Way

Let’s continue with the idea of reinterpreting some high school algebra concepts in the light of linear algebra. For example, we learn even before high school in some cases that a line on a coordinate plane can be defined by two points or it can be defined by a point and the slope of the line.

When we have two points, \(\mathtt{(x_1, y_1)}\) and \(\mathtt{(x_2, y_2)}\), we can determine the slope with \[\mathtt{\frac{y_2 – y_1}{x_2 – x_1}}\]

and then do some substitutions to work out the y-intercept.

The linear algebra way uses vectors, of course. And all we need is a point and a vector to define a line. Or, really, two vectors, since the point can be described as a position vector and the slope is also a vector.

We have the line here defined as a vector plus a scaled vector—scaled by k. (See here for adding vectors and here for scaling them.) \[\color{brown}{\begin{bmatrix}\mathtt{1}\\\mathtt{3}\end{bmatrix}} + \color{blue}{\begin{bmatrix}\mathtt{\,\,\,\,1}\\\mathtt{-1}\end{bmatrix}\mathtt{k}}\] That second, scaled, vector looks like it could do the job of defining the line all by itself, but free vectors like that don’t have a fixed location, so we need a position vector to “fix” that. In general terms, thinking about the free vector as extending from \(\mathtt{(x_1, y_1)}\) to \(\mathtt{(x_2, y_2)}\), we can write the equation for a line as \[\mathtt{l(k) = }\begin{bmatrix}\mathtt{x_1}\\\mathtt{y_1}\end{bmatrix} + \begin{bmatrix}\mathtt{x_2 – x_1}\\\mathtt{y_2 – y_1}\end{bmatrix}\mathtt{k}\] That form is called the parametric form of an equation and can be written as \(\mathtt{l(k) = p + kv}\), where p is a point (or position vector), v is the free vector, and k is a scalar value—the parameter that we change to get different points on the line.

Let’s put this into the context of a (reworded) word problem:

In 2014, County X had 783 miles of paved roads. Starting in 2015, the county has been building 8 miles of new paved roads each year. At this rate, if n is the number of years after 2014, what function gives the number of miles of paved road there will be in County X? (Assume that no paved roads go out of service.)

The equation we’re after is \(\mathtt{f(n) = 783 + 8n}\). As a vector function this can be written as \[\mathtt{f(n) = }\begin{bmatrix}\mathtt{0}\\\mathtt{783}\end{bmatrix} + \begin{bmatrix}\mathtt{1}\\\mathtt{8}\end{bmatrix}\mathtt{n}\] We can see here, perhaps a little more clearly with the vector representation, that our domain is restricted by the situation. Our parameter n is, at the very least, a nonnegative real number, and really a nonnegative integer.
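
As a sketch (assuming NumPy), the vector form of the function is easy to write and evaluate:

```python
import numpy as np

def f(n):
    """Position vector (n, miles of paved road) n years after 2014."""
    return np.array([0, 783]) + np.array([1, 8]) * n

print(f(0))   # [  0 783]
print(f(3))   # [  3 807], since 783 + 8*3 = 807 miles in 2017
```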

It seems to me that here is at least one other example of a close relationship between linear algebra and current high school algebra instruction that would make absorbing linear algebra into some high school material feasible.


Zukei and Dot Products

Zukei puzzles that ask students to find right triangles seem to rely on an understanding of perpendicularity that is situated more comfortably in linear algebra than in Euclidean geometry. Consider the following, which has a hidden isosceles right triangle in it. Your job is to find the vertices of that isosceles right triangle.

[Zukei puzzle grid]

High school students would be expected to look at perpendicularity either intuitively—searching for square corners—or using the slope criteria, that perpendicular lines have slopes which are negative reciprocals of each other. But it seems a bit much to start treating this puzzle as a coordinate plane and determining equations of lines.

The Dot Product

In all fairness, the dot product is a bit much too. Instead, we can operationalize slopes with negative reciprocals by, for example, starting from any point, counting 1, 2, . . . n to the left or right and then 1, 2, . . . n up or down to get to the next point. From that point, we have to count left-rights in the way we previously counted up-downs and up-downs in the way we counted left-rights, and we have to reverse one of those directions. For the puzzle above, we count 1, 2 to the right from a point and then 1, 2 up to the next point. From that second point, it’s 1, 2 right and then 1, 2 down. It’s a little harder to see our counting and direction-switching rule at work when the slopes are 1 and –1, but, in the Zukei context at least, the slopes have to be 1 and –1, I think, to get an isosceles right triangle if we’re not talking about square corners.

This kind of counting is really treating the possible triangle sides as vectors. And, with perpendicular vectors, we can see that we can get something like one of these two pairs (though perpendicular vectors don’t have to look like this): \[\mathtt{\begin{bmatrix} x_1\\x_2 \end{bmatrix} \textrm{and} \begin{bmatrix} -x_2\\ \,\,\,\,x_1 \end{bmatrix} \textrm{or} \begin{bmatrix} x_1\\x_2 \end{bmatrix} \textrm{and} \begin{bmatrix} \,\,\,\,x_2\\-x_1 \end{bmatrix}}\]

The dot product is defined as the sum of the element-wise products of the vector components. In the case of perpendicular vectors, the dot product is 0. Here is the dot product of our vectors: \[\mathtt{(x_1 \cdot -x_2) + (x_2 \cdot x_1) = 0}\]

Some Programming

One reason why this way of defining perpendicularity (with a single value) is helpful is that we avoid nasty zero denominators and, therefore, undefined slopes. With the two vectors at the right, we get \[\mathtt{\begin{bmatrix}0\\5\end{bmatrix} \cdot \begin{bmatrix}4\\0\end{bmatrix} = (0)(4) + (5)(0) = 0}\]

We can take all of the points and run them through a program to find all the connected perpendicular vectors. The result ((0, 1), (1, 0)), ((0, 1), (2, 3)) below means that the vector connecting (0, 1) and (1, 0) and the vector connecting (0, 1) and (2, 3) are perpendicular.

This gives us all the perpendicular vector pairs, though it doesn’t filter out those vectors with unequal magnitudes, which we wanted in order to identify the isosceles right triangle.
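
Here’s a minimal sketch of such a program in Python. Only the points (0, 1), (1, 0), and (2, 3) come from the discussion above; the rest of the point list is made up for illustration:

```python
from itertools import combinations

# Hypothetical puzzle points; only (0, 1), (1, 0), and (2, 3) appear in the post.
points = [(0, 1), (1, 0), (2, 3), (3, 1), (1, 2)]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

perpendicular_pairs = []
for shared in points:
    for p, q in combinations([pt for pt in points if pt != shared], 2):
        u = (p[0] - shared[0], p[1] - shared[1])   # vector from the shared point to p
        v = (q[0] - shared[0], q[1] - shared[1])   # vector from the shared point to q
        if dot(u, v) == 0:                         # perpendicular: dot product is zero
            perpendicular_pairs.append(((shared, p), (shared, q)))

print(perpendicular_pairs)
# includes (((0, 1), (1, 0)), ((0, 1), (2, 3))) among any others
```

Filtering for the isosceles case would just mean also requiring the two perpendicular vectors to have equal dot products with themselves, that is, equal squared lengths.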

There are some Zukei solvers available, though I confess I haven’t looked at any of them. No doubt, one or all of them use linear algebra rather than ordinary coordinate plane geometry to do their magic. It’s about time, I think, we start weaving linear algebra into high school algebra and geometry standards.

Solving Zukei puzzles is not the best justification for bringing linear algebra down into high school, of course. But I hope it can be a salient example of how connected linear algebra can be to a lot of high school content standards.