## Guzinta Math 6 (Windows & Mac)

This morning, I wrapped up some final work on compiling all 15 Grade 6 Guzinta Math lesson apps into one Windows application (and a few days later into one Mac application). To try it out, visit the download page.

At the moment, since the application has had few downloads, you may see a warning from Windows SmartScreen or similar security software, which you can safely ignore. Once the application has been downloaded enough times (I don’t know the exact threshold), the warning will go away.

### Modules Zero

My main project for the next year is to build a Module 0 for each lesson. At the moment, there is some useful but mostly placeholder material there. Here is a look at the lesson homepage for Fraction by Fraction Division.

What will go in Module 0? Instruction inspired by the principles of variation theory. This Module 0 will not be monitored by the Practice Meter, and will serve to provide students a scaffold into the learning in the other modules as well as provide a differentiation tool if educators should need it.

### Grades 7–8 Package and Other Platforms

Before writing the zero modules, however, I’d like to finish updating the lesson apps to Version 5.0. This version includes the zero and fourth modules, and has a bit slicker look. As of this writing, I’ve got 14 lesson apps in Grades 7–8 to update. These will all be updated as individual Chrome apps first and placed on the Chrome Web Store for download to Chromebooks.

Once these are updated to Version 5.0, I can package them together (all 16 lesson apps for Grades 7–8) into one application, just as was done for Grade 6. (Done.)

### Other Enhancements

I’ve got a lot of different enhancements in mind as well. A version for classroom use—where there is a single computer or just a few—with student login would be pretty simple to knock out; that way, individual Practice Meters could still be tracked while students share one copy of the application. (Done.) Timestamping and individualizing end-of-module certificates is also on my mind for the future (Done.), as is a repository of practice problems.

What else? A version completely online has been mentioned. I would also like to put together some how-tos and “better” practices documentation for different use cases (home school, school and home, classroom only, etc.). New content for statistics and probability standards, correlations to popular curricula. So many things that will have to wait for another day!

### Notes

Another task on my list is to build up the information around the lessons. For example, as mentioned above, each of the lesson apps in this package (and in future packages) has a Module 0 that is not tracked via the Practice Meter, although it does contain questions with correct answers. This will not change—students should have at least one fairly robust module in each lesson that isn’t about being assessed in some way. Even if these Modules Zero are used less frequently, it’s important to have them: for reference, as a way to test yourself before diving into the Practice Meter work, and so on.

In some cases, too, the fourth module of an app is not linked to the Practice Meter. These are usually exploratory modules. If a Module 4 has questions whose answers can be checked (by submitting an answer and pressing the check button), then it is linked to the Practice Meter. Otherwise, it is not.

## Variation and Example Spaces

I’ve been thinking a lot about Craig Barton’s wonderful book How I Wish I’d Taught Maths and have been scanning three of his new websites (Variation Theory; Same Surface, Different Deep Problems; and Maths Venns), as well as some research and other books on variation, and a lot of online commentary, in anticipation of starting to implement these ideas in some way.

### Writing Algebraic Expressions

As I was reading the last page of Mr Barton’s Book, I was working on instruction around writing algebraic expressions, so this is the topic kind of hovering next to me wherever I go, waiting for when I have time to dig in. This topic is a little more fraught than the purely procedural examples that have been circulating, so it’s worth exploring how variation can be applied to something a little looser.

What does writing algebraic expressions involve (for a beginner)? Well, if I force myself to ignore what other people think writing algebraic expressions involves (essentially ignoring standards and any written material on the topic), then I would say that writing algebraic expressions means to write something like s + 2 or 2 + s when presented with a question like “How old in years will Sam be in exactly two years?”1

This, then, I would call the first example in my example space. Or, rather, an example of an example in the example space—because, if this example is any good, then I will use it as an instructional example to start and leave it out of variation work, which is about PRACTICE, not instruction.2 So: something like the following, using the brilliantly simple Silent Teacher method mentioned in Barton’s book (and a few other places), though without the natural pauses and the instruction for students to copy down the correct worked example that would be part of a normal classroom implementation.

### Try This One

Write an algebraic expression to model the situation.

How old in years was Sam exactly 10 years ago?

I would include a follow-up to this process, here involving a discussion around (a) the idea that the resulting algebraic expression represents an answer to the question of how old Sam will be—it’s just that one part of that expression is not known, (b) asking students to check that the answer makes sense, here by substituting different values for s and comparing the result to the situation, (c) the idea that any letter can be chosen for the variable, and (d) perhaps drawing a visual model of the result (an annotated number line). Some of these could be packaged into the instruction and question above, of course—or perhaps I’ll decide to split this up even more, considering how much “in addition to” I’ve now done about this—but I think that, in general, leaving room for a stepping back step at the end of this is a good idea, to catch the kind of overflow that is difficult to squeeze into expositions like this.

### And Now Enters Variation

The paired problem here has opened up a dimension of variation—using addition or subtraction in the expression, so we can play with that during Intelligent Practice (really love that phrase). Technically, the instruction was open to all four operations, but I think it makes sense to focus exclusively on addition and subtraction, leaving multiplication and division expressions for another round.

Here’s what I cooked up.

1. How much money in dollars did Sam have if he got exactly 10 dollars?
2. How much money in dollars did Sam have if he got exactly 10 cents?
3. How much money in dollars did Sam have if he got exactly 2 dollars?
4. How much money in dollars did Sam have if he lost exactly 10 cents?
5. How much money in dollars did Sam have if he got exactly 1 dollar?
6. How much money in dollars did Sam have if he lost exactly 1 dollar?
7. How much money in dollars did Sam have if he got exactly 50 cents?
8. How much money in dollars did Sam have if he lost exactly 2 dollars?
9. How much money in dollars did Sam have if he got exactly 25 cents?
10. How much money in dollars did Sam have if he didn’t lose or gain any money?

After this, it might be good to have students cut out the strips and place them on a number line.
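Each of the ten variations reduces to s plus or minus a constant. This little sketch (my own, just a checking aid, not part of any lesson app) maps each scenario to its constant and sorts them into the left-to-right order the expressions would take on that number line:

```python
# Each variation reduces to "s plus a constant" (in dollars);
# losing money is a negative constant, no change is zero.
variations = {
    "got 10 dollars": 10.00,
    "got 10 cents":    0.10,
    "got 2 dollars":   2.00,
    "lost 10 cents":  -0.10,
    "got 1 dollar":    1.00,
    "lost 1 dollar":  -1.00,
    "got 50 cents":    0.50,
    "lost 2 dollars": -2.00,
    "got 25 cents":    0.25,
    "no change":       0.00,
}

# Sorting by the constant gives the left-to-right placement of
# s - 2, s - 1, ..., s + 10 on the number line.
for scenario, c in sorted(variations.items(), key=lambda kv: kv[1]):
    sign = "+" if c >= 0 else "-"
    print(f"s {sign} {abs(c):<5} <- {scenario}")
```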

It’s interesting how much my experience and training rebels against this process. What I want to get to, right away, are the difficult and ambiguous situations. In particular, I started with, and then rejected, a variation sequence involving height: How tall in inches will Sam be if he grows 2 inches? The subtraction variation is bound to confuse: How tall in inches was Sam if he grew 2 inches? That’s tricky.

But knowing about and looking out for those tricky, ambiguous, and interesting situations can serve you well in creating instructional routines like this. It shows you where you’re going, and your example space can be richer and broader. And if you’re serious about implementing minimally different variation like this, it shows you how far away your knowledge really is from a beginner’s. You just have to learn to have more sympathy for learners who are encountering for the first time mathematics that you’ve seen a gazillion times.

1. It’s important to me—at the moment, at least—that the examples in this example space should also involve identifying the correct unknown, rather than simply recording the unknown, as would happen with a question like, “Sam is s years old. How old will he be in 2 years?” or with an exercise of the form “2 more than a number.” In both of these cases, the unknown is entirely exposed.
2. This is an important aspect of variation that I worry will be lost on U.S. teachers. Intelligent practice can’t happen, beneficially, until some acquisition has happened. In 20 years, I haven’t seen a robust public discussion about acquisition. The rhetoric around instruction in the States treats it as just one long assessment, though almost no one realizes that’s what it has become.

## Mr Barton’s Book

It wasn’t too long ago—not even three years—that I finished reading David Didau’s terrific book (this one), so I still remember the excitement that I felt reading it, and watching all of the silly certainties of common wisdom in education being dismantled in front of my eyes, making way—I could only hope—for pedagogical practices informed by a real science of learning.

I felt a similar excitement reading Craig Barton’s book How I Wish I’d Taught Maths, because in this book, at long last, are many of those practices in one place, constructed, as readers will see, next to the debris of familiar canards and shallow reasoning that once guided parts of Barton’s teaching.

It is not a book full of proclamations about “best” practice. But you will find in this book a beautiful translation of the science of learning to the classroom. And far from the drudgery that one may imagine this to be, the joy of effective explicit instruction, for both teacher and students, comes through in every chapter of the author’s writing. It is serious, thorough, humble, and humane. And accessible: perhaps the greatest pleasure in reading it is knowing that you could turn around and start to implement many of these practices in short order—or, perhaps, that you already do these things, but don’t know why you should stick with them or how you could improve on them.

I have a lot of underlines and margin notes, but I think these three snippets together, from the chapter on problem solving and independence, are my favorites. The section starts, as they all do, with what the author used to think:

I used to love the sight of my students struggling through problems. Scratching heads, heavy sighs, and even the snap of a pencil thrown down in frustration were the soundtrack to learning. . . .

And then we are introduced to one of these problems, Question 23 from this paper (PDF), along with a deep concern for how novices will handle it. Contrast Barton’s new diagnosis below with common wisdom—that students ask why they are doing math because it is boring, tedious, procedural, or not relevant to their lives.

The task of choosing cards and calculating their totals may prove so cognitively demanding that novices do not have any spare cognitive capacity to recognise patterns. They do not realise that it is not the actual totals that matter, but whether those totals are odd or even. They just carry on regardless. Moreover, students are so consumed with the minutiae of the problem that no cognitive capacity remains to consider the global picture—why are they doing this? The result is that the novices may end up with an assortment of lists and totals, but not actually do anything with it—the fact that this is a probability question was pushed out of working memory long ago when the first set of cards was being processed.

As you might imagine, since the diagnosis is different from that received from common wisdom, the prescribed treatment is different too:

Before I set students off to work independently, I ensure they have enough domain-specific knowledge to solve problems on their own.

Although the snippets above are certainly grist for my mill, How I Wish I’d Taught Maths is not an ideological tome. It is eminently practical, taking the best ideas from all corners of the educational universe, squeezing them through the filter of cognitive science, and setting them in the right proportion to create a firm foundation that any educator—and especially any math educator—can use and build on. I highly highly recommend it to anyone who wants to strive for better in teaching and learning.

## Proprioceptive Knowledge

This paper (\$) was a nice read, with some fresh (to me) insights about discovery, instruction, and practice. There are many points in it where I don’t see eye to eye with the author, but those parts of the text are, thankfully, brief. I took away some new thoughts, at any rate, the most robust of which was an analogy between learning and proprioception, as the title suggests.

Here are the two main ideas as I see them, involving a very healthy amount of paraphrasing and extrapolation on my part:

1. An aspect of your learning over any topic that deserves attention from instruction is your subjective, first-person thinking with the material taught.
2. Good instruction not only manipulates you into knowing something, but enlists your cooperation in doing so.

### Proprioception

Proprioception is the basic human sense of where your body parts are in space and the sense of your own movement in that space (i.e., you don’t use ‘touch’ to know where your left hand is in space; this is proprioception). For learning something abstract like adding fractions with unlike denominators, we might think of proprioceptive knowledge as what it is LIKE to add fractions with unlike denominators—physically, cognitively, etc. Certainly carrying out the computations procedurally is an important part of “what it is like” but there are many others, including “what it is like” to identify situations calling for working out common denominators.

The paper uses as a candidate for proprioceptive knowledge (although the author doesn’t call it that) an example of working long division to produce repeating decimals. Students are instructed, with an example, that for any number you write as an integer over an integer, the decimal digits will either be repeating zeros or a repeating pattern of some other kind. Students use practice, however, to gain access to the proprioceptive dimension of this instruction—the experience that this is indeed the case; a first-person view of the knowledge. It is not that they are not told why the digits repeat (there are only finitely many possible remainders, so the sequence of remainders must eventually cycle) during the instruction with the example. They are told this. And it’s not necessarily the case that the students don’t understand what they have been told. It’s just that the first-person experience of this is an important node in the constellation of connections that constitute the schema of understanding rational numbers.
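To make the why concrete, here is a small sketch of my own (not the paper’s) of long division tracked by its remainders; because only finitely many remainders exist, either the division terminates or a remainder recurs and the digits cycle:

```python
def decimal_digits(numerator, denominator):
    """Long division, tracking remainders.

    Returns the digits after the decimal point (up to the first
    repeated remainder) and the position where the cycle begins,
    or None if the decimal terminates.
    """
    digits = []
    seen = {}  # remainder -> position where it first appeared
    r = numerator % denominator
    while r != 0 and r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(r // denominator)
        r %= denominator
    cycle_start = seen.get(r)  # None when r == 0 (terminating)
    return digits, cycle_start

# 1/7 = 0.142857142857...: remainder 1 recurs after six digits.
print(decimal_digits(1, 7))  # ([1, 4, 2, 8, 5, 7], 0)
# 1/4 = 0.25: the division terminates, so no cycle.
print(decimal_digits(1, 4))  # ([2, 5], None)
```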

Indeed, I would argue that the explicit instruction is absolutely necessary in this example (and almost all other examples). It allows the student to connect his newly gained first-person proprioceptive conceptual knowledge about division and rational numbers with what he understands to be experts’ experience and see them as the same. In other words, the explicit instruction did not take away the student’s ability to discover something for himself, as the idiotic trope goes; on the contrary, it was necessary to facilitate the student’s discovery. Cristina says it well:

Cristina [the teacher in the example] did not consider it a crime to “tell secrets”—like the author, she believed that students have to work to figure them out anyway.

### Making Connections

This perspective helps me make some intuitive sense of why, for example, retrieval practice may be so effective. Testing yourself gives you the first-person experience of—the proprioceptive sense of—what it is like to remember something successfully and consistently (and it ain’t as easy as you think it is before you do it). This knowing-what-it-is-like episodic knowledge is a knowing like any other, but it is one that can be easily neglected when dealing with cognitive skills and subject matter.

Or, rather than neglect proprioceptive knowledge, we tend to make it dramatically different from the conceptual, such that students don’t connect the two. It is not necessarily the case that our conceptual instruction is anemic (though it certainly is on occasion) but that our procedural instruction is so narrow, and when it is not narrow it is not procedural. As a result, students gain a strong sense of what it is like to find like denominators, for example, but little sense of what it is like to move around in “higher level” aspects of that topic. When it comes to those higher level aspects, we have done no better in education, really, than to just have students do things in order to learn things or just have students learn things in order to do things.

The episodic knowledge angle can also help make some sense out of the power of narrative, as this is a type of out-to-in instruction which contemporaneously solicits an in-to-out response. (Although, it seems to me to push too hard in the latter direction for an ideally holistic type of instruction.)

### How to ‘Do’ Proprioceptive Knowledge

The above suggests some instructional moves. The paper itself suggests building this in-to-out proprioceptive knowledge during practice. Rather than relying on problems which hit only the procedural writing of fractions as terminating or repeating decimals, for example, practice can assist students in “discovering” the explicit conceptual instruction of the lesson from a brand-new, first-person perspective. So, if we want students to understand why rational numbers must have terminating or repeating decimal representations, we need to give them practice that explicitly allows them to ‘feel’ that connection and express it as they work.

For another example, I’m currently working on percents. I certainly want students to know what it’s like to determine a percent of a number, purely procedurally. But I also want them to understand a lot of other things: that p% of Q is greater than Q when p is greater than 100 (because it’s multiplying Q by a number greater than 1) and less than Q when p is less than 100 (because it’s multiplying Q by a number less than 1); and so on. I should think about structuring my practice so that these patterns reveal themselves, and explicitly point students to them.
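The pattern I’d want practice to surface can be stated in a few lines (a sketch of my own, not from any lesson app): the percent is just a multiplier, so whether the result exceeds or falls short of Q is decided by whether p is above or below 100.

```python
def percent_of(p, q):
    # p% of q is the multiplier p/100 applied to q.
    return (p / 100) * q

q = 80
# p > 100: multiplier > 1, so the result exceeds q.
print(percent_of(150, q))  # 120.0
# p < 100: multiplier < 1, so the result falls short of q.
print(percent_of(50, q))   # 40.0
# p = 100: multiplier is exactly 1.
print(percent_of(100, q))  # 80.0
```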

## Do-It-Ourselves Education

When I lived for a year in Germany in high school as a foreign exchange student, I picked up, among many other things, a great quote from my host father: “Die Vorbereitung ist alles.” When said in my first language, it sounds fairly banal: “(The) preparation is everything.”

In both languages, though, the understood meaning of the phrase has a teleological ring: preparation is all-important for the goal you want to accomplish or some particular end you have in mind to achieve or realize. But I prefer a more extreme interpretation of the quote, in particular for education: that there is no achievement track, only a preparation track (multiple tracks in reality).

How do we get to that achievement above from, say, the middle of the preparation track? We don’t (in general). We move along the preparation track until we are in close enough proximity to the achievement to grab it. We don’t, in fact, keep our eyes on the prize. We keep our eyes on the preparation needed to move us within striking distance of the prize. Indeed, from way back in the middle of the track, the prize may look more tempting than it will appear close up (and it may be a mirage). And we may not be able to grasp it until we’re a little past it in our preparation.

### Stay on the Right Track

The goal or achievement can be anything, really. So, for example, just cruise Twitter for a bit to find some quotable goal for education. The Feynman quote below is a good example. It is part of a quotation from a letter to a student in 1976, in which Feynman refers to himself in the third person, from The Quotable Feynman:

Just because Feynman says he is pro-nuclear power, isn’t any argument at all worth paying attention to because I can tell you (for I know) that Feynman doesn’t know what he is talking about when he speaks of such things. He knows about other things (maybe). Don’t pay attention to “authorities,” think for yourself.

Okay, great. Sincerely, that’s a great goal. I definitely would like to help students be appropriately mistrustful of authority—to the extent that it stimulates constructive thinking, not just having temper tantrums about authority. Who wouldn’t? So, let’s talk about distrusting authority as a goal for education.

The usefulness of the above interpretation about preparation is that now we must find the image of that goal somewhere along the preparation track and work out how we will connect the beginning and middle of the track to the point where that goal can be attained. Almost instantly we will see that we need to define what we (society) want for students. (For example, students have a lot of authority figures in their lives. Will they interpret the pro-skepticism message in a way that makes them start ignoring what their parents tell them? Does skepticism just mean that they have an ability to say, “I don’t think that’s right” and then never follow up?) But more importantly, we need to think about the steps along the path: What do students need to know first to understand skepticism and how to wield it appropriately? How does that ability progress over time? What knowledge is involved?

I don’t know about you, but when I deliberate on that simple Feynman quote for a while, I think of dozens of different sub-steps I would want to put in place along the preparation track from the goal back to the starting point. And these would probably break down into hundreds of smaller steps. Balancing appropriate skepticism—actionable skepticism, not armchair, consequence-free questioning—with the absolute necessity in modern life of trusting experts and authorities is lifelong work for adults who take on that challenge. If we want it to be an explicit goal for students—and not just a slogan we pass around on social media—then it will require a lot of work and technical planning.

My wish for 2018 and beyond is that, in addition to wanting these kinds of big things for students, we realize the hard, technical, scientific work involved in doing those things ourselves. Let’s leave behind the childish idea that, in order for students to achieve X, they just have to do X. That works for small things, not for anything worth having.

It is not science to know how to change centigrade to Fahrenheit. It’s necessary, but it is not exactly science. In the same sense, if you were discussing what art is, you wouldn’t say art is the knowledge of the fact that a 3-B pencil is softer than a 2-H pencil. It’s a distinct difference. That doesn’t mean an art teacher shouldn’t teach that, or that an artist gets along very well if he doesn’t know that.

–Richard Feynman

## Underlying, Deep, Critical?

Here’s a very reasonable statement, from this book, on techniques used by researchers to investigate conceptual knowledge of arithmetic:

The most commonly used method is to present children with arithmetic problems that are most easily and quickly solved if children have knowledge of the underlying concepts, principles, or relations. For example, if children understand that addition and subtraction are inversely related operations, even when presented with a problem, such as 354297 + 8638298 – 8638298, they should be able to quickly and accurately solve the problem by stating the first number. This approach is typically called the inversion shortcut.

This borders on the problematic, though (for me at least). Why should ‘underlying’ be a prerequisite for calling something conceptual knowledge, as opposed to plain old knowledge? Even the straightforward addition and subtraction here presumably requires knowing what to do with the numbers and symbols presented in this (likely) novel problem, and thus involves conceptual knowledge of some kind.

Still, it makes some sense to distinguish between knowing how to add and subtract numbers and knowing that adding and then subtracting (or subtracting, then adding) the same number is the same as adding zero (or doing nothing). But the following just a few paragraphs later doesn’t make much sense to me:

The use of novel problems is important. Novel problems mean that children must spontaneously generate a new problem solving procedure or transfer a known procedure from a conceptually similar but superficially different problem. In this way, there is no possibility that children are using the rote application of a previously learned procedure. Application of such a rotely learned procedure would mean that children are not required to understand the concepts or principles being assessed in order to solve the problem successfully.

The biggest problem is that the concept of ‘conceptual knowledge’ of arithmetic laid out here relies on the fact that the “inversion shortcut” is not typically taught as a procedure. But it seems easily possible to train a group of students on the inversion shortcut and then sneak them into a research lab somewhere. After the experiment, the researcher would likely decide that all of the students had ‘conceptual knowledge’ of arithmetic, even though the subjects would be using the “rote application of a previously learned procedure”—something which contradicts the researcher’s own definition of ‘conceptual knowledge’. On a larger scale, instead of sneaking a group of trained kids into a lab, we could emphasize the concept of inversion in beginning arithmetic instruction in schools. If researchers were not ready for this, it would have the same contradictory effect as the smaller group of trained students. If the researchers were ready for it, then the inversion test would have to be thrown out, as they would be aware that inversion would be more or less learned and, thus (for some reason) not qualify as conceptual knowledge anymore.

Second, why should adding and subtracting the numbers from left to right count as an application of a rote procedure (which does not evidence conceptual knowledge) rather than as a transfer of a known procedure from a conceptually similar but superficially different problem (which does show evidence of conceptual knowledge)? The problem is novel and students would be transferring their knowledge of addition and subtraction procedures to a situation also involving addition and subtraction (conceptually similar) but with different numbers (superficially different).

### Clearly I Don’t Get It

I still see the value of knowing the concept of inversion, as described above. A person who notices the numbers above and can solve the problem without calculating (by just stating the first number given) is, most other things being equal, at an advantage compared to someone who can do nothing else but start number crunching (it’s also possible to not notice the equal numbers because you’re tired, not because you lack some as-yet undefined ‘critical thinking’ skill). What constantly perplexes me is why people insist on making something like knowing the inversion shortcut so damned mysterious and awe-inspiring.

You can know how to number crunch. That’s good to know. You can also know how to notice equal numbers and that adding and then subtracting the same value is the same as adding 0. That’s another good thing to know. The latter is probably rarer, but that fact alone doesn’t make it a fundamentally different kind of knowledge than the former. It is almost certainly rarer in instruction than calculation directions, so it should be no surprise that students are weaker on it generally. Let’s work to make it not as rare. A good place to start would be to acknowledge that inversion is not some deep or critical knowledge; it’s just ordinary knowledge that some people don’t know or apply well.
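The two kinds of knowing fit in a toy sketch of my own, using the numbers from the quoted example: one function crunches, the other notices that adding and then subtracting the same value is adding 0, and they agree.

```python
def add_then_subtract(a, b):
    # Number crunching: actually carry out a + b - b.
    return a + b - b

def inversion_shortcut(a, b):
    # Noticing that adding and then subtracting b is adding 0:
    # just state the first number.
    return a

# The numbers from the quoted example problem.
a, b = 354297, 8638298
print(add_then_subtract(a, b) == inversion_shortcut(a, b))  # True
```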

### Coda

The section in question concludes:

Other concepts, such as commutativity, that is if a + b = c then b + c = a, have been investigated, but as they have not received as much research attention it is more difficult to draw strong conclusions from them compared to the concepts of inversion and equivalence. Also, concepts, such as commutativity are usually explicitly taught to children so, unlike novel problems, such as inversion and equivalence problems, it is not clear whether children are applying their conceptual knowledge when solving these problems or applying a procedure that they were taught and the conceptual basis of which they may not understand.

But how does it show that ‘conceptual knowledge’ is applied when we test students on something they haven’t been taught (do not know)? Where is the knowledge in conceptual knowledge supposed to be coming from? As long as it’s not from the teacher, it must be ‘conceptual’?

## Just Some Data

I’ve got nothing much lately. Here’s some data I’ve been playing with from the Department of Ed. It might take a second or two (or ten) to load.

These are data showing school-wide (all grade levels) state-assessment mathematics achievement for over 68,000 schools in the United States for the 14–15 school year. Each point represents a school, and each school’s location on the plot represents (x) the percent of male students at the school scoring at or above proficient and (y) the percent of female students at the school scoring at or above proficient.

You’ll notice some rectangularity to the data. This is due to the fact that many of the percent-proficient values were given as ranges. For each gender reported, I translated the data to the top value of the range. So, if a school reported 50–54 percent at or above proficient for females and 50–54 percent at or above proficient for males, that school would be placed at (54, 54).
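A sketch of how that translation might look in code (the function name and the exact string formats are my assumptions, not taken from the dataset):

```python
def range_top(value):
    """Map a reported percent-proficient value to a single number.

    Values may be exact ("87") or a range ("50-54"); for a range,
    take the top value, so "50-54" becomes 54.
    """
    text = str(value).replace("\u2013", "-")  # normalize en dashes
    if "-" in text:
        return int(text.split("-")[-1])
    return int(text)

print(range_top("50-54"))  # 54
print(range_top(87))       # 87
```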

Another noticeable feature of the plot is that it doesn’t look at all like there are over 68,000 points represented. This is because many of the values are stacked on top of each other. The lightest shade of blue that is present on the plot is the color of every data point, so if you’re seeing dark blue, you’re likely seeing 4 or 5 schools all at one location.

The data cut straight down the middle, as you might expect—perhaps much closer to the middle than might be expected. So, in general, the scores for males and females on state math assessments are very close. The regression line is $$\mathtt{y = 0.9396x + 3.98204}$$, which shows an almost indiscernible advantage for the boys across all these data.

The regression line predicts that, for male percent-proficient values from 0% up to about 66%, female performance would be slightly better; above that point, the prediction is reversed. You can see from the data points that what seems to weigh the line downward is what you might call outlier male–female disparities at the top of the range.
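The crossover falls straight out of the regression equation: the line predicts a female advantage wherever y > x, so setting y = x and solving locates the boundary.

```python
# Regression line from the scatter plot: predicted female
# percent-proficient (y) as a function of male percent-proficient (x).
slope, intercept = 0.9396, 3.98204

# The line predicts y > x below the crossover point where
# slope * x + intercept = x, i.e. x = intercept / (1 - slope).
crossover = intercept / (1 - slope)
print(round(crossover, 1))  # 65.9
```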

## Building Systems

It’s always fun to build things that allow for (a) generative responses and (b) flexibility in responding. This “in action” video from our upcoming lesson app on systems of equations does those two things.

Students are asked to build a linear system with a given solution (generative), and there are an infinite number of ways of doing this (flexibility)!
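The idea behind that generativity can be sketched in a few lines (my own illustration, not the app’s actual code): pick the left-hand coefficients freely, keeping the two lines non-parallel, then compute the constants so the given point satisfies both equations. Infinitely many coefficient choices work.

```python
import random

def build_system(x0, y0):
    """Build a system a1*x + b1*y = c1, a2*x + b2*y = c2
    whose unique solution is (x0, y0)."""
    while True:
        a1, b1, a2, b2 = (random.randint(-5, 5) for _ in range(4))
        if a1 * b2 - a2 * b1 != 0:  # nonzero determinant: lines cross
            break
    # Choose each constant so (x0, y0) satisfies the equation.
    c1 = a1 * x0 + b1 * y0
    c2 = a2 * x0 + b2 * y0
    return (a1, b1, c1), (a2, b2, c2)

eq1, eq2 = build_system(3, -2)
print(eq1, eq2)  # e.g. (2, 1, 4) (-1, 3, -9)
```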

We’re always looking for ways to incorporate generativity and flexibility in students’ work, along with the more typical stuff. It helps make the learning a little more interesting without dispensing with the rigor.

## Transfer and Forgetting

I discovered a paper recently whose title is probably more interesting than its content: Unstable Memories Create a High-Level Representation that Enables Learning Transfer. Quite a thought—that the instability of memory could be advantageous for transfer.

Researchers conducted two experiments, asking participants to learn a word list and then a motor task in the first experiment, and a motor task and then a word list in the second. There were three conditions within each experiment: (1) the word recall and motor task had the same structure (see the supplemental material for how ‘same structure’ was operationalized), (2) the two tasks had different structures, and (3) the tasks had no determined structure.

It’s Not “Transfer”, It’s Domain Similarities

In the first experiment, participants first learned the word list and then their skill at the motor task was measured over three practice blocks. When the word list and motor task were of the same structure, participants did significantly better across the three motor-task practice blocks. Similarly, in the second experiment, after the motor skill was learned, participants who then practiced the word list with a similar structure to the motor task improved significantly more than participants in the other conditions. This improvement on an unrelated though similarly structured task was measured as transfer, and it occurred in both directions.

Somewhat surprisingly, however, this transfer of learning between word and motor tasks (or motor and word tasks) was correlated with a stronger decrease in performance on the original task, when participants were tested 12 hours later. That is, subjects who learned the word list and then successfully transferred that learning to the motor task (because the tasks were of similar structure) showed a sharper decline in their word list recall than subjects in other conditions. The same results appeared in the experiment where subjects first learned the motor task and then the word list.

At first blush, this seems obvious. The subjects who actually transferred their learning saw their learning on the original task displaced by the similarly structured and thus interfering second task. But when researchers inserted a 2-hour interval between the original task and the practice blocks, this decline disappeared—and the transfer learning was no longer present. Thus, it seems that both the similar structure of the two tasks and the weakness of the memory for the first task were responsible for the effective transfer learning. The authors put it this way:

By being unstable, a newly acquired memory is susceptible to interference, which can impair its subsequent retention. What function this instability might serve has remained poorly understood. Here we show that (1) a memory must be unstable for learning to transfer to another memory task and (2) the information transferred is of the high-level or abstract properties of a memory task. We find that transfer from a memory task is correlated with its instability and that transfer is prevented when a memory is stabilized. Thus, an unstable memory is in a privileged state: only when unstable can a memory communicate with and transfer knowledge to affect the acquisition of a subsequent memory.

Forgetting, Spacing, and Transfer

This is intriguing. In some sense, it reinforces results related to the spacing effect. Spacing causes forgetting, which creates “unstable memories.” When learning is revisited after a period of forgetting, it finds this unstable memory in a “privileged state”: a state that allows it to strengthen the connections of the original learning.

But the above also suggests that extending learning for transfer to other situations or to other concepts may be done optimally in concert with spaced practice. In other words, the best time for transfer teaching might be after a space allowing for forgetting.

## Spacing and The Practice Meter

Without a doubt, students need to practice mathematics thoughtfully. Classroom instruction of any kind is not enough. Practicing not only helps to consolidate learning, but it can be a source of good extended instruction on a topic. And in recent years, research has uncovered—or rather re-uncovered—a very potent way to make that practice effective for long-term learning: spacing.

Dr. Robert Bjork here briefly describes the very long history and robustness of the research on the effectiveness of spacing practice:

It seems that not only is spaced practice more effective than so-called “massed” practice, but spaced learning is more effective than massed learning. A recent study by Chen, Castro-Alonso, Paas, and John Sweller, for example, provides some evidence that spaced learning is more effective than massed learning for long-term retention because spaced learning does not deplete working memory resources to the same extent as massed learning.

In one experiment, Chen et al. provided massed and spaced instruction on operations with negative numbers and solving equations with fractions to counterbalanced groups of 82 fourth-grade students (from a primary school in Chengdu, China) in regular classroom settings. In both conditions, students were instructed using three worked example–problem-solving pairs: a worked example was studied and then a problem was attempted, for a total of three pairs (they were not presented together). In the massed condition, these pairs were given back to back in one 15-minute session. In the spaced condition, the same 15 minutes was spread out over 3 days.

In both conditions, a working memory test was administered immediately after the final worked example–problem-solving pair. And immediately following the working memory test, students were given a post-test on the material covered in the instruction. In the massed condition, this post-test occurred at the end of Day 1. In the spaced condition, the post-test occurred at the end of Day 4.

Students in the spaced condition scored significantly higher on the post-test than students in the massed condition. And there were some indications that working memory resource depletion had something to do with these results.

In the absence of…stored, previously acquired information, it was assumed that for any given individual, working memory capacity was essentially fixed. Based on the current data, that assumption is untenable. Working memory capacity can be variable depending not just on previous information stored via the information store, the borrowing and reorganizing, and the randomness as genesis principles, but also on working memory resource depletion due to cognitive effort.

Shorter, Smaller Chunks

Taken together, the research on the spacing effect suggests that both instruction and practice should happen in shorter, smaller chunks spread over time rather than packed together in one session.

As an example of this, here is a video of a module from the lesson app Add and Subtract Negatives. The user runs through this very quickly (and correctly), skipping the video and worked examples on the left side and the student Notes—and a lot of other things that accompany the instructional tool—to demonstrate how the work of this module flows from beginning to end. The Practice Meter is shown in the center of the modules (and instructor notes) on the homepage as a circle with the Guzinta Math logo. If you want to skip most of the video, just forward to the end (2:11) to see how the Practice Meter on the homepage changes after completing a module.

You can see that the Practice Meter fills up to represent the percent of the lesson app a student has worked through (approximately 55% in the video). Although not shown in the video above, hovering over the logo on the homepage reveals this percent. Green represents a percent from 25 up to 80; under 25%, the color is red, and at or above 80%, it is blue.
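The color bands just described amount to a simple threshold function (a sketch of the rule, not the app’s actual code):

```python
def meter_color(percent: float) -> str:
    """Practice Meter color bands: red below 25, green from 25 up to
    (but not including) 80, blue at 80 and above."""
    if percent < 25:
        return "red"
    if percent < 80:
        return "green"
    return "blue"
```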

Whether or not the lesson is used in initial instruction, the Practice Meter fades over time. Specifically, the decay function $$\mathtt{M(t) = C \cdot 0.75^t}$$ is used in the first week since either initial instruction or initial practice to calculate the Practice Meter level, where $$\mathtt{C}$$ represents the current level and $$\mathtt{t}$$ represents the time since the student last completed a module.

In our example above, during the first week after initial instruction or practice, the student’s Practice Meter level of 55 will drop into the red in about 3 days. If she returns to the app in 15 minutes to see a Practice Meter level of 54 and then raises that up to 80 by completing the same module again or a different module (100 is the maximum level at any time), then her Practice Meter level will drop below 25 in about 4 days. If she raises it to 100, it will decay to below 25 in a little less than 5 days.

This fairly rapid decay rate applies only to the first week. After Day 7, and up until Day 28, the decay rate changes to $$\mathtt{M(t) = C \cdot 0.825^t}$$, whether the student practiced during that time or not. This provides some incentive for spacing out practice a little more over time. Mapping this onto our example above, an initial Practice Meter level of 55 would decay to below 25 in a little over 4 days. A level of 80 would decay to below 25 in a little over 6 days, and a level of 100 would take about 7 and a half days to go red.
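The days-to-red figures in the last two paragraphs come from solving $$\mathtt{C \cdot r^t = 25}$$ for $$\mathtt{t}$$. A quick sketch (the function is mine, not the app’s implementation):

```python
import math

def days_to_red(level: float, rate: float, threshold: float = 25.0) -> float:
    """Days t until level * rate**t decays to the red threshold."""
    return math.log(threshold / level) / math.log(rate)

# First week (rate 0.75):
first_55 = days_to_red(55, 0.75)    # about 2.7 days ("about 3 days")
first_100 = days_to_red(100, 0.75)  # about 4.8 days

# Days 7 to 28 (rate 0.825):
later_80 = days_to_red(80, 0.825)   # about 6.0 days
```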

There are also decay rates for 28–90 days and after 90 days. For more information, see this Practice Meter Info page, which comes with the instructor notes in every lesson app.

(Lack of) Implementation Notes

The design of the Practice Meter is such that, if a student does not use a lesson for spaced practice, they will feel no interruption in their use of it. And it is important to implement it in a way that does not create extra responsibilities for students if none are required by their teacher. But if students and parents or students and their teachers do want to implement spaced practice, it can be easy to check in on the Practice Meter every so often, asking students to, say, keep their Practice Meter levels above 25 or above 80—perhaps differentiating for some students to start—at regular check-in intervals.

As always, though, implementing shorter and smaller in both instruction and practice is much more difficult than reading about it in research, especially when current practice or one’s institutional culture may be focused on “more” and more massed instruction and practice. But conclusions about spacing drawn from research are not regal edicts. We can keep them in mind as ideas for better practice and work to implement the ideas in the small ways we can—and then eventually in big ways.

Update: The Learning Scientists’ Podcast features a brief discussion of lagged homework, which definitely connects to what I discuss above. Henri wrote up something about it a few years ago.