It has now been just two years since I reviewed Mr Barton’s stellar first book. I say “just,” in part because the last three weeks during this pandemic have felt like five years, and in part because Barton packs so much into his second book that it is a little surprising he did it in just two years.
The central theme of Reflect, Expect, Check, Explain is using and constructing ‘intelligent’ sequences of mathematics exercises, “providing opportunities to think mathematically.” The intelligence behind these sequences is the way we order and arrange them, allowing for comparison between two or more exercises (reflect), anticipation of what the answer or solution method will be based on the previous answer or solution method (expect), determination of the answer (check), and then an explanation of the connection between the exercises (explain).
Consider, for example, the sequence at left, from early in the book. During reflect, for the first pair of exercises, I can notice that the lower and upper bounds have stayed the same, and the second number line has minor ticks for every second minor tick of the first number line. I can also notice that the sought-after decimal value is at the same location on both number lines. This noticing can lead me to expect that since I identified the missing value for the first number line as 2.6, my answer should be the same for the second number line. It’s possible, though, that I won’t come up with an expectation. In the check phase, I fill in the values for the equal intervals on the second number line, coming up with the value for the question mark. Finally, when I explain, I either have a chance to talk about my earlier expectation and explain why I was off or why my expectation was correct or, if I couldn’t formulate an expectation, I can explain why the question-marked values are the same even though the tick marks are different.
As I move through the sequence, there are really interesting thoughts to have.
- Why did the question-marked values line up when moving from 10 to 5 equal intervals (between Questions 1 and 2) but not when moving from 5 to 4 equal intervals (between Questions 3 and 4)?
- Why does “lining up” fail me in Questions 4, 5, and 6 when it worked between Questions 1 and 2?
- I can’t rely on inspection every time to figure out the intervals. Is there something I can do to make that task simpler?
- Is the question-marked value in Question 9 just the question-marked value in Question 8, divided by 10?
- Can I extend my interval calculator method to decimals?
If this were the entire book, that would be enough for me, to be honest. But Mr Barton devotes an exemplary amount of effort to addressing possible questions and misconceptions about such sequences (the FAQ chapter is excellent) and to explaining how these sequences can both fit into more extensive learning episodes and function in different ways from practice. All the while, the sequences remain the stars of the show.
I highly recommend (again) Mr Barton’s book, especially to math teachers. He outlines in brilliant detail how you can turn a set of boring exercises into a powerful method for soliciting students’ mathematical thinking. No revolution required.
Below are just a few snips from the book that I added to my notebook while reading. These are not necessarily reflective of the entire argument. But after a long day of educhatter, which more often than not reads like an ancient scroll from some monist cult, it is comforting to read these thoughts and know that there is still a place for practical, technical, dispassionate thinking about teaching and learning in the 21st century—a place for waging the cerebral battle, rather than constantly leading with our chin or our hearts.
Teaching a method in isolation and practising it in isolation is important to develop confidence and competence with that method, and indeed, students can get pretty good pretty quickly. But if we do not then challenge them to decide when they should use that method – and crucially when they should not – we deny them the opportunity to identify the strategy needed to solve the problem.
There are two main arguments in favour of teaching a particular method before delving into why it works.
The path to flexible knowledge
The key point that Willingham makes is that acquiring inflexible knowledge is a necessary step on the path to developing flexible knowledge. There is no short cut. The ‘why’ is conceptual and abstract. We understand concepts through examples. The ‘how’ generates our students’ experience of examples. In other words, often we have to do things several times to appreciate exactly how and why they work.
Motivation
As Garon-Carrier et al. (2015) conclude, motivation is likely to be built on a foundation of success, and not the other way around.
The mistake I made for much of my career was trying to fast track my students to this [problem solving] stage. This was partly due to my obsession with differentiation – heaven forbid a child should be in their comfort zone for more than a few seconds – but also based on my belief that problem solving offered some sort of incredible 2-for-1 deal. I thought it would enable my students to practice the basics, whilst at the same time allowing them to develop that magic problem solving skill.
I will again quote John Mason: “It is the ways of thinking that are rich, not the task itself.”
Check out Barton’s online courses, which now include a stellar course on Intelligent Practice.
I’ve started a writing project recently that I’m having a good time working on so far. I’ve called it Scala Math (and on Twitter here) for now, because its central focus is deconstructing concepts and procedures into steps, and la scala is Italian for ‘staircase’. You can see the word at work in ‘escalator’, ‘scale’, etc. Scala is also the name of a programming language. Here are some reasons for that name, which I found online.
Most of the projects I’ve worked on over the past few years have also been ways for me to learn new software languages or libraries. For Geometry Theorems, it was d3. For Scala, it was React—as well as the beautiful, amazing database that a normal person can actually look at and edit and it’s still a database: Airtable.
How It Works: Learn
Every Scala has a display window—where images and videos are shown—and a steps window, where you find the text of the steps, or ‘parts’. These areas are divided by a brain, which I’ll talk about below. When you land on a Scala (this one is Solving Arithmetic Sequences), the first thing shown in the display window is an image presenting a quick snippet of what will be covered. The image shows an essential question at the top. The use case for the snippet is a student wanting a quick reminder about something they are working on, perhaps for homework, without having to search online and wade through tons of stuff that sorta-kinda matches what they want but not really.
The remainder of the section shown at left (called ‘Learn’ mode) is a series of steps (in this case, six), explained with text, audio narration, and the accompanying images that you can see appearing when clicking on each step. The dot navigation at the top shows us that we are on the first screen of this Scala.
Each step card has a button to replay the step, which can be pressed at any time while the step is active, and a button (up arrow) to go to the preceding step.
How It Works: Reflect
As you can see at the end of the video above, there is a Reflection question which calls for a short or extended text response. This is where the audio input on my cell phone comes in handy. Students’ responses are, at the moment, compared to a few ‘correct’ responses that I have written and that others have contributed to. The highest numerical match, on a scale from 0 to 100, is presented as the score, and the corresponding pre-written response is presented as a suggested answer.
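The post doesn’t say how that 0-to-100 match is computed, so here is only a rough sketch of how such a comparison could work, using simple string similarity as a stand-in; the function name and the model answers are made up for illustration.

```python
from difflib import SequenceMatcher

def score_response(student_response, model_responses):
    """Return the best 0-100 match against pre-written model answers,
    along with the model answer to show as the suggested answer.

    Hypothetical sketch: the actual matching algorithm isn't specified
    in the post; SequenceMatcher is just one plausible stand-in.
    """
    best_score, best_model = 0, None
    for model in model_responses:
        # SequenceMatcher.ratio() gives a similarity between 0 and 1
        similarity = SequenceMatcher(
            None, student_response.lower(), model.lower()
        ).ratio()
        score = round(similarity * 100)
        if score > best_score:
            best_score, best_model = score, model
    return best_score, best_model

# Made-up model answers for illustration
models = [
    "Add the common difference to the previous term.",
    "Each term is the previous term plus the common difference.",
]
score, suggestion = score_response(
    "You add the common difference to the term before it.", models
)
```

A production version would likely need something more forgiving than character-level similarity (stemming, or a semantic comparison), but the shape of the interaction—best match becomes the score, its source becomes the suggested answer—would be the same.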
How It Works: Try
After the Learn phase is the Try phase, which consists of example-problem pairs (usually; for a very few cases, so far, stepped-out problems only). Or, more specifically, stepped-out problems followed by not-stepped-out problems. These look a little different from what I typically see as example-problem pairs, where the example and the problem are set side by side. Here, the problem follows the example, and the example is not provided when solving the problem. The typical sequence is shown below.
For the Try and Test phases, it’s always multiple choice, although the plan is to look at other response inputs. When students are logged in, they build up (not earn; see below) points for every question. Right now, it’s just 50 points for each, though that gets cut in half and rounded up to the nearest integer for every incorrect answer. For an item with 3 choices, the lowest possible point total is 13; for an item with 4 choices, it is 7.
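That halving-and-rounding-up rule is easy to check with a few lines of code; the function name and structure here are my own, and only the arithmetic comes from the post.

```python
import math

def try_phase_points(incorrect_attempts, base_points=50):
    """Points for one multiple-choice item: start at 50 and, for each
    incorrect answer, cut the points in half and round up to the
    nearest integer."""
    points = base_points
    for _ in range(incorrect_attempts):
        points = math.ceil(points / 2)
    return points
```

A 3-choice item allows at most two incorrect answers before the right one is found (50 → 25 → 13), and a 4-choice item at most three (50 → 25 → 13 → 7), which matches the minimums quoted above.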
On desktop, students can have the question read aloud via text-to-speech. As far as I know, that hasn’t yet come to mobile as a built-in feature, but I’ll keep my ears open for when it does.
How It Works: Test
Finally, there’s the Test phase. This is typically 4 to 6 questions that are of the same form as the ‘problems’ in the example-problem-pair Try phase. I’m just showing one such question in the video at the right.
When students are logged in, they can earn points by taking the test. The points are built up in both the Learn and Try phases. I have described how the points work for the Try phase above. The Learn phase is simpler: just clicking on a step builds up 100 points. At the moment, no points are tied to the Reflect question.
Once a student reaches the Test phase, the greatest number of points he or she can ‘bank’ is the number he or she has built up over the course of the Learn and Try phases. And the Test phase is fairly high stakes, in that each incorrect answer divides the total possible points to earn in half.
The stars shown on the score modal are awarded based on percent of total points earned. For the lesson shown in this post, the total that can be earned is 1700. So, approximately 560 points is 1 star (33%), 1130 points is 2 stars (66%), and 1360 points is 3 stars (80%).
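The post doesn’t spell out how those thresholds are rounded; one reading that reproduces the quoted numbers exactly is to take 1/3, 2/3, and 80% of the total and floor each cutoff down to the nearest ten. That rounding rule is a guess on my part, sketched below.

```python
def star_thresholds(total_points):
    """Star cutoffs as fractions of the total bankable points:
    1/3 for one star, 2/3 for two, 80% for three.

    Flooring each cutoff down to the nearest ten is an assumption,
    but it reproduces the numbers quoted in the post for a 1700 total.
    """
    fractions = (1 / 3, 2 / 3, 0.8)  # 1, 2, and 3 stars
    return tuple(int(total_points * f // 10) * 10 for f in fractions)

# For the lesson in the post: (560, 1130, 1360)
```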
Finally, to make sure this product connects knowledgeable people with students (whether they be parents or teachers or both) and guards against mindlessly pressing buttons to earn points, there is a final front-and-back activity, wherein students solve a different problem by listing the steps themselves and showing all their work.
A smart defense of any argument for less teacher-directed instruction in mathematics classrooms is to point to the logical connectedness of mathematics as a body of knowledge and suggest that students are capable of crossing many if not all of the logical bridges between propositions themselves, or with minimal guidance.
Such connectedness, it can be suggested, makes mathematics somewhat different from other school subjects. For example, given a student’s conceptual understanding of a fraction as a part-to-whole ratio, which can include his or her ability to represent a fraction with a visual or physical model, it seems to follow logically that he or she can then add two fractions and get the correct sum, so long as the student knows (intuitively or more formally) that addition is about combining values linearly. It doesn’t matter how many prerequisites there are for adding fractions. The suggestion is that once those prerequisites have been met, it is a matter of merely crossing a logical bridge to adding fractions (mostly) correctly.
By way of contrast, a student can’t really induce what happened after, say, the bombing of Pearl Harbor. They have to be informed about it directly. The effects can certainly be narrowed down using common sense reasoning and other domain-specific knowledge. But, ultimately, what happened happened and there is no reason to suspect that, in general, students can make their way through a study of history mostly blindfolded, relying only on logic and common sense.
The example of history brings up an interesting point (to me, anyway) about the example of mathematics, though. Historical consequences from historical causes can be dubbed “inevitable” only after the fact. How can we be sure it is not the same when learning anything, including mathematics? Once you know, conceptually as it were, what adding fractions is, of course it seems to be a purely logical consequence of what fractions are fundamentally. But is this seeming inevitability available to the novice, the learner who is aware of what fractions are but hasn’t ever thought about adding them? The average novice is, after all, where that feeling of logical inevitability has to lie. It is not enough for educated adults to think of something as ‘logical’ after they already know it.
Bertrand Russell argues, in a 1907 essay, that even in mathematics we don’t proceed from premises to conclusions, but rather the other way around.
We tend to believe the premises because we can see that their consequences are true, instead of believing the consequences because we know the premises to be true. But the inferring of premises from consequences is the essence of induction [abduction]; thus the method in investigating the principles of mathematics is really an inductive method, and is substantially the same as the method of discovering in any other science.
So, how can we decide whether some bridge in reasoning is available to and crossable by the average novice? I hope it’s clear that we can’t just figure it out via anecdotes and armchair reasoning. Our intuitions can’t be trusted with this question. And our opinions one way or the other on the matter are not helpful, no matter what they are.
A really nice thing about scientific research is its transparency. Researchers write down the methods they use in their experiments—sometimes in excruciating detail—so that others can try to replicate their work if they choose. And scrutinizable methods allow us and other researchers to think about issues that the original experimenters might have overlooked—or, at least, didn’t mention in their published work.
Every once in a while we come across research which individuals themselves can simulate at home on a computer, even if they don’t have any participants, and this allows us to bring the experiment to life a little more than can be done with text descriptions.
The research I look at in this post is such a study. Students in the study (81 in all, from 7 to 10 years of age) were given an “app” very similar to the one shown below. Play with it a bit by clicking on the animal pictures to see what students were exposed to in this study.
In this study, students were presented with a question and then an explanation answering that question for the 12 animals shown above (images used in the study were different from above). Students rated the quality of explanations about animal biology on a 5-point scale. (In the version above, your ratings are not recorded. You can just click on the image of the rating system to move on.) The audio recorded in the app above uses the questions and explanations from the study verbatim, though in the actual study two different people speak the questions and explanations (above, it’s just me).
As you could no doubt tell if you played around with the app above, some of the explanations are laughably bad. Researchers designated these as circular explanations (e.g., How do colugos use their skin flaps to travel? Their skin flaps help them to move from one place to another). The other, better explanations were identified as mechanistic explanations (e.g., How do thorny dragons use the grooves between their thorns to help them drink water? Their grooves collect water and send the water to their mouths). After rating the explanation, students were then given a choice to either get more information about the animal or to move on to a different animal. Here again, all you get is a screen to click on, and any click takes you back to the main screen with the 12 animals. In the actual study, students were given an even more detailed mechanistic explanation when clicking to get more information (e.g., Thorny dragons have grooves between their thorns, which are able to collect water. The water is drawn from groove to groove until it reaches their mouths, so they can suck water from all over their bodies).
The Curious Case of Curiosity
What the researchers found was that, in general, students were significantly more likely to click to get more information on an animal when the explanation given was circular. And, importantly, students were more likely to click to get more information when they rated the explanation as poor. This behavior—of clicking to get more information—was operationalized as curiosity and can be explained using the deprivation theory of curiosity.
In everyday life, children sometimes receive weak explanations in response to their questions. But what do children do when they receive weak explanations? According to the deprivation theory of curiosity, if children think that an explanation is unsatisfying, then they should sometimes feel inclined to seek out a better answer to their question to bolster their knowledge; the same is not true for explanations appraised as high in quality. To our knowledge, our research is the first to investigate this theory in regards to children’s science learning, examining whether 7- to 10-year-olds are more likely to seek out additional information in response to weak explanations than informative ones in the domain of biology.
But is that really curiosity? Do I stimulate your curiosity about colugos’ skin flaps by not really answering your questions about them? We can more easily answer no to this question if we assume that Square 1 represents students’ wanting to know something about colugos’ skin flaps. In that case, the initial question stimulates curiosity, as it were, and the non-explanation simply fails to satisfy this curiosity, or initial desire for knowledge. The circular explanation has not made them curious or even more curious. They were already curious. Not helping them scratch that itch just fails to move them to Square 2, which is where they wanted to go after hearing the question (knowing something about how colugos’ skin flaps work). The fact that students with unscratched itches were more likely to go to Square 3 is not surprising, since Square 3, for them, was actually Square 2, the square that everyone wanted to get to.
An Unavoidable Byproduct of Quality Teaching
If you are more inclined to believe the above interpretation, as I am, it might seem that we still must contend with the evidence that quality explanations were indeed shown to reduce information-seeking, relative to the levels of information-seeking shown for circular explanations. But this is not necessarily the case. What we see, from this study at least, is that not scratching the initial itch likely caused a different behavior in students than did scratching it. A clicking behavior did increase for students who still had itches, but this does not mean that it decreased for students who had no itch. We have evidence here that bad explanations are recognizably bad. We do not have evidence suggesting that quality explanations make students incurious.
Even if quality explanations do reduce curiosity, though, it seems likely to me that this is simply an unavoidable byproduct of quality teaching, one that can be anticipated and planned for. Explanations are, after all, designed to reduce curiosity, in some sense. What high-quality explanations do—in every scientific field and likely in our everyday lives—is move us on to different, better things to be curious about.