Imitation and the Ratchet Effect


Comparative psychologist Michael Tomasello, in his 1999 book The Cultural Origins of Human Cognition, popularized the now widely adopted metaphor of the “ratchet effect” in human cultural evolution:

Basically none of the most complex human artifacts or social practices—including tool industries, symbolic communication, and social institutions—were invented once and for all at a single moment by any one individual or group of individuals. Rather, what happened was that some individual or group of individuals first invented a primitive version of the artifact or practice, and then some later user or users made a modification, an “improvement,” that others then adopted perhaps without change for many generations, at which point some other individual or group of individuals made another modification, which was then learned and used by others, and so on over historical time in what has sometimes been dubbed “the ratchet effect” (Tomasello, Kruger, and Ratner, 1993). The process of cumulative cultural evolution requires not only creative invention but also, and just as importantly, faithful social transmission that can work as a ratchet to prevent slippage backward—so that the newly invented artifact or practice preserves its new and improved form at least somewhat faithfully until a further modification or improvement comes along.

But the ratchet effect presents us with a bit of a puzzle for children’s learning—or for how we typically think about that learning. One can imagine, for example, a first-generation technology for dividing resources into fair shares in which rocks are used as symbols and moved around into equal groups. Future generations learn this technique and then gradually innovate on it by—again, for example—recognizing that one can divide 18 items into fair shares by first dividing 10 of the items into equal groups and then dividing the remaining 8 into the same number of equal groups, rather than taking and moving around all 18 at once.
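To make that innovation concrete, here is one instance of my own (using two shares, since 10 and 8 both split evenly into two groups):

\[
18 \div 2 = (10 + 8) \div 2 = (10 \div 2) + (8 \div 2) = 5 + 4 = 9
\]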

Even at this stage the challenge of explaining to a new generation of children why one can do this should seem more daunting than explaining the first-generation method. But now throw on top all of the cumulative innovations we can imagine here for analog division across thousands of generations: rocks are eventually replaced by written symbols, contexts where the division process applies proliferate and become more abstract, and a technology is eventually developed (long division) that allows a user to mechanistically divide any number into just about any other without needing to think about the context at all.

All of these developments are positive (or neutral) cultural innovations. But the learner in the one-thousandth generation is not neurologically all that different from the child in the first generation watching rocks being moved around. Yet the more modern student is asked to learn a much more causally opaque process—one that has been refined over millennia, which the child was obviously not there to witness, and one whose moving parts are not intuitively related to a goal. It is much simpler for a child just arriving on the scene to intuit the goal of a tribal elder who is separating 105 beads into 3 equal groups than it is for a very similar and similarly situated modern child to understand the goal of the seemingly random number scrawling associated with long division.
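To see just how mechanical that scrawling is, here is a minimal long-division routine of my own (not anything from Tomasello or the research discussed here) that reproduces the elder’s answer of 35 without ever representing a group of beads:

```python
# A sketch of the standard long-division algorithm: bring down one digit at a
# time, record how many times the divisor fits, and carry the remainder forward.
def long_division(dividend, divisor):
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)       # "bring down" the next digit
        quotient_digits.append(remainder // divisor)  # how many times the divisor fits
        remainder %= divisor                          # what carries to the next step
    return int("".join(map(str, quotient_digits))), remainder

print(long_division(105, 3))  # (35, 0) -- 105 beads make 3 groups of 35
```

Each step is easy to execute and hard to connect to the goal of fair sharing, which is exactly the causal opacity the puzzle turns on.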

So, the puzzle is this: If the process of cumulative cultural evolution has continued to ratchet over time, how has it been maintained over tens of thousands of years when each new generation starts out marginally further from the goal of understanding any given beneficial technology? For the example of division above, we can point to instructional techniques that actually do start with separating rocks (or counters) into equal groups and build up to the more abstract long division algorithm. But this suite of techniques is already a relic. Digital computing has thoroughly taken over this work, and it’s probably safe to say that very few people, adults or children, really know how it works.

If long division is not a salient example for you, you can relate to the feeling of being an ignorant stranger to your own species’ cultural achievements by asking yourself how much you really understand about how toilets work, how cars work, and on and on. Or consider one of the many gruesome examples—described by Joseph Henrich in his book The Secret of Our Success—of what happens when otherwise intelligent and strong people find themselves outside the protections of relevant cultural understandings:

In June 1845 the HMS Erebus and the HMS Terror, both under the command of Sir John Franklin, sailed away from the British Isles in search of the fabled Northwest Passage, a sea channel that could energize trade by connecting western Europe to East Asia. This was the Apollo mission of the mid-nineteenth century, as the British raced the Russians for control of the Canadian Arctic and to complete a global map of terrestrial magnetism. The British admiralty outfitted Franklin, an experienced naval officer who had faced Arctic challenges before, with two field-tested, reinforced ice-breaking ships equipped with state-of-the-art steam engines, retractable screw propellers, and detachable rudders. With cork insulation, coal-fired internal heating, desalinators, five years of provisions, including tens of thousands of cans of food (canning was a new technology), and a twelve-hundred-volume library, these ships were carefully prepared to explore the icy north and endure long Arctic winters.

As expected, the expedition’s first season of exploration ended when the sea ice inevitably locked them in for the winter around Devon and Beechey Islands, 600 miles north of the Arctic Circle. After a successful ten-month stay, the seas opened and the expedition moved south to explore the seaways near King William Island, where in September they again found themselves locked in by ice. This time, however, as the next summer approached, it soon became clear that the ice was not retreating and that they’d remain imprisoned for another year. Franklin promptly died, leaving his crew to face the coming year in the pack ice with dwindling supplies of food and coal (heat). In April 1848, after nineteen months on the ice, the second-in-command, an experienced Arctic officer named Crozier, ordered the 105 men to abandon ship and set up camp on King William Island.

The details of what happened next are not completely known, but what is clear is that everyone gradually died. . . .

King William Island lies at the heart of the territory of the Netsilik, an Inuit population that spent its winters out on the pack ice and its summers on the island, just like Franklin’s men. In the winter, they lived in snow houses and hunted seals using harpoons. In the summer, they lived in tents, hunted caribou, musk ox, and birds using complex compound bows and kayaks, and speared salmon using leisters. The Netsilik name for the main harbor on King William Island is Uqsuqtuuq, which means “lots of fat” (seal fat). For the Netsilik, this island is rich in resources for food, clothing, shelter, and tool-making (e.g., driftwood).

It’s Not the Innovation

What can explain the rapid progress in cumulative cultural achievements in our species (and no others, to the same extent) when each new generation must in many ways “catch up” to the ratcheted accomplishments of the previous ones? Let’s start with what the answer cannot possibly be. Tomasello again:

Perhaps surprisingly, for many animal species it is not the creative component, but rather the stabilizing ratchet component, that is the difficult feat. Thus, many nonhuman primate individuals regularly produce intelligent behavioral innovations and novelties, but then their groupmates do not engage in the kinds of social learning that would enable, over time, the cultural ratchet to do its work (Kummer and Goodall, 1985).

Similarly, Franklin’s men did not turn to cannibalism and eventually succumb to the elements because they lacked creativity or innovation or could not think outside the box.

The reason Franklin’s men could not survive is that humans don’t adapt to novel environments the way other animals do, or by using our individual intelligence. None of the 105 big brains figured out how to use driftwood, which was available on King William Island’s west coast where they camped, to make the recurve composite bows that the Inuit used when stalking caribou. They further lacked the vast body of cultural know-how about building snow houses, creating fresh water, hunting seals, making kayaks, spearing salmon, and tailoring cold-weather clothing.

Innovation, by itself, gets us nowhere. The notion that our culture progresses because our species is endowed with big innovative brains (and we just need to unlock that potential) is nonsense in light of what we know about cultural evolution. In reality, what best explains the ratchet effect is a lot of imitation (solving the more difficult problem of storing and transmitting cultural knowledge) and a little bit of innovation (solving the problem of occasionally generating novel ideas, spread by imitation).

It’s the Imitation

The Inuit who survive and thrive in an environment that killed all of Franklin’s men do so because, like Franklin’s men and like us, they are good imitators within their own cultures (and not very good innovators on average). All of us imitate valuable cultural knowledge without completely understanding what we’re doing. We need this skill precisely because of the ratchet effect. It is simply not possible, in general, to personally innovate solutions that can rival the effectiveness of those built up over thousands of generations, and it is similarly impossible to conceptually understand everything in the world before we need to use it. Thus, we imitate first and understand later. Indeed, “understandings” (or answers to “why” questions) are imitated just as readily as answers to “how” questions, and can be equally causally opaque. If a child asked you why we don’t fly off into space when we jump, your answer would involve copying an understanding—an understanding not of your own devising—about gravity. And you don’t know what gravity is, because no one does.

Lest you think (despite the story about Sir John Franklin) that causal opacity and rapid ratcheting are just a puzzle for tech-rich, conventionally educated, Western cultures in developed countries, here’s Henrich again:

Let’s briefly consider just a few of the Inuit cultural adaptations that you would need to figure out to survive on King William Island. To hunt seals, you first have to find their breathing holes in the ice. It’s important that the area around the hole be snow covered—otherwise the seals will hear you and vanish. You then open the hole, smell it to verify that it’s still in use (what do seals smell like?), and then assess the shape of the hole using a special curved piece of caribou antler. The hole is then covered with snow, save for a small gap at the top that is capped with a down indicator. If the seal enters the hole, the indicator moves, and you must blindly plunge your harpoon into the hole using all your weight. Your harpoon should be about 1.5 meters (5 ft) long, with a detachable tip that is tethered with a heavy braid of sinew line. You can get the antler from the previously noted caribou, which you brought down with your driftwood bow. The rear spike of the harpoon is made of extra-hard polar bear bone (yes, you also need to know how to kill polar bears; best to catch them napping in their dens). Once you’ve plunged your harpoon’s head into the seal, you’re then in a wrestling match as you reel him in, onto the ice, where you can finish him off with the aforementioned bear-bone spike.

Another reason to believe that imitation is (most of) the secret sauce for cultural evolution is that imitation shows up very early and robustly in development. In fact, children engage in what is called overimitation—imitating actions performed by a model even when those actions are obviously causally irrelevant to achieving the model’s goal. Other primates don’t do this. Legare and Nielsen explain this counterintuitive research finding:

Why faithfully copy all of the actions of a demonstrator, even those that are obviously irrelevant? Given the potentially overwhelming number of objects, tools, and artifacts children must learn to use, it is useful to replicate the entire suite of actions used by an expert when first learning how to do something. Some propose that overimitation is an adaptive human strategy facilitating more rapid social learning of instrumental skills than would be possible if copying required a full representation of the causal structure of an event.

Conclusion

There are many takeaways and elaborations that come to mind in light of the above—all of which I’m still sussing out. One important takeaway worth mentioning, I think, is that, because humans have had culture for possibly hundreds of thousands of years, it is not out of the question that we have undergone psychological adaptations that allow us to store and transmit (most importantly) and innovate on (less importantly) valuable prefabricated solutions within our cultural groups.

Is it possible that the ratchet effect can help explain a foundational concept in Cognitive Load Theory: that our working memories (our innovation engines) are severely limited while our long-term memories (our imitation engines) are functionally infinite?

The other takeaway comes from Paul Harris, in the last paragraph of his book Trusting What You’re Told: How Children Learn from Others, which explores many of the same themes elaborated above, specifically from the child-development angle. It is a takeaway worth taking away, especially for those in education who believe, without question or doubt, that children should be thought of as “little scientists”:

The classic method in social anthropology is not the scientific method in the way that experimental scientists conceive of it. It includes no experiments or control groups. Instead, when anthropologists want to understand a new culture, they immerse themselves in the language, learn from participant observation, and rely on trusted informants. Of course, this method has an ancient pedigree. Human children have successfully used it for millennia across innumerable cultures. Indeed, judging by their methods and their talents, we would do well to think of children not as scientists, but as anthropologists.

Birds and Worms


Pay attention to your thought process and how you use expert knowledge as you answer the question below. How do you think very young students would think about it?

Here are some birds and here are some worms. How many more birds than worms are there?

Hudson (1983) found that, among a small group of first-grade children (mean age of 7.0), just 64% completed this type of task correctly. However, when the task was rephrased as follows, all of the students answered correctly.

Here are some birds and here are some worms. Suppose the birds all race over, and each one tries to get a worm. Will every bird get a worm? How many birds won’t get a worm?

This is consistent with adults’ intuitions about the two tasks as well.

Interpret the Results

Still, what can we say about these results? Is it the case that 100% of the students used “their knowledge of correspondence to determine exact numerical differences between disjoint sets”? That is how Hudson describes students’ unanimous success in the second task. The idea seems to be that the knowledge exists; it’s just that a certain magical turn of phrase unlocks and releases this otherwise submerged expertise.

But that expert knowledge is given in the second task: “each one tries to get a worm.” The question paints the picture of one-to-one correspondence, and gives away the procedure to use to determine the difference. So, “their knowledge” is a bit of a stretch, and “used their knowledge” is even more of a stretch, since the task not only sets up a structure but animates its moving parts as well (“suppose the birds all race over”).

Further, doubts about whether students are using knowledge they possess raise doubts about whether students are, in fact, determining “exact numerical differences between disjoint sets.” On the contrary, it can be argued that students are simply watching almost all of a movie in their heads (a mental simulation)—a movie for which we have provided the screenplay—and then telling us how it ends (spoiler: 2 birds don’t get a worm). The deeper equivalence between the solution “2” and the response “2” to the question “How many birds won’t get a worm?” is evident only to a knowledgeable onlooker.

Experiment 3

Hudson anticipates some of the skepticism on display above when he introduces the third and last experiment in the series.

It might be argued that success in the Won’t Get task does not require a deep level of mathematical understanding; the children could have obtained the exact numerical differences by mimicking by rote the actions described by the problem context . . . In order to determine more fully the level of children’s understanding of correspondences and numerical differences, a third experiment was carried out that permitted a detailed analysis of children’s strategies for establishing correspondences between disjoint sets.

The wording in the Numerical Differences task of this third experiment, however, did not change. The “won’t get” locutions were still used. Yet, in this experiment, when paying attention to students’ strategies, Hudson observed that most children did not mentally simulate in the way directly suggested by the wording (pairing up the items in a one-to-one correspondence).

This does not defeat the complaint above, though. The fact that a text does not effectively compel the use of a procedure does not mean that it is not the primary influence on correct answers. It still seems more likely than not that participants who failed the “how many more” task simply didn’t have stable, abstract, transferable notions about mathematical difference. And the reformulation represented by the “won’t get” task influenced students to provide a response that was correct.

But this was a correct response to a different question. As adults with expert knowledge, we see the logical and mathematical similarities between the “how many more” and “won’t get” situations, and, thus we are easily fooled into believing that applying skills and knowledge in one task is equivalent to doing so in the other.

Teaching and Learning Coevolved?

Just a few pages into David Didau and Nick Rose’s new book What Every Teacher Needs to Know About Psychology, and I’ve already come across what is, for me, a new thought—that teaching ability and learning ability coevolved:

Strauss, Ziv, and Stein (2002) . . . point to the fact that the ability to teach arises spontaneously at an early age without any apparent instruction and that it is common to all human cultures as evidence that it is an innate ability. Essentially, they suggest that despite its complexity, teaching is a natural cognition that evolved alongside our ability to learn.

Or perhaps this is, even for me, an old thought, but just unpopular enough—and for long enough—to seem like a brand new thought. Perhaps after years of exposure to the characterization of teaching as an anti-natural object—a smoky, rusty gearbox of torture techniques designed to break students’ wills and control their behavior—I have simply come to accept that characterization as true, and have forgotten that I had done so.

Strauss et al., however, provide some evidence in their research that it is not true. Very young children engage in teaching behavior before formal schooling by relying on a naturally developing ability to understand the minds of others, known as theory of mind (ToM).

Kruger and Tomasello (1996) postulated that defining teaching in terms of its intention—to cause learning—suggests that teaching is linked to theory of mind, i.e., that teaching relies on the human ability to understand the other’s mind. Olson and Bruner (1996) also identified theoretical links between theory of mind and teaching. They suggested that teaching is possible only when a lack of knowledge can be recognized and that the goal of teaching then is to enhance the learner’s knowledge. Thus, a theory of mind definition of teaching should refer to both the intentionality involved in teaching and the knowledge component, as follows: teaching is an intentional activity that is pursued in order to increase the knowledge (or understanding) of another who lacks knowledge, has partial knowledge or possesses a false belief.

The Experiment

One hundred children were separated into 50 pairs—25 pairs with a mean age of 3.5 and 25 with a mean age of 5.5. Twenty-five of the 50 children in each age group served as test subjects (teachers); the other 25 were learners. The teachers completed three groups of tasks before teaching, the first of which (1) involved two classic false-belief tasks. If you are not familiar with these kinds of tasks, the video at right should serve as a delightfully creepy précis—from what appears to be the late ’70s, when every single instructional video on Earth was made. The second and third groups of tasks probed participants’ understanding that (2) a knowledge gap between teacher and learner must exist for “teaching” to occur and (3) a false belief about this knowledge gap is possible.

Finally, children participated in the teaching task by teaching the learners how to play a board game. The teacher-children were, naturally, taught how to play the game prior to their own teaching, and they were allowed to play the game with the experimenter until they demonstrated some proficiency. The teacher-learner pair was then left alone, “with no further encouragement or instructions.”

The Results

Consistent with the results from prior false-belief studies, there were significant differences between the 3- and 5-year-olds in Tasks (1) and (3) above, both of which relied on false-belief mechanisms. In Task (3), when participants were told, for example, that a teacher thought a child knew how to read when in fact he didn’t, 3-year-olds were much more likely to say that the teacher would still teach the child. Five-year-olds, on the other hand, were more likely to recognize the teacher’s false belief and say that he or she would not teach the child.

Intriguingly, however, the development of a theory of mind does not seem necessary either to recognizing the need for a special type of discourse called “teaching” or to teaching ability itself—only to a refinement of teaching strategies. Task (2), in which participants were asked, for instance, whether a teacher would teach someone who knew something or someone who didn’t, showed no significant differences between 3- and 5-year-olds in the study. But the groups were significantly different in the strategies they employed during teaching.

Three-year-olds have some understanding of teaching. They understand that in order to determine the need for teaching as well as the target learner, there is a need to recognize a difference in knowledge between (at least) two people . . . Recognition of the learner’s lack of knowledge seems to be a necessary prerequisite for any attempt to teach. Thus, 3-year-olds who identify a peer who doesn’t know [how] to play a game will attempt to teach the peer. However, they will differ from 5-year-olds in their teaching strategies, reflecting the further change in ToM and understanding of teaching that occurs between the ages of 3 and 5 years.

Coevolution of Teaching and Learning

The study here dealt with the innateness of teaching ability and sensibilities but not with whether teaching and learning coevolved, a question it raises at the beginning and then leaves behind.

It is an interesting question, however. Discussions in education are increasingly focused on “how students learn,” and it seems to be widely accepted that teaching should adjust itself to what we discover about this. But if teaching is as natural a human faculty as learning—and coevolved alongside it—then this may be only half the story. How students (naturally) learn might be caused, in part, by how teachers (naturally) teach, and vice versa. And learners perhaps should be asked to adjust to what we learn about how we teach as much as the other way around.

Those seem like new thoughts to me. But they’re probably not.

Providing Bad Intel


A really nice thing about scientific research is its transparency. Researchers write down the methods they use in their experiments—sometimes in excruciating detail—so that others can try to replicate their work if they choose. And scrutinizable methods allow us and other researchers to think about issues that the original experimenters might have overlooked—or, at least, didn’t mention in their published work.

Every once in a while, we come across research that individuals can simulate at home on a computer, even without any participants, and this allows us to bring the experiment to life a little more than text descriptions can.

The research I look at in this post is such a study. Students in the study (81 in all, from 7 to 10 years of age) were given an “app” very similar to the one shown below. Play with it a bit by clicking on the animal pictures to see what students were exposed to in this study.

The Method

In this study, students were presented with a question and then an explanation answering that question for the 12 animals shown above (images used in the study were different from above). Students rated the quality of explanations about animal biology on a 5-point scale. (In the version above, your ratings are not recorded. You can just click on the image of the rating system to move on.) The audio recorded in the app above uses the questions and explanations from the study verbatim, though in the actual study two different people spoke the questions and explanations (above, it’s just me).

As you could no doubt tell if you played around with the app above, some of the explanations are laughably bad. Researchers designated these as circular explanations (e.g., How do colugos use their skin flaps to travel? Their skin flaps help them to move from one place to another). The other, better explanations were identified as mechanistic explanations (e.g., How do thorny dragons use the grooves between their thorns to help them drink water? Their grooves collect water and send the water to their mouths). After rating the explanation, students were then given a choice to either get more information about the animal or to move on to a different animal. Here again, all you get is a screen to click on, and any click takes you back to the main screen with the 12 animals. In the actual study, students were given an even more detailed mechanistic explanation when clicking to get more information (e.g., Thorny dragons have grooves between their thorns, which are able to collect water. The water is drawn from groove to groove until it reaches their mouths, so they can suck water from all over their bodies).

The Curious Case of Curiosity

What the researchers found was that, in general, students were significantly more likely to click to get more information on an animal when the explanation given was circular. And, importantly, students were more likely to click to get more information when they rated the explanation as poor. This behavior—of clicking to get more information—was operationalized as curiosity and can be explained using the deprivation theory of curiosity.

In everyday life, children sometimes receive weak explanations in response to their questions. But what do children do when they receive weak explanations? According to the deprivation theory of curiosity, if children think that an explanation is unsatisfying, then they should sometimes feel inclined to seek out a better answer to their question to bolster their knowledge; the same is not true for explanations appraised as high in quality. To our knowledge, our research is the first to investigate this theory in regards to children’s science learning, examining whether 7- to 10-year-olds are more likely to seek out additional information in response to weak explanations than informative ones in the domain of biology.

But is that really curiosity? Do I stimulate your curiosity about colugos’ skin flaps by not really answering your questions about them? We can more easily answer no to this question if we assume that Square 1 represents students’ wanting to know something about colugos’ skin flaps. In that case, the initial question stimulates curiosity, as it were, and the non-explanation simply fails to satisfy this curiosity, or initial desire for knowledge. The circular explanation has not made them curious or even more curious. They were already curious. Not helping them scratch that itch just fails to move them to Square 2, which is where they wanted to go after hearing the question (knowing something about how colugos’ skin flaps work). The fact that students with unscratched itches were more likely to go to Square 3 is not surprising, since Square 3, for them, was actually Square 2, the square that everyone wanted to get to.

An Unavoidable Byproduct of Quality Teaching

If you are more inclined to believe the above interpretation, as I am, it might seem that we still must contend with the evidence that quality explanations were indeed shown to reduce information-seeking, relative to the levels of information-seeking shown for circular explanations. But this is not necessarily the case. What we see, from this study at least, is that not scratching the initial itch likely caused a different behavior in students than did scratching it. Clicking behavior increased for students who still had itches, but this does not mean that it decreased for students who had no itch. We have evidence here that bad explanations are recognizably bad. We do not have evidence suggesting that quality explanations make students incurious.

Even if quality explanations do reduce curiosity, though, it seems likely to me that this is simply an unavoidable byproduct of quality teaching. One that can be anticipated and planned for. Explanations are, after all, designed to reduce curiosity, in some sense. What high-quality explanations do—in every scientific field and likely in our everyday lives—is move us on to different, better things to be curious about.


Explicitation


I came across this case study recently that I managed to like a little. It focuses on an analysis of a Singapore teacher’s practice of making things explicit in his classroom. Specifically, the paper outlines three ways the teacher engages in explicitation (as the authors call it): (1) making ideas in the base materials (i.e., textbook) explicit in the lesson plan, (2) making ideas within the plan of the unit more explicit, and (3) making ideas explicit in the enactment of teaching the unit(s). These parts are shown in the diagram below, which I have redrawn, with minor modifications, from the paper.

The teacher interviewed for this case study, “Teck Kim,” taught math to Year 11 (10th grade) students in the “Normal (Academic)” track, and the case study focused on a unit the teacher called “Vectors in Two Dimensions.”

Explicit From

The first category of explicitation, Explicit From, involves using base materials such as a textbook as a starting point and adapting these materials to make more explicit what it is the teacher wants students to learn. The paper provides an illustration of some of the textbook content related to explaining column vectors, along with Kim’s adaptation. I have again redrawn below what was provided in the paper. Here I also made minor modifications to the layout of the textbook example and one small change to fix a possible translation error (or typo) in the teacher’s example. The textbook content is on the left, and the teacher’s is on the right (if it wasn’t painfully obvious).

There are many interesting things to notice about the teacher’s adaptation. Most obviously, it is much simpler than the textbook’s explanation. This is due, in part, to the adaptation’s leaving magnitude unexplained during the presentation and instead asking a leading question about it.

The textbook presented the process of calculating the magnitudes of the given vectors, leading to a ‘formula’ of \(\mathtt{\sqrt{x^2+y^2}}\) for the column vector \(\binom{x}{y}\). In its place, Teck Kim’s notes appeared to compress all these into one question: “How would you calculate the magnitude?” On the surface, it appears that Teck Kim was less explicit than the textbook in the computational process of magnitude. But a careful examination of the pre-module interview reveals that the compression of this section into a question was deliberate . . . He meant to use the question to trigger students’ initial thoughts on the matter—which would then serve to ready their frame of mind when the teacher explains the procedure in class.

So, it is not the case that explanation has been removed—only that the teacher has moved the explication of vector magnitude into the Explicit To section of the process. We can also notice, then, in this Explicit From phase, that the teacher makes use of both dual coding and variation theory in his compression of the to-be-explained material. The text in the teacher’s work is placed directly next to the diagram as labels to describe the meaning of each component of the vector, and the vector that students are to draw varies minimally from the one demonstrated: a change in sign is the only difference, allowing students to see how negative components change the direction of a vector. All much more efficient and effective than the textbook’s try at the same material.
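A quick sketch of the mathematical point behind the question and the sign-change variation (my own illustration, not part of the paper or Teck Kim’s notes): the magnitude of a column vector with components x and y is \(\mathtt{\sqrt{x^2+y^2}}\), so flipping the sign of a component changes the vector’s direction but not its length.

```python
# Magnitude of a 2-D column vector: flipping a component's sign changes
# direction but leaves the magnitude unchanged.
import math

def magnitude(x, y):
    return math.hypot(x, y)  # sqrt(x**2 + y**2)

print(magnitude(3, 4))   # 5.0
print(magnitude(3, -4))  # 5.0 -- same length, different direction
```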

Explicit Within

Intriguingly, Explicit Within is harder to explain than the other two, but is closer to the work I do every day. A quote from the article nicely describes explicitation within the teacher’s own lesson plan as an “inter-unit implicit-to-explicit strategy”:

This inter-unit implicit-to-explicit strategy reveals a level of sophistication in the crafting of instructional materials that we had not previously studied. The common anecdotal portrayal of Singapore mathematics teachers’ use of materials is one of numerous similar routine exercise items for students to repetitively practise the same skill to gain fluency. In the case of Teck Kim’s notes, it was not pure repetitive practice that was in play; rather, students were given the opportunity to revisit similar tasks and representations but with added richness of perspective each time.

We saw a very small example of explicit-within above as well. The plan, following the textbook, would have delayed the introduction of negative components of vectors, but Teck Kim introduces them early, as a variational difference. The idea is not necessarily that students should know the concept cold from the beginning, but that it serves a useful instructional purpose even before it is consolidated.

Explicit To

Finally, there is Explicit To, which refers to the classroom implementation of explicitation, and which needs no lengthy description. I’ll leave you with a quote again from the paper.

No matter how well the instructional materials were designed, Teck Kim recognised the limitations to the extent in which the notes by itself can help make things explicit to the students. The explicitation strategy must go beyond the contents contained in the notes. In particular, he used the notes as a springboard to connect to further examples and explanations he would provide during in-class instruction. He drew students’ attention to questions spelt out in the notes, created opportunities for students to formulate initial thoughts and used these preparatory moves to link to the explicit content he subsequently covered in class.

Interleaving


Inductive teaching or learning, although it has a special name, happens all the time without our having to pay any attention to technique. It is basically learning through examples. As the authors of the paper we’re discussing here indicate, through inductive learning:

Children . . . learn concepts such as ‘boat’ or ‘fruit’ by being exposed to exemplars of those categories and inducing the commonalities that define the concepts. . . . Such inductive learning is critical in making sense of events, objects, and actions—and, more generally, in structuring and understanding our world.

The paper describes three experiments conducted to further test the benefit of interleaving on inductive learning (“further” because an interleaving effect has been demonstrated in previous studies). Interleaving is one of a handful of powerful learning and practicing strategies mentioned throughout the book Make It Stick: The Science of Successful Learning. In the book, the power of interleaving is highlighted by the following summary of another experiment involving determining volumes:

Two groups of college students were taught how to find the volumes of four obscure geometric solids (wedge, spheroid, spherical cone, and half cone). One group then worked a set of practice problems that were clustered by problem type . . . The other group worked the same practice problems, but the sequence was mixed (interleaved) rather than clustered by type of problem . . . During practice, the students who worked the problems in clusters (that is, massed) averaged 89 percent correct, compared to only 60 percent for those who worked the problems in a mixed sequence. But in the final test a week later, the students who had practiced solving problems clustered by type averaged only 20 percent correct, while the students whose practice was interleaved averaged 63 percent.

The research we look at in this post does not produce such stupendous results, but it is nevertheless an interesting validation of the interleaving effect. Although there are three experiments described, I’ll summarize just the first one.

Discriminative-Contrast Hypothesis

But first, you can try out an experiment like the one reported in the paper. Click start to study pictures of different bird species below. There are 32 pictures, and each one is shown for 4 seconds. After this study period, you will be asked to try to identify 8 birds from pictures that were not shown during the study period, but which belong to one of the species you studied.



Once the study phase is over, click test to start the test and match each picture to a species name. There is no time limit on the test. Simply click next once you have selected each of your answers.

Based on previous research, one would predict that, in general, you would do better in the interleaved condition, where the species are mixed together in the study phase, than you would in the ‘massed,’ or grouped condition, where the pictures are presented in species groups. The question the researchers wanted to home in on in their first experiment was about the mechanism that made interleaved study more effective.

So, their experiment was conducted much like the one above, except with three groups, which all received the interleaved presentation. However, two of the groups were interrupted in their study by trivia questions in different ways. One group—the alternating trivia group—received a trivia question after every picture; the other group—the grouped trivia group—received 8 trivia questions after every group of 8 interleaved pictures. The third group—the contiguous group—received no interruption in their study.
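As a rough sketch of the design (my own reconstruction of the conditions, not the authors’ materials or stimuli), here is how the three groups’ study sequences could be generated from the same 32 pictures:

```python
# Build study sequences for the three conditions in Experiment 1: all receive
# interleaved pictures; they differ in how trivia questions interrupt study.
import random

species = [f"species_{i}" for i in range(8)]
pictures = [(s, photo) for s in species for photo in range(4)]  # 8 species x 4 photos

def interleave(pics):
    mixed = pics[:]
    random.shuffle(mixed)  # species mixed together rather than blocked
    return mixed

def contiguous(pics):
    return interleave(pics)  # interleaved pictures, no interruptions

def alternating_trivia(pics):
    seq = []
    for p in interleave(pics):
        seq += [p, "TRIVIA"]  # a trivia question after every picture
    return seq

def grouped_trivia(pics):
    mixed, seq = interleave(pics), []
    for i in range(0, len(mixed), 8):
        seq += mixed[i:i + 8] + ["TRIVIA"] * 8  # 8 trivia items after every 8 pictures
    return seq
```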

What the researchers discovered is that while the contiguous group performed the best (of course), the grouped trivia group did not perform significantly worse; the alternating trivia group, however, performed significantly worse than both the contiguous and grouped trivia groups. This was seen as providing some confirmation for the discriminative-contrast hypothesis:

Interleaved studying might facilitate noticing the differences that separate one category from another. In other words, perhaps interleaving is beneficial because it juxtaposes different categories, which then highlights differences across the categories and supports discrimination learning.

In the grouped trivia condition, participants were still able to take advantage of the interleaving effect because the disruptions (the trivia questions) had less of an effect when grouped in packs of 8. In the alternating trivia condition, however, a trivia question appeared after every picture, frustrating the discrimination mechanism that seems to help make the interleaving effect tick.

Takeaway Goodies (and Questions) for Instruction

The paper makes it clear that interleaving is not a slam dunk for instruction. Massed studying or practice might be more beneficial, for example, when the goal is to understand the similarities among the objects of study rather than the differences. Massed studying may also be preferred when the objects are ‘highly discriminable’ (easy to tell apart).

Yet, many of the misconceptions we deal with in mathematics education in particular can be seen as the result of dealing with objects of ‘low discriminability’ (objects that are hard to tell apart). In many cases, these objects really are hard to tell apart, and in others we simply make them hard through our sequencing. Consider some of the items listed in the NCTM’s wonderful 13 Rules That Expire, which students often misapply:

  • When multiplying by ten, just add a zero to the end of the number.
  • You cannot take a bigger number from a smaller number.
  • Addition and multiplication make numbers bigger.
  • You always divide the larger number by the smaller number.

In some sense, these are problematic because they are like the sparrows and finches above when presented only in groups—they are harder to stop because we don’t present them in situations that break the rules, or interleave them. Appending a zero to a number to multiply by 10 does work on counting numbers but not on decimals; addition and multiplication do make counting numbers bigger, but they don’t always make fractions bigger; and you cannot take a bigger counting number from a smaller one and get a counting number. For that, you need integers.
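For concreteness, here are counterexamples of my own (not from the NCTM article), one per rule above, in order:

\[
\begin{aligned}
2.5 \times 10 &= 25, \text{ not } 2.50 \\
3 - 5 &= -2 \\
\tfrac{1}{2} \times \tfrac{1}{3} &= \tfrac{1}{6} < \tfrac{1}{2} \\
3 \div 12 &= \tfrac{1}{4}
\end{aligned}
\]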

Notice any similarities above? Can we please talk about how we keep kids trapped for too long in counting number land? I’ve got this marvelous study to show you which might provide some good reasons to interleave different number systems throughout students’ educations. It’s linked above, and below.

Sicklied O’er


My grandfather used to tell me a story about a young boy who was stuck in traffic with his family for hours because an 18-wheeler had got itself pinned under an overpass bridge ahead of them. The huge truck was wedged in so strongly and strangely that a flock of engineers had descended on the scene. They argued back and forth about their favorite physical and mathematical models that would unpin the trapped vehicle and release the miles-long stream of cars idling behind it on the freeway. This bickering went on for hours—until the boy got out of his car, walked up to the group of engineers, and shouted, “Why don’t you just let the air out the tires!”

It’s a nice story, precisely because it’s so rare and noticeable. We don’t notice unbroken strings of solved problems from experts, because that’s what we expect of experts—and, for the most part, what we get from them. We notice when they fail. And, because these failures are more noticeable than the far more boring and numerous successes, we fall prey to availability bias and assume that expert failure occurs with much more regularity than it actually does. (In turn, we start to think that it’s maybe a good idea to keep students naive and, therefore, creative and open-minded rather than have them study things that other people have already figured out.) As Tom Nichols writes in The Death of Expertise:

At the root of all this is an inability among laypeople to understand that experts being wrong on occasion about certain issues is not the same thing as experts being wrong consistently on everything. The fact of the matter is that experts are more often right than wrong, especially on essential matters of fact. And yet the public constantly searches for the loopholes in expert knowledge that will allow them to disregard all expert advice they don’t like.

A 2008 study which put this folk notion of expert inflexibility to the test compared chess experts and novices, and measured the famous Einstellung effect in both groups across three experiments.

In the first experiment, the experts were given the board on the left and were instructed to find the shortest solution. The board on the left is designed to activate a motif familiar to chess experts (and thus activate Einstellung)—the smothered mate motif—which can be carried out using 5 moves. A shorter solution (3 moves) also exists, however.

If the experts failed to find the three-move solution, they were then given the board on the right. This board can be solved by the shorter three-move solution but not by the Einstellung motif of the smothered mate. The group of novices in the experiment were all given this second board (the one on the right) featuring the three-move mate solution without the Einstellung motif as well.

Findings

If knowledge corrupts insight, as it were, then the experts would, by and large, be fixated by the smothered mate sequence and miss the three-move solution. And this is indeed what happened—sort of. What the researchers found was that level of expertise correlated strongly with the results. Grandmasters (those with the highest levels of chess expertise) were not taken in by the Einstellung motif at all. Every one of them found the optimal three-move solution. However, experts with lower ratings, such as International Masters, Masters, and Candidate Masters, all experienced the Einstellung effect, with 50%, 18%, and 0%, respectively, finding the shorter solution on the first board, even though all of them found the optimal solution when it was presented on the second board, in the absence of the smothered mate motif.

The novices’ performance also showed a positive correlation with rating. Sixty-three percent of the highest-rated (Class A) players in the novice group found the optimal solution on the right board, while 13% of Class B players and 0% of Class C players found the three-move solution. Thus, the Einstellung effect made International Masters perform like Class A players, Masters perform like Class B players, and Candidate Masters perform like Class C players.

Experiment 2 replicated the above finding in a slightly more naturalistic setting, and Experiment 3 did so with strategic Einstellungs instead of tactical ones.

Knowledge Is Essential for Cognitive Flexibility

While this study shows that Einstellung effects are powerful and observable in expert performance, it also demonstrates that the notion that expertise causes cognitive inflexibility is probably wrong.

The failure of the ordinary experts to find a better solution when they had already found a good one supports the view that experts can be vulnerable to inflexible thought patterns. But the performance of the super experts shows that ‘experts are inflexible’ would be the wrong conclusion to draw from this failure. The Einstellung effect is very powerful—the problem solving capability of our ordinary experts was reduced by about three SDs when a well-known solution was apparent to them. But the super experts, at least with the range of difficulty of problems used here, were less susceptible to the effect. Greater expertise led to greater flexibility, not less.

Knowledge, and the expertise inevitably linked to it, were also responsible for both forms of expert flexibility demonstrated in the experiments. The optimal solution was more likely to be noticed immediately, even before the nominally more familiar solution, among some super experts. Hence, expertise helped super experts avoid an Einstellung situation in the first place because they immediately found the optimal solution. Even when experts did not find the optimal solution immediately, expertise and knowledge were positively associated with the probability of finding the optimal solution after the non-optimal solution had been generated first. Finally, when knowledge discrepancy was minimized, as in the third experiment, super experts had sufficient resources to outperform their slightly weaker colleagues. In all three instances, knowledge was inextricably and positively related to expert flexibility. . . .

The training required to produce experts should not be seen as a source of potential problems but as a way to acquire the skill to deal effectively and flexibly with all the situations that can arise in the domain. Creativity is a consequence of expertise rather than expertise being a hindrance to creativity. To produce something novel and useful it is necessary first to master the previous knowledge in the domain. More knowledge empowers creativity rather than hurting it (e.g., Kulkarni & Simon, 1988; Simonton, 1997; Weisberg, 1993, 1999).

Makin’ Copies


At the heart of many calls to improve education is the taken-for-granted notion that because the world is now changing so rapidly, it is better for schools to focus on producing innovative and critical thinkers and ‘not just’ knowledgeable students. The common instructional approach deployed, at all scales, to produce this effect—whether it is inquiry learning or personalized learning—is to remove or dramatically lessen the influence of knowledgeable others.


But important research on learning strategies in the wild shows that, at the very least, different intuitions are possible here. Researchers discovered—much to their surprise—that, in a rapidly changing environment, copying the effective behaviors of knowledgeable others (social learning) could be a much more effective learning strategy than learning directly from the environment (asocial learning). This result held even when social learning was “noisy” and asocial learning was noise free.

The team has gone on to further investigate and apply their findings to other animal studies, and a book, Darwin’s Unfinished Symphony, was released just last year, detailing their work.

Social Learning Strategies Tournament

The method used for this research was a tournament in which the researchers designed a computer simulation environment and entrants to the tournament (104 in all) designed ‘agents’ that competed to survive in the generated environment by learning behaviors and applying them to receive payoffs for those behaviors. Each agent had three possible moves it could play: Observe, Innovate, or Exploit. The first two of these moves—Observe and Innovate—were learning moves, which allowed the agent to acquire new behaviors (or not in some cases), and the third move, Exploit, allowed agents to apply their acquired behaviors to receive a payoff (or not, depending on the environment and the behavior). As was mentioned above, Observe moves were “noisy,” whereas Innovate moves were noise free:

Innovate represented asocial learning, that is, individual learning stemming solely through direct interaction with the environment, for example, through trial and error. An Innovate move always returned accurate information about the payoff of a randomly selected behavior previously unknown to the agent. Observe represented any form of social learning or copying through which an agent could acquire a behavior performed by another individual, whether by observation of or interaction with that individual. An Observe move returned noisy information about the behavior and payoff currently being demonstrated in the population by one or more other agents playing Exploit. Playing Observe could return no behavior if none was demonstrated or if a behavior that was already in the agent’s repertoire was observed, and it always occurred with error, such that the wrong behavior or wrong payoff could be acquired. The probabilities of these errors occurring and the number of agents observed were parameters we varied.
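Here is a toy sketch of that setup (my own simplification for illustration; the actual tournament code, parameters, and environment dynamics were far richer):

```python
# Toy versions of the three moves: Innovate (accurate, asocial), Observe
# (noisy copy of what others are exploiting), and Exploit (cash in the
# behavior the agent believes is best).
import random

N_BEHAVIORS = 100
payoffs = {b: random.expovariate(1.0) for b in range(N_BEHAVIORS)}  # the environment

def innovate(repertoire):
    unknown = [b for b in payoffs if b not in repertoire]
    if unknown:
        b = random.choice(unknown)
        repertoire[b] = payoffs[b]          # accurate payoff information

def observe(repertoire, demonstrated, error_sd=0.5):
    if demonstrated:
        b = random.choice(demonstrated)     # copy someone currently exploiting
        repertoire[b] = payoffs[b] + random.gauss(0, error_sd)  # noisy estimate

def exploit(repertoire):
    if not repertoire:
        return 0.0
    best = max(repertoire, key=repertoire.get)  # play the believed-best behavior
    return payoffs[best]                        # receive its true payoff
```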

Some Key Findings


It was not effective to play a lot of learning moves. But when learning moves were played, agents which relied almost exclusively on Observe outperformed the rest, and an increase in copying was strongly positively correlated with higher payoffs. When the winning agent (called DISCOUNTMACHINE) was modified to learn only through Innovate moves, it placed last.

Even when learning by copying was made noisier—the probability and size of copying errors increased—agents which relied on it heavily still did best.

Finally, agents who combined asocial and social learning in more balanced ways (winning agents used social learning at least 95% of the time) performed worse than those who opted for social learning most of the time.

Why Copying Is Effective

It must be underscored, again, that in more naturalistic environments there is a cost to asocial learning that copying does not have: learning by observation is safer than learning by interacting directly with the environment on your own. But in this simulation, that cost was erased. And social learning (copying) STILL outperformed innovation, even when social learning was noisy (Observe “failed to introduce new behavior into an agent’s repertoire in 53% of all the Observe moves in the first tournament phase, overwhelmingly because agents observed behaviors they already knew”).

So, why was copying effective? The researchers boiled it down to being surrounded by rational agents, which I choose to rephrase as “knowledgeable adults”:

Social learning proved advantageous because other agents were rational in demonstrating the behavior in their repertoire with the highest payoff, thereby making adaptive information available for others to copy. This is confirmed by modified simulations wherein social learners could not benefit from this filtering process and in which social learning performed poorly. Under any random payoff distribution, if one observes an agent using the best of several behaviors that it knows about, then the expected payoff of this behavior is much higher than the average payoff of all behaviors, which is the expected return for innovating. Previous theory has proposed that individuals should critically evaluate which form of learning to adopt in order to ensure that social learning is only used adaptively, but a conclusion from our tournament is that this may not be necessary. Provided the copied individuals themselves have selected the best behavior to perform from at least two possible options, social learning will be adaptive.
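The expected-payoff argument in that passage can be checked with a few lines (again, my own illustration, not the authors’ analysis):

```python
# Copying a demonstrator that exploits the best of the k behaviors it knows
# beats innovating (a blind random draw), under any random payoff distribution.
import random

def compare(k=2, trials=100_000):
    copy_sum = innovate_sum = 0.0
    for _ in range(trials):
        behaviors = [random.random() for _ in range(100)]
        known = random.sample(behaviors, k)
        copy_sum += max(known)                    # demonstrator shows its best behavior
        innovate_sum += random.choice(behaviors)  # innovation samples at random
    return copy_sum / trials, innovate_sum / trials

print(compare())  # roughly (0.67, 0.50) even when demonstrators know only 2 behaviors
```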

Any takeaways for education from this will be stretches. The research was a computer simulation, after all. But, whatever. My takeaway from all this is that, as long as there are knowledgeable adults around, we should encourage students to learn directly from them. A milder takeaway (or maybe stronger, depending on your point of view): regardless of how adept you feel yourself to be in your social world, social worlds are not intuitive. What seems to make sense to you as a strong connection between ideas A and B (in this case, changing world → promote innovation) will not necessarily be effective just because a lot of people believe it and it makes intuitive sense. The way to change that is not to stop making those arguments, because few people make them in the first place. The way to change it is to stop forwarding those kinds of arguments along when they are made. That way, the behavior won’t be copied. : )

Coda

I should add, by way of the quote below from Darwin’s Unfinished Symphony, that, although copying was a more successful strategy than innovating, it was not, by itself, the reason for success. What made the difference was better, more efficient, more accurate copying behaviors:

The tournament teaches us that natural selection will tend to favor those individuals who exhibit more efficient, more strategic, and higher-fidelity (i.e., more accurate) copying over others who either display less efficient or exact copying, or are reliant on asocial learning.

 

Proprioceptive Knowledge

This paper ($) was a nice read, with some fresh (to me) insights about discovery, instruction, and practice. There are many points in it where I don’t see eye to eye with the author, but those parts of the text are, thankfully, brief. I took away some new thoughts, at any rate, the most robust of which was an analogy between learning and proprioception, as the title suggests.

Here are the two main ideas as I see them, involving a very healthy amount of paraphrasing and extrapolation on my part:

  1. One aspect of your learning of any topic that deserves attention during instruction is your subjective, first-person experience of thinking with the material being taught.
  2. Good instruction not only manipulates you into knowing something but also enlists your cooperation in doing so.

Proprioception

Proprioception is the basic human sense of where your body parts are in space and of your own movement through that space (for example, you don’t use ‘touch’ to know where your left hand is; that’s proprioception). For learning something abstract like adding fractions with unlike denominators, we might think of proprioceptive knowledge as what it is LIKE to add fractions with unlike denominators—physically, cognitively, and so on. Carrying out the computations procedurally is certainly an important part of “what it is like,” but there are many other parts, including “what it is like” to identify situations that call for finding common denominators.

The paper uses as a candidate for proprioceptive knowledge (although the author doesn’t call it that) the example of working long division to produce repeating decimals. Students are instructed, with an example, that for any number written as an integer over an integer, the decimal digits will either be repeating zeros (a terminating decimal) or some other repeating pattern. Students use practice, however, to gain access to the proprioceptive dimension of this instruction—the experience that this is indeed the case; a first-person view of the knowledge. It is not that they are not told why the digits repeat (there are only finitely many possible remainders, so the sequence of remainders, and therefore of digits, must eventually cycle) during the instruction with the example. They are told this. And it’s not necessarily the case that the students don’t understand what they have been told. It’s just that the first-person experience of it is an important node in the constellation of connections that constitute the schema for understanding rational numbers.
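A minimal sketch of that remainder argument in code, which is my own illustration and not from the paper: long division just multiplies the remainder by 10 and divides again, and since a remainder when dividing by q can only take the values 0 through q − 1, the process must either hit 0 (terminate) or revisit a remainder and cycle.

```python
def decimal_expansion(numerator, denominator):
    """Long-divide numerator/denominator, tracking remainders to find the repeat."""
    digits = []
    seen = {}  # remainder -> index in digits where that remainder first appeared
    remainder = numerator % denominator
    while remainder and remainder not in seen:
        seen[remainder] = len(digits)
        remainder *= 10
        digits.append(remainder // denominator)
        remainder %= denominator
    whole = numerator // denominator
    if remainder == 0:
        return f"{whole}." + "".join(map(str, digits)) + " (terminates)"
    start = seen[remainder]  # the cycle begins where this remainder first showed up
    prefix = "".join(map(str, digits[:start]))
    cycle = "".join(map(str, digits[start:]))
    return f"{whole}.{prefix}({cycle} repeating)"

print(decimal_expansion(1, 7))   # 0.(142857 repeating) -- all six nonzero remainders appear
print(decimal_expansion(3, 8))   # 0.375 (terminates)
print(decimal_expansion(1, 6))   # 0.1(6 repeating)
```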

Indeed, I would argue that the explicit instruction is absolutely necessary in this example (and in almost all others). It allows the student to connect his newly gained, first-person, proprioceptive conceptual knowledge about division and rational numbers with what he understands to be experts’ experience, and to see the two as the same. In other words, the explicit instruction did not take away the student’s ability to discover something for himself, as the idiotic trope goes; on the contrary, it was necessary to facilitate the student’s discovery. The paper puts it well:

Cristina [the teacher in the example] did not consider it a crime to “tell secrets”—like the author, she believed that students have to work to figure them out anyway.

Making Connections

This perspective helps me make some intuitive sense of why, for example, retrieval practice may be so effective. Testing yourself gives you the first-person experience of—the proprioceptive sense of—what it is like to remember something successfully and consistently (and it ain’t as easy as you think it is before you do it). This knowing-what-it-is-like episodic knowledge is a knowing like any other, but it is one that can be easily neglected when dealing with cognitive skills and subject matter.

Or, rather than neglect proprioceptive knowledge, we tend to make it dramatically different from the conceptual, such that students don’t connect the two. It is not necessarily the case that our conceptual instruction is anemic (though it certainly is on occasion) but that our procedural instruction is so narrow, and when it is not narrow it is not procedural. As a result, students gain a strong sense of what it is like to find like denominators, for example, but little sense of what it is like to move around in “higher level” aspects of that topic. When it comes to those higher level aspects, we have done no better in education, really, than to just have students do things in order to learn things or just have students learn things in order to do things.

The episodic knowledge angle can also help make some sense of the power of narrative, since narrative is a type of out-to-in instruction that simultaneously solicits an in-to-out response. (Although it seems to me to push too hard in the latter direction to be an ideally holistic type of instruction.)

How to ‘Do’ Proprioceptive Knowledge

The above suggests some instructional moves. The paper itself suggests building this in-to-out proprioceptive knowledge during practice. Rather than relying on problems that hit only the procedural work of writing fractions as terminating or repeating decimals, for example, practice can help students “discover” the lesson’s explicit conceptual instruction from a brand-new, first-person perspective. So, if we want students to understand why rational numbers must have terminating or repeating decimal representations, we need to give them practice that explicitly allows them to ‘feel’ that connection and express it as they work.

For another example, I’m currently working on percents. I certainly want students to know what it’s like to determine a percent of a number, purely procedurally. But I also want them to understand a lot of other things: that p% of Q is greater than Q when p is greater than 100 (and why: because it multiplies Q by a number greater than 1) and less than Q when p is less than 100 (because it multiplies Q by a number less than 1); and so on. I should structure my practice so that these patterns reveal themselves, and then explicitly point students to them.
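As a rough sketch of what I mean, and nothing more than my own illustration, a practice set could hold Q fixed while p sweeps across 100, so the greater-than/less-than pattern is sitting right there to be noticed and named:

```python
# Hold Q fixed and let p cross 100 so the pattern is visible:
# p% of Q exceeds Q exactly when the multiplier p/100 exceeds 1.
def percent_of(p, q):
    return (p / 100) * q

q = 40
for p in (25, 50, 100, 150, 300):
    result = percent_of(p, q)
    comparison = ">" if result > q else ("=" if result == q else "<")
    print(f"{p}% of {q} = {result:g}, which is {comparison} {q} (multiplier {p/100:g})")
```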


Underlying, Deep, Critical?

Here’s a very reasonable statement, from this book, on techniques used by researchers to investigate conceptual knowledge of arithmetic:

The most commonly used method is to present children with arithmetic problems that are most easily and quickly solved if children have knowledge of the underlying concepts, principles, or relations. For example, if children understand that addition and subtraction are inversely related operations, even when presented with a problem, such as 354297 + 8638298 – 8638298, they should be able to quickly and accurately solve the problem by stating the first number. This approach is typically called the inversion shortcut.

This borders on the problematic, though (for me at least). Why should ‘underlying’ be a prerequisite for calling something conceptual knowledge, as opposed to plain old knowledge? Even the straightforward addition and subtraction here presumably requires knowing what to do with the numbers and symbols presented in this (likely) novel problem, and thus involves conceptual knowledge of some kind.

Still, it makes some sense to distinguish between knowing how to add and subtract numbers and knowing that adding and then subtracting (or subtracting and then adding) the same number is the same as adding zero (or doing nothing). But the following, which appears just a few paragraphs later, doesn’t make much sense to me:

The use of novel problems is important. Novel problems mean that children must spontaneously generate a new problem solving procedure or transfer a known procedure from a conceptually similar but superficially different problem. In this way, there is no possibility that children are using the rote application of a previously learned procedure. Application of such a rotely learned procedure would mean that children are not required to understand the concepts or principles being assessed in order to solve the problem successfully.

deep learning

The biggest problem is that the notion of ‘conceptual knowledge’ of arithmetic laid out here relies on the fact that the “inversion shortcut” is not typically taught as a procedure. But it seems entirely possible to train a group of students on the inversion shortcut and then sneak them into a research lab somewhere. After the experiment, the researcher would likely conclude that all of the students had ‘conceptual knowledge’ of arithmetic, even though the subjects would be using the “rote application of a previously learned procedure”—something that contradicts the researcher’s own definition of ‘conceptual knowledge’. On a larger scale, instead of sneaking a group of trained kids into a lab, we could emphasize the concept of inversion in beginning arithmetic instruction in schools. If researchers were not prepared for this change, it would produce the same contradiction as with the smaller group of trained students. If they were prepared for it, the inversion test would have to be thrown out, since they would know that inversion had been more or less directly taught and thus (for some reason) no longer qualified as conceptual knowledge.

A second problem: why should adding and subtracting the numbers from left to right count as the application of a rote procedure (which does not evidence conceptual knowledge) rather than as the transfer of a known procedure from a conceptually similar but superficially different problem (which does)? The problem is novel, and students would be transferring their knowledge of addition and subtraction procedures to a situation that also involves addition and subtraction (conceptually similar) but with different numbers (superficially different).

Clearly I Don’t Get It

I still see the value of knowing the concept of inversion, as described above. A person who notices the equal numbers and can solve the problem without calculating (by just stating the first number given) is, most other things being equal, at an advantage over someone who can do nothing but start number crunching (it’s also possible to miss the equal numbers because you’re tired, not because you lack some as-yet-undefined ‘critical thinking’ skill). What constantly perplexes me is why people insist on making something like knowing the inversion shortcut so damned mysterious and awe-inspiring.

You can know how to number crunch. That’s good to know. You can also know how to notice equal numbers and that adding and then subtracting the same value is the same as adding 0. That’s another good thing to know. The latter is probably rarer, but that fact alone doesn’t make it a fundamentally different kind of knowledge than the former. It is almost certainly rarer in instruction than calculation directions, so it should be no surprise that students are weaker on it generally. Let’s work to make it not as rare. A good place to start would be to acknowledge that inversion is not some deep or critical knowledge; it’s just ordinary knowledge that some people don’t know or apply well.

Coda

The section in question concludes:

Other concepts, such as commutativity, that is if a + b = c then b + a = c, have been investigated, but as they have not received as much research attention it is more difficult to draw strong conclusions from them compared to the concepts of inversion and equivalence. Also, concepts, such as commutativity are usually explicitly taught to children so, unlike novel problems, such as inversion and equivalence problems, it is not clear whether children are applying their conceptual knowledge when solving these problems or applying a procedure that they were taught and the conceptual basis of which they may not understand.

But how does testing students on something they haven’t been taught (something they do not know) show that ‘conceptual knowledge’ is being applied? Where is the knowledge in conceptual knowledge supposed to come from? As long as it’s not from the teacher, it must be ‘conceptual’?