Explicit Instruction in Problem Solving

I think the main idea of my previous post is that academic learning is essentially a set of tricks, all the way down. Because many of these tricks are sped up and automatized, and learning them has faded in memory, we can’t notice all their tricky parts in consciousness—we are practically forced to think about them as higher-level constructs, because that is all we can see when we look at our own thinking and problem solving.

We see this, for a somewhat strained example, when looking at a simple animation, which we would describe (if you can see it) as a circle going back and forth from left to right in a loop.


But that's only a description of what you see happening. It is not, in fact, what is happening at all. And I don't just mean that the computer knows nothing about "circle," "back and forth," or "left to right," although that's true. I mean that, even at the higher level of the code itself, what is happening is that the canvas is being completely erased, the circle center is moved left or right 4 pixels according to a RULE, and then the circle is redrawn at the new location.
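If it helps to see the bones of that loop, here is a minimal sketch in Python, with printing standing in for the actual canvas calls. The bounds, the frame count, and the "reverse at the edges" version of the RULE are illustrative choices of mine, not the original code.

```python
# Minimal sketch of the erase-move-redraw loop described above.
# Printing stands in for the canvas; the numbers and the bounce rule are illustrative.

x, direction = 100, 1            # circle-center x and current direction of travel
LEFT, RIGHT, STEP = 20, 300, 4   # travel bounds and the 4-pixel per-frame movement

for frame in range(10):
    # 1. "erase the canvas," then 2. apply the RULE to move the center
    if x <= LEFT or x >= RIGHT:
        direction = -direction   # one plausible RULE: reverse at the edges
    x += direction * STEP
    # 3. "redraw" the circle at its new location
    print(f"frame {frame}: redraw circle at x = {x}")
```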

This is easy to swallow when the end result is just a circle going back and forth on the screen. But it's important to remember that scaling up the complexity of this end result is not an indication that the process that produced the circle changed in any fundamental way. The most thrilling thing you saw in a theater this year operates under the same basic principle.

Although I need to move on to the research I want to talk about in this post, I think the implications of the above (if they are basically correct) deserve a lot more reflection in education. Certainly one implication worth pondering is that anyone who is successful at a certain endeavor is, by reason of this success alone, no better than an amateur at describing the replicable mechanism for that success.

Yet, for endeavors related to academic learning at least, this does not mean that replicable mechanisms don't exist.

The Magic of Solving Problems

In mathematics education, problem solving is the bouncing ball above, the entrepreneurs' advice: the processes that seem to govern its successful execution in any particular case are almost certainly not the ones that actually do. As a demonstration of this, Darch, Carnine, and Gersten (1984) investigated an explicit instruction method for teaching fourth graders how to translate mathematics word problems involving division and multiplication into equation form. This was tested against what the authors called "basal instruction," which they describe, in part, as follows:

The teacher-guided portion of the lesson involved two components: (a) discussion designed to increase student involvement and motivation; and (b) teacher presentation of strategies to solve problems. . . .

The second key element in the teacher-guided segment was presentation of an approach for developing problem-solving strategies. This aspect appeared in each of the four state-adopted texts . . . A four-step system was presented to the students with the steps sequentially placed in a line above boxed areas for writing. The steps were: (1) place numbers here; (2) identify correct operation; (3) write and complete number sentence; and (4) place your answer here.

You'll note that although the instruction here provides a system for attacking problem solving in steps, it does not teach students how to identify the correct operation in Step 2. Carnine and Gersten's explicit instruction method, described—again, in part—below, focused almost exclusively on this step.

Students first learned to discriminate multiplication problems from addition problems. The rule students were taught was: "If you use the same number again and again, you multiply." . . .

The exercises introduced the students to the concept of "number families," and the relationships between the "big number" and two "little numbers" in each family (for multiplication and division) . . . students also learned that when the big number was not given, they multiplied the two smaller numbers to determine it. Conversely, when the big number was given, the students divided. . . .

In the final step of the teaching sequence, students learned to ask two questions to discriminate among addition, subtraction, multiplication, and division story problems: Does the story deal with the same number again and again? [if so, it's multiplication or division] . . . Does the story give the big number? [if so, it's division; if not, multiplication].
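To make the structure of that two-question strategy concrete, here is a minimal sketch of it as a decision procedure. The function name and the way a "story" gets boiled down to two yes/no flags are my own illustrative choices, not anything from the paper.

```python
# A sketch of the two-question discrimination strategy quoted above.
# Encoding a "story" as two boolean flags is illustrative only.

def pick_operation(same_number_again_and_again: bool, big_number_given: bool) -> str:
    """Return the operation the taught strategy would select."""
    if same_number_again_and_again:
        # multiplication/division family: now look for the "big number"
        return "division" if big_number_given else "multiplication"
    # otherwise it's an addition/subtraction problem; the excerpt above
    # doesn't spell that branch out, so it is left coarse here
    return "addition or subtraction"

# e.g., "3 boxes with 8 crayons in each box; how many crayons in all?"
print(pick_operation(same_number_again_and_again=True, big_number_given=False))
# -> multiplication
```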

Results

As you might have guessed while muttering something about teaching to the test like I did, students taught using the explicit method outperformed those taught with the typical "basal instruction." On a 26-item posttest containing multiplication, division, addition, and subtraction word problems, which assessed only students' "ability to write the correct computation statement," and not correct answers, students taught using the explicit strategy obtained an average score of 86.5%, while their counterparts taught with typical basal instruction scored, on average, 63.7%. Thirty-one of the 36 students in the explicit-strategy group scored at or above 80%. In the basal group, just 9 students, out of 38, did the same.

Why Do We React So Negatively to This Type of Instruction?

A typical reaction to reading the details of explicit strategies like the above involves two mistakes, I think. The easier mistake is to assume that the method is supposed to represent the full flower of children's work with understanding and solving word problems—and with problem solving in general. We are simply too used to the idea that modern pedagogies must represent their intended end result at every scale of interaction with students, and, conversely, that we must take every level of interaction with students—from activity to course—as representative of a pedagogy's intended aim. The harder mistake is to assume that you can supply the generality so sorely missing from the explicit method outlined above by making instruction less direct or less explicit. It is certainly a mistake to think that if you have taught it, students have learned it. But it's something close to light-speed stupid to assume that this implies that if you don't teach it, they will learn it.


Regardless, though, a big problem with this negativity (mine included) is that it prevents us from learning and improving our own instruction. The method presented in this paper reminds me of my days working with elementary-level texts and fact triangles. Using fact triangles side by side with operational problem solving as a kind of model is a really cool idea. I don't have to use the terms "big number" and "little numbers." And I don't have to imagine at any moment that I'm giving up and just teaching these kids how to get good scores on the end-of-year test.

What I have to face is that there is something about problem solving that I can't see when looking at my own thinking, and what I can't see that I'm doing might very well be something like "Does the story deal with the same number again and again? [if so, it's multiplication or division] Does the story give the big number? [if so, it's division; if not, multiplication]."


Reference: Darch, C., Carnine, D., & Gersten, R. (1984). Explicit Instruction in Mathematics Problem Solving. The Journal of Educational Research, 77(6), 351–359.

Conceptual Change Déjà Vu

I'm in the middle of my new reading (this guy). Results like the following, involving conceptual change, have some implications, taken together, for how we deliver education and how we conceptualize improvements in education:

In recent years, researchers have monitored scientists’ brains with fMRI as the scientists reason through two types of problems: problems that everyone (scientists and nonscientists alike) can answer correctly and problems that only the scientists can answer correctly. On the first type of problem, scientists show patterns of neural activity similar to those experienced by nonscientists, but on the second, they show more activity in areas of the brain associated with inhibition and conflict monitoring: the prefrontal cortex and the anterior cingulate cortex. Scientists can answer scientifically challenging problems—that’s the benefit of their expertise—but to do so, they must inhibit ideas that conflict with their scientific knowledge of those problems. They must inhibit latent misconceptions.

So, for instance, the great hope of conceptual change methodologies, the improvement they are after, is that Student X will be able to move around more freely and naturally within the academic disciplines (like, say, mathematics) than his or her historical counterpart. This he or she of the present or near future will, for example, understand division or negative numbers or fractions deeply and flexibly, whereas his or her counterpart had to rely on a huge rickety scaffold of mechanical, arbitrary, partly memorized rules.

But what results like the above—again, taken collectively—are saying is that this dream of fine-tuning students’ cognition so that it eventually, essentially becomes as one with expert academic thinking is never gonna happen. Or perhaps more positively: even professionals in their field do not think “naturally” about their subjects. They, too, rely on (presumably taught) artificial, mechanical, partly memorized rules for representing academic subject matter to themselves.

Tips and Tricks

To stand on the opposing side for a moment, though, I should mention that I have a hard time with the idea that conceptual change may not be possible. This essay of mine from not too long ago is soaked in conceptual change thinking, for instance. More recently, in a lesson app I developed on linear equations, I sucked it up and devised steps (one kind of cognitive scaffold) for breaking down a situation in order to write a linear equation to represent it. But I waffled about it for a long time.


The fear, of course, is that students will hungrily devour the steps while making as little contact with the math as possible. One of these students will appear on video in a psychology experiment in the future, where he will mindlessly rattle off these steps, even though the problem situation requires an exponential model, not a linear one, and the whole world will shake its head and sigh.

But if conceptual change is not a robust and realistic representation of what happens during effective learning, then I see no reason for the fear. The students using tips and tricks today will just turn into adults using tips and tricks tomorrow. They’ll just be better at it because time. (None of this means that some tricks aren’t dumb or harmful [rich debate to be had there]; it just means that they’re not dumb or harmful just because they’re “tricks”.) In fact, these future adults will become so good at their tricks and steps, they’ll forget that they are using tricks and steps—clumsy introspection will reveal to them that their understanding is rich, connected, and holistic—and then some of them will start to wonder if making kids see things the creative way they do will make the world a better place.

And then we’ll do it all over again.

Update: A nice quote from the end of the Motion chapter. There are, I think, some nice parallels between “bridging” as conceptual change and “sidestepping” as the tips-and-tricks tack:

Sidestepping and bridging are not mutually exclusive strategies. Each achieves something different yet complementary. The bridging strategy renders counterintuitive scientific ideas (say, the normal force) intuitive but does not explain those ideas, either in terms of an underlying mechanism (molecular bonds) or an overarching framework (Newton’s third law). The sidestepping strategy, on the other hand, provides a framework for explaining counterintuitive scientific ideas but does not render them intuitive. Educators might thus want to use both strategies in the same lesson, as Clement recommends they should. The world is a complex place, and acquiring an accurate understanding of that world is a complex process.


Independent Events

Honestly, sometimes my first off-the-cuff measure of how well we do collectively with a topic in mathematics is how well I do with it. This is terrible reasoning, of course, but it has some uses as a kind of first measurement that needs to be independently verified. It helps me notice potential weak spots in instruction, anyway, like with independent events.

The concept of independent events is certainly a candidate for being a weak spot. Most of what I’ve seen online doesn’t really crack the surface. Students are allowed to explore the concept of independent events, but all that seems to mean is to take a situation and tell me whether the events are independent. Big whoop.

And the definition often used—that two events are independent if the occurrence of one does not affect the probability of the other—has the potential of really confusing students (and me). Consider this situation.

[Figure: a set of four cards, two of which are letter cards (A and B) and two of which are circle cards; the B card is both a letter card and a circle card.]

Are the events “selecting a letter card” and “selecting a circle card” independent events? The way a student (and I) might reason, given just the definition above, would be to think, “Well, if I pick B, that would definitely change the probability of picking a circle, because the B-card is also a circle card. And, if I pick A, that would change the probability of picking a circle card, because then I would only have three cards to choose from. So, the events are not independent.”

Try This Instead

The above was perfectly sane reasoning; it’s just wrong because of a terrible explanation (or lack of an explanation in most cases). I thought of something maybe a little better. Here it is in sentence form, similar to how we worked on the impenetrability of trig ratios:

You have the same probability of choosing a letter card from all the cards as you have of choosing a letter card from just the circle cards.

This is what makes choosing a letter card and choosing a circle card “independent.” If I know that I’ve drawn a circle, the probability that I’ve also drawn a letter, \(\mathtt{0.5}\), is the same as if I didn’t know I’d drawn a circle. And of course this works automatically the other way around too: if I know that I’ve drawn a letter card, the probability of drawing a circle is the same as if I didn’t know. If I reduce the sample space from all cards to circle cards, the probability of “letter” is the same.

At the heart of independent events (besides the conditional probability flavoring above) are equivalent ratios, or a proportion . . . which gives me an ultra-short way of saying it a little more mathematically:

\[\mathtt{\frac{2}{4} = \frac{1}{2}}\]

Or, “2 letter cards out of 4 cards in all is the same as 1 letter card out of 2 circle cards.” In the symbolism of probability, we actually write this with complex fractions (by giving each numerator and denominator above a denominator of 4), and then disguise the complex fractions with \(\mathtt{P()}\) statements, which is all equivalent to the above proportion:

\[\mathtt{\frac{\color{purple}{\frac{2}{4}}}{\color{red}{\frac{4}{4}}} = \frac{\color{red}{\frac{1}{4}}}{\color{purple}{\frac{2}{4}}} \longrightarrow \frac{\color{purple}{P(\textrm{letter})}}{\color{red}{1}} = \frac{\color{red}{P(\textrm{letter and circle})}}{\color{purple}{P(\textrm{circle})}}}\]

And that gives us the definition of independence using conditional probability, from S-CP.A.3. If we remember our proportion work from way back when, then the “other” test for the independence of two events pops out of the equivalence of the products of means and extremes: \[\mathtt{P(\textrm{letter}) \cdot P(\textrm{circle}) = P(\textrm{letter and circle})}\]
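A tiny check of the card example bears this out numerically. The four-card setup below is reconstructed from the description above, so treat it as an illustration rather than the original figure.

```python
from fractions import Fraction as F

# The four cards, reconstructed from the description above:
# two letter cards (A, B) and two circle cards, with B being both.
cards = [
    {"letter", "circle"},   # the B card: a letter card that is also a circle card
    {"letter"},             # the A card
    {"circle"},             # a circle-only card
    set(),                  # a card that is neither
]

def p(feature):
    """Probability of drawing a card with the given feature."""
    return F(sum(feature in card for card in cards), len(cards))

p_letter, p_circle = p("letter"), p("circle")
p_both = F(sum({"letter", "circle"} <= card for card in cards), len(cards))

# Conditional form: P(letter | circle) should equal P(letter)
print(p_both / p_circle == p_letter)   # True
# Product form (means and extremes): P(letter) * P(circle) = P(letter and circle)
print(p_letter * p_circle == p_both)   # True
```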

The complex fraction part of this explanation seems to be the most important, actually. And we don’t really do a good job of letting kids in on that disguise either. But still, stapling the idea of independent events to a pair of equivalent ratios (a proportion) helps the whole idea make a lot more sense to me. And, truthfully, it makes the notion of “independence” as “not having an effect on another probability” seem almost wrong.

Update: This kind of reasoning works for the typical example of independent events. The situation involving separate spinners is fairly easy for kids to identify as being about independent events, but like a lot of other topics in mathematics education, we start off with examples that are easy and also completely misleading. Then we all opine that kids have difficulties because the material gets “harder.” Anyway, spinning a C on the first spinner and a 2 on the second spinner are independent events, but not because there are “independent” spinners.

What’s the proportion (if the events are independent) that matches the situation?

[Figure: two spinners—the first with three sections labeled A, B, and C; the second with four numbered sections, one of which is 2.]

Assuming we spin the first spinner and don’t know what we get, there are \(\mathtt{3 \times 1}\), or 3, outcomes that have 2 as the second spin, out of 12 possible outcomes. The outcomes are {(A, 2), (B, 2), (C, 2)}. But, if we know that we have spun a C first, then there is 1 outcome showing 2 on the second spinner, out of 4 possible outcomes. So, our proportion is \[\mathtt{\frac{3}{12} = \frac{1}{4}}\]

This is all we need to show that the two events are independent, actually. If that proportion is true, then the events are independent. But we can cue the complex fraction magic again for reinforcement: \[\mathtt{\frac{\color{purple}{\frac{3}{12}}}{\color{red}{\frac{12}{12}}} = \frac{\color{red}{\frac{1}{12}}}{\color{purple}{\frac{4}{12}}} \longrightarrow \frac{\color{purple}{P(2)}}{\color{red}{1}} = \frac{\color{red}{P(\textrm{C and 2})}}{\color{purple}{P(\textrm{C})}}}\]
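The same check runs on the spinner example by enumerating the twelve outcomes. I'm assuming here that the second spinner's four sections are the numbers 1 through 4; only the section showing 2 actually matters for the argument.

```python
from fractions import Fraction as F
from itertools import product

# The 3 x 4 = 12 equally likely outcomes of the two spins
outcomes = list(product(["A", "B", "C"], [1, 2, 3, 4]))

p_2       = F(sum(second == 2 for _, second in outcomes), len(outcomes))    # 3/12
p_C       = F(sum(first == "C" for first, _ in outcomes), len(outcomes))    # 4/12
p_C_and_2 = F(sum((f, s) == ("C", 2) for f, s in outcomes), len(outcomes))  # 1/12

print(p_C_and_2 / p_C == p_2)   # True: P(2 | C) = P(2), so the events are independent
print(p_2 * p_C == p_C_and_2)   # True: the product form checks out as well
```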


Living in a World Full of Answers


I’m really enjoying Ulrich Boser’s new book Learn Better. It is nicely balanced, humble, serious, and informative, with a good tempo and a somewhat uncanny ability to make me spin to the side in my office chair every once in a while to rest my chin in a finger tent.

In particular, a section on the theme of embracing difficulties in learning got my chair spinning and fingers tenting. Here’s just one snippet:

The practical takeaway here is pretty simple. We need to believe in struggle. We need to know that learning is difficult. What’s more, we need the people around us to believe, too. . . .

This idea is at the heart of Lisa Son’s approach. She’s building norms around the nature of effort, the essence of struggle, the path to expertise. As Son told me, laughing, “I think I overdid it, but if someone gives my kid the answer, she’ll kill you.”

What I would add to this, though, is that what typifies “struggle” for students in adulthood is not figuring things out for themselves when no one will give them the answers; it’s figuring things out for themselves in a world awash with answers. Students need to be able to deal with answers—from experts and from their peers—while keeping the lights of critical thinking on upstairs. That’s the struggle. And you don’t get practice with that struggle when you spend a lot of your schooling time stuck in a goobery game of hint-hint hide-and-seek with answers.

Students need practice dealing with “answers” from other people—from people of different races and religions, or no religion; from people whom you don’t like or who don’t like you; and from people who are more expert or less expert than you. And they need to be able to figure out that some of those answers are correct, and some of them are not, and other times there is no solid answer even when everyone else is convinced there is, or there is a solid answer when everyone else is convinced there isn’t. Students need practice listening to other people and understanding what they are saying, without feeling that their identity or cognitive liberty has been threatened.

I have to think that, while withholding answers is a good technique to use occasionally (and deliberately and skillfully), as a strategy it can run the risk of producing a generation of narcissistic idiots who close their ears to “answers” they themselves didn’t come up with.


Postscript: The subject of embracing difficulties comes up a little later in the book as well:

As a learning tool, DragonBox does not seem to teach students all that much, though, and people who play the game don’t do any better at solving algebraic equations, according to one recent study. Researcher Robert Goldstone recently examined the software, and he told me that the app didn’t appear to provide any more grounding in algebra than “tuning guitars.”

In the bluntest of terms, there’s simply no such thing as effortless learning. To develop a skill, we’re going to be uncomfortable, strained, often feeling a little embattled. Just about every major expert in the field of the learning sciences agrees on this point. Psychologist Daniel Willingham writes that students often struggle because thinking is difficult.

It’s true that thinking is hard work, actually, but it’s also true that learning is difficult in part because what you are learning, whatever it is, was created by people who think differently from you (at the moment, if you’re a novice). And school—when it is not mostly a game of “guess what’s in my head”—is one of the first places young people are exposed to this kind of thinking.

Trig Ratios as Percents

My audience is mostly folks interested in math education in one way or another, so it’s no use starting this post off with “All you may know about trigonometry ratios is likely captured in the gibberish mnemonic SOHCAHTOA.” Your understanding of trigonometry ratios is no doubt more sophisticated than that.

But have you thought about trig ratios as percents? This will be enough for most of you:

sin θ = \(\mathtt{\frac{opposite}{hypotenuse} = \frac{?}{100}}\) = percent of hypotenuse length

It makes sense when you dredge up the 6th-grade math you remember and start making connections between it and the trigonometric ratios sine, cosine, and tangent (for example). After all, opposite : hypotenuse is the sine ratio, but it’s also just a ratio. If we think of it as a percent, we could say that if the sine of a reference angle is equal to 0.75, that means that the side opposite the angle in a right triangle is 75% the length of the hypotenuse. If the cosine were 0.75, that would mean that the side adjacent to the reference angle is 75% the length of the hypotenuse, since cosine is the ratio adjacent : hypotenuse. And a tangent of 0.75 means that the opposite side is 75% the length of the adjacent side, because tangent is simply the ratio opposite : adjacent.

The percent connection (or fraction; doesn’t have to be percent) strikes me as being immediately more useful for seeing meaning in values for trigonometric ratios. They usually go by students as just values which can’t be put into a sentence—a long list of changing decimals in a lookup table. Yet, the percent connection is right there, waiting for us to combine our middle school math knowledge with new material. We could model what this process of meaning-making actually looks like, rather than just ask them to go make meaning and hope for the best.

Of course, it also helps to be able to visualize what a sine of 0.75 looks like. Try, say, \(\mathtt{49^\circ}\) below on the unit circle and press Enter. That gives me something that looks pretty close to a sine of 0.75 (an opposite side that is \(\mathtt{\frac{3}{4}}\) the length of the hypotenuse, right?).

[Interactive: enter an angle θ on a unit-circle diagram showing the cos–sin–1, 1–tan–sec, and cot–1–csc triangles.]

But the interactive tool, while helpful maybe, isn’t necessary, I don’t think. One can think about drawing a right triangle, say, with an adjacent side length about 80% of the hypotenuse length (a cosine of about 0.8). It will have to be longer than it is tall, relative to the reference angle, to make that work. The percent connection thus links a trigonometry ratio value to a simple and accessible visual.
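For what it's worth, a quick numerical check of those two eyeball estimates, using nothing fancier than Python's math module:

```python
import math

# sin(49 degrees): how close is it to "opposite is 75% of the hypotenuse"?
print(round(math.sin(math.radians(49)), 4))    # 0.7547

# For a cosine of about 0.8, what reference angle would the sketch need?
print(round(math.degrees(math.acos(0.8)), 1))  # 36.9 (degrees)
```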

An Example Problem: Testing Out the Percent Connection

[Figure: a right triangle with a 41° reference angle, an opposite side of length 96, and a hypotenuse labeled x.]

The basic mathematical (as opposed to contextual) trigonometry practice problem looks like this: Determine the length of \(\mathtt{x}\).

I can’t say the percent connection makes this a faster or more efficient process. What I would say is that knowing that the sine of 41° means the percent of the hypotenuse length represented by the opposite side length makes me feel like I know what I’m doing, other than moving numbers and symbols around. (Thinking about percents also gives us a way to estimate what x will be, if I know that the figure is drawn to scale.)

The sine of 41° is approximately 0.65605902899. With the percent connection, I know that this means that the opposite length is about 65.61% the length of the hypotenuse. It’s hard to overstate, I think, how useful it is to be able to wrap all of this number-and-variable work into one sentence like this: 96 is about 65.61% of x. I can climb the last few steps from there, by either dividing or setting up an equation—however the work happens, I at least have some background meaning to the numbers I’m playing with.
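Climbing those last few steps explicitly, with the sentence wrapped back into symbols: \[\mathtt{\sin 41^\circ = \frac{96}{x} \longrightarrow x = \frac{96}{\sin 41^\circ} \approx \frac{96}{0.6561} \approx 146.3}\]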

We can continue from there, of course (as we can without the percent connection, but so rarely do because the tedium of setting up and solving for the variable has overloaded us). The tangent of 41°, approximately 0.86928673781, tells us that the opposite side is about 86.93% the length of the adjacent side.

This guy gets it, and he seems to be the only one. It shouldn’t come as any surprise that he’s not an experienced mathematics teacher but a computer scientist who’s never taught. But, you know, it really should surprise us. Someday.



Motivation Is Caused by Achievement


It doesn’t seem right that doing well in math should cause students to have more intrinsic motivation and not the other way around. But this is just what child development researchers found recently, published here at the beginning of last year. In a large sample of students in Grades 1 to 4, the paper’s authors discovered that

achievement predicted intrinsic motivation from Grades 1 to 2, and from Grades 2 to 4. However, intrinsic motivation did not predict achievement at any time.

One reason this may seem incorrect even though it may be correct is that the way we talk about—and thus think about—causality in human affairs evolved long before we were a species that conducted experiments on people. Each of us inherits a language developed by a predominantly dualist, animist, and creationist culture, which spoke about minds, separate from the natural world, that effect change on that world, not the other way around:

It is like quantum physics; we may intellectually grasp it, but it will never feel right to us. When we see a complex structure [like mathematics achievement—JF], we see it as the product of beliefs and goals and desires. Our social mode of understanding leaves it difficult for us to make sense of it any other way. Our gut feeling is that design requires a designer.

[Figure: diagram of the significant cross-grade paths from achievement to intrinsic motivation found in the study.]

Thus it seems backwards to us to suggest that the internal state of a designer (e.g., his ‘motivation’) should have no significant effect on his design (e.g., his mathematical performance). And it seems truly bizarre that the opposite, in reality, is the case. But, again, that is what the results reported here suggest.

The diagram shows the significant cross-grade relationships unearthed in the study. There was a significant path from achievement to motivation from Grade 1 to Grade 2 and from Grade 2 to Grade 4. There was no similar path from motivation to achievement across grades.

Don’t Stop. Believin’.

There are many caveats, as there are with any study. You can take a look yourself at the final manuscript available online. Motivation was self-reported. Achievement was measured using two standardized assessments. The whole study was an exercise in data mining. Etc. It is worth taking a look, too, at the authors’ discussion of previous research addressing similar questions and the weaknesses of those studies.

One thing I find interesting is this part of the authors’ conclusion, under the heading of “Implications for educational practice”:

Interventions in education try to increase intrinsic motivation, and hopefully achievement through promoting students [sic] autonomy in instructional setting [sic] (e.g., opportunity to select work partners and assignment tasks; Koller et al., 2001). The present findings could mean that these practices may not be the best approach in the early school years (Cordova & Lepper, 1996; Wigfield & Wentzel, 2007).

That’s it as far as implications, which seems a bit thin. Not even the vanilla suggestion that interventions designed to increase achievement may be better uses of time than those designed to increase motivation? Because that’s the real implication here.



Garon-Carrier, G., Boivin, M., Guay, F., Kovas, Y., Dionne, G., Lemelin, J., Séguin, J., Vitaro, F., & Tremblay, R. (2016). Intrinsic Motivation and Achievement in Mathematics in Elementary School: A Longitudinal Investigation of Their Association. Child Development, 87(1), 165–175. DOI: 10.1111/cdev.12458

Educational Achievement and Religiosity



I outlined a somewhat speculative argument that would support a prediction that increased religiosity at the social level should have a negative effect on educational achievement here, where I suggested that

Educators surrounded by cultures with higher religiosity—and regardless of their own personal religious orientations—will simply have greater exposure to concerns about moral and spiritual harm that can be wrought by science, in addition to the benefits it can bring.

Such weakened confidence in science may not only directly water down the content of instruction in both science and mathematics—by, for example, diluting science content antagonistic to religious beliefs in published standards and curriculum guides—but could also represent an environment in which it is seen as inartful or even taboo for educators of any stripe to lean on scientific findings and perspectives in order to improve educational outcomes (because nurturing children may be seen to be the province of more spiritual and less scientific approaches). Both of these effects, one social, one policy-level, could have a negative effect on achievement.

A new paper, coauthored by renowned evolutionary psychologist David Geary, shows that religiosity at a national level does indeed have a strong negative correlation with achievement (r = –0.72, p < 0.001). Yet, Stoet and Geary’s research suggests a different, simpler mechanism at work than the mechanisms I suggested above to explain the connection between religiosity and math and science educational achievement. This mechanism is displacement.

The Displacement Hypothesis

It’s a bit much to give this hypothesis its own section heading—not that it isn’t important, necessarily. It’s just self-explanatory. Religiosity may be correlated with lower educational achievement because people have a finite amount of time and attention, and spending time learning about religion or engaging in religious activities necessarily takes time away from learning math and science.

It is not necessarily the content of the religious beliefs that might influence educational growth (or lack thereof), but that investment of intellectual abilities that support educational development are displaced by other (religious) activities (displacement hypothesis). This follows from Cattell’s (1987) investment theory, with investment shifting from secular education to religious materials rather than shifts from one secular domain (e.g., mathematics) to another (e.g., literature). This hypothesis might help to explain part of the variation in educational performance broadly (i.e., across academic domains), not just in science literacy.

One reason the displacement hypothesis makes sense is that religiosity is as powerfully negatively correlated with achievement in mathematics as it is with science achievement.

The Scattering Hypothesis

But certainly a drawback of the displacement hypothesis is that there are activities we engage in—as unrelated to mathematics and science as religion is—which don’t, as far as we know, correlate strongly negatively with achievement. Physical exercise, for goodness’ sake, is one example of such an activity. Perhaps there is something especially toxic about religiosity as the displacer which deserves our attention.

Maybe religiosity (or, better, a perspective which allows for supernatural explanations or, indeed, unexplainable phenomena) has a diluent or scattering effect on learning. If so, here are two analogies for how that might work:

  • Consider object permanence. Prior to developing the understanding that objects continue to exist once they are out of view, children will almost immediately lose interest in an object that is deliberately hidden from them, even if they were attending to it just moments earlier. Why? Because it is possible (to them) that the object has vanished from existence when you move it out of their view. If it were possible for a 4-month-old to crawl up and look behind the sofa to see that grandma had actually disappeared during a game of peek-a-boo, they would have nothing to wonder about. The disappearance was possible, so why shouldn’t it happen? This possibility is gone once you develop object permanence.
  • Perhaps more relevant, not to mention ominous: climate change. It is well known that religiosity and acceptance of the theory of evolution are negatively correlated. And it turns out there is a strong positive link between evolution denialism and climate-change denialism. How might religiosity support both of these denialisms? Here we can benefit from substituting for ‘religiosity’ some degree of subscription to supernatural explanations: If the universe was made by a deity for us, then how can we be intruders in it, and how could we—by means that do not transgress the laws of this deity—defile it? This seems a perfectly reasonable use of logic, once you have allowed for the possibility of an omniscient benevolence who gifted your species the entire planet you live on.

The two of these together seem pretty bizarre. But I’m sure you catch the drift. In each case, I would argue that the constriction of possibilities—to those supported by naturalistic explanations rather than supernatural ones—is actually a good thing. You are less likely to be prodded to explain how the natural world works when supernatural reasons are perfectly acceptable. And supernaturalism can prevent you from fully appreciating your own existence and the effects it has on the natural world. Under supernaturalism, you can still engage in logical arguments and intellectual activity. You can write books and go to seminars. Your neurons could be firing. But if you’re not thinking about reality, it doesn’t do you any good.

Religiosity or supernaturalism does not make you dumb. But perhaps it has the broader effect of making it more difficult to fasten minds onto reality, as it fills the solution space with possibilities that have little bearing on the real world we live in. This would certainly show up in measures of educational achievement.


Stoet, G., & Geary, D. (2017). Students in countries with higher levels of religiosity perform lower in science and mathematics. Intelligence. DOI: 10.1016/j.intell.2017.03.001

Expert Knowledge: Birds and Worms


Pay attention to your thought process and how you use expert knowledge as you answer the question below. How do you think very young students would think about it?

Here are some birds and here are some worms. How many more birds than worms are there?

Hudson (1983) found that, among a small group of first-grade children (mean age of 7.0), just 64% completed this type of task correctly. However, when the task was rephrased as follows, all of the students answered correctly.

Here are some birds and here are some worms. Suppose the birds all race over, and each one tries to get a worm. Will every bird get a worm? How many birds won’t get a worm?

This is consistent with adults’ intuitions about the two tasks as well. Members of the G+ mathematics education community were polled on the two birds-and-worms tasks recently, and, as of today, 69% predicted that more students would answer the second one correctly.

Interpret the Results

Still, what can we say about these results? Is it the case that 100% of the students used “their knowledge of correspondence to determine exact numerical differences between disjoint sets”? That is how Hudson describes students’ unanimous success in the second task. The idea seems to be that the knowledge exists; it’s just that a certain magical turn of phrase unlocks and releases this otherwise submerged expertise.

But that expert knowledge is given in the second task: “each one tries to get a worm.” The question paints the picture of one-to-one correspondence, and gives away the procedure to use to determine the difference. So, “their knowledge” is a bit of a stretch, and “used their knowledge” is even more of a stretch, since the task not only sets up a structure but animates its moving parts as well (“suppose the birds all race over”).

Further, questions about whether or not students are using knowledge they possess raise questions about whether or not students are, in fact, determining “exact numerical differences between disjoint sets.” On the contrary, it can be argued that students are simply watching almost all of a movie in their heads (a mental simulation)—a movie for which we have provided the screenplay—and then telling us how it ends (spoiler: 2 birds don’t get a worm). The deeper equivalence between the solution “2” and the response “2” to the question “How many birds won’t get a worm?” is evident only to a knowledgeable onlooker.

Experiment 3

Hudson anticipates some of the skepticism on display above when he introduces the third and last experiment in the series.

It might be argued, success in the Won’t Get task does not require a deep level of mathematical understanding; the children could have obtained the exact numerical differences by mimicking by rote the actions described by the problem context . . . In order to determine more fully the level of children’s understanding of correspondences and numerical differences, a third experiment was carried out that permitted a detailed analysis of children’s strategies for establishing correspondences between disjoint sets.

The wording in the Numerical Differences task of this third experiment, however, did not change. The “won’t get” locutions were still used. Yet, in this experiment, when paying attention to students’ strategies, Hudson observed that most children did not mentally simulate in the way directly suggested by the wording (pairing up the items in a one-to-one correspondence).

This does not defeat the complaint above, though. The fact that a text does not effectively compel the use of a procedure does not mean that it is not the primary influence on correct answers. It still seems more likely than not that participants who failed the “how many more” task simply didn’t have stable, abstract, transferable notions about mathematical difference. And the reformulation represented by the “won’t get” task influenced students to provide a response that was correct.

But this was a correct response to a different question. As adults with expert knowledge, we see the logical and mathematical similarities between the “how many more” and “won’t get” situations, and, thus we are easily fooled into believing that applying skills and knowledge in one task is equivalent to doing so in the other.




Hudson, T. (1983). Correspondences and Numerical Differences between Disjoint Sets. Child Development, 54(1). DOI: 10.2307/1129864



Modulus and Hidden Symmetries


A really nice research paper, titled The Hidden Symmetries of the Multiplication Table, was posted over in the Math Ed Community yesterday. The key ideas in the article center around (a) the standard multiplication table—with a row of numbers at the top, a column of numbers down the left, and the products of those numbers in the body of the table, and (b) modulus. In particular, what patterns emerge in the standard multiplication table when products are colored by equivalence to \(\mathtt{n \bmod k}\) as \(\mathtt{k}\) is varied?

The little interactive tool below shows a large multiplication table (you can figure out the dimensions), which starts by coloring those products which are equivalent to \(\mathtt{0 \bmod 12}\), meaning those products which, when divided by 12, give a remainder of zero (in other words, multiples of 12).

[Interactive: a large multiplication table with products equivalent to \(\mathtt{n \bmod k}\) highlighted, with boxes for entering \(\mathtt{k}\) and \(\mathtt{n}\).]

When you vary \(\mathtt{k}\), you can see some other pretty cool patterns (broken up occasionally by the boring patterns produced by primes). Observing the patterns produced by varying the remainder, \(\mathtt{n}\), is left as an exercise for the reader (and me).

Incidentally, I’ve wired up the “u” and “d” keys, for “up” and “down.” Just click in one of the boxes and press the “u” or “d” key to vary \(\mathtt{k}\) or \(\mathtt{n}\) without having to retype and press Return every time. And definitely go look at the paper linked above. They’ve got some other beautiful images and interesting questions.
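If you want to poke at the same patterns without the interactive tool, a minimal sketch in Python (printing a character grid rather than coloring cells) might look like this. The table size and the default values of \(\mathtt{k}\) and \(\mathtt{n}\) are just illustrative.

```python
# Print an n_size-by-n_size multiplication table, marking products p with p % k == r.

def mod_pattern(n_size=30, k=12, r=0, mark="#", blank="."):
    for row in range(1, n_size + 1):
        line = "".join(mark if (row * col) % k == r else blank
                       for col in range(1, n_size + 1))
        print(line)

mod_pattern()           # multiples of 12 light up in the 30 x 30 table
mod_pattern(k=7, r=3)   # vary k and the remainder r to see other patterns
```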




Barka, Z. (2017). The Hidden Symmetries of the Multiplication Table. Journal of Humanistic Mathematics, 7(1), 189–203. DOI: 10.5642/jhummath.201701.15

Religiosity and Confidence in Science



In response to a question posed on Twitter recently asking why people from the U.K. seemed to show a great deal more interest in applying cognitive science to education than their U.S. counterparts, I suggested, linking to this article, that the differences in the religiosity of the two countries might play a role.

Princeton economist Roland Bénabou led a study, for instance, which found that religiosity and scientific innovation were negatively correlated. Across the world, regions with higher levels of religiosity also had lower levels of scientific and technical innovation—a finding which held even when controlling for income, population, and education. Bénabou commented in this article:

Much comes down to the political power of the religious population in a given location. If it is large enough, it can wield its strength to block new insights. “Disruptive new ideas and practices emanating from science, technical progress or social change are then met with greater resistance and diffuse more slowly,” comments Bénabou, citing everything from attempts to control science textbook content to efforts to cut public funding of certain kinds of research (for instance involving embryonic stem cells or cloned human embryos). In secular places, by contrast, “discoveries and innovations occur faster, and some of this new knowledge inevitably erodes beliefs in any fixed dogma.”

[Figure: the negative correlation between religiosity and innovation across U.S. states.]

The study’s analysis also includes a comparison of U.S. states, which showed a similar negative correlation, as shown above.

Importantly, this kind of analysis has nothing to say about the effects of one’s personal religious beliefs on one’s innovativeness or acceptance of science. This song is not about you. It is a sociological analysis which suggests that the religiosity of the culture one finds oneself in (regardless of income and education levels) can have an effect on one’s exposure to scientific innovation.

Religiosity can have this effect at the political and cultural levels while simultaneously having a quite different effect (or no similar effect) at the personal level.

But About That Personal Level

Perhaps more apropos of the original question, researchers have found that individual religiosity is not significantly correlated with interest in science, nor with knowledge of science—but it is significantly negatively correlated with one’s confidence in scientific findings.

More religious individuals report the same interest levels and knowledge of science as less religious people, but they report significantly lower levels of confidence in science. This means that their lack of confidence is not a product of interest or ignorance but represents some unique uneasiness with science. . . .

Going a little further, the researchers provide this quote in the conclusion, which is as perfect an echo of educators’ qualms with education research (that I’ve heard) as can likely be found in literature discussing a completely different topic (emphases mine):

Religious individuals may be fully aware of the potential for material and physical gains through biotechnology, neuroscience, and other scientific advancements. Despite their knowledge of and interest in this potential, they may also hold deep reservations about the moral and spiritual costs involved . . . Religious individuals may interpret [questions about future harms and benefits from science] as involving spiritual and moral harms and benefits. Concerns about these harms and gains are probably moderated by a perception, not entirely unfounded given the relatively secular nature of many in the academic scientific community (Ecklund and Scheitle 2007; Ecklund 2010), that the scientific community does not share the same religious values and therefore may not approach issues such as biotechnology in the same manner as a religious respondent.

It may be, then, that educators surrounded by cultures with higher religiosity—and regardless of their own personal religious orientations—will simply have greater exposure to concerns about moral and spiritual harm that can be wrought by science, in addition to the benefits it can bring. Consistent with my own thinking about the subject, these concerns would be amplified in situations, like education, where science looks to produce effects on human behavior and cognition, especially children’s behavior and cognition.


Johnson, D., Scheitle, C., & Ecklund, E. (2015). Individual Religiosity and Orientation towards Science: Reformulating Relationships. Sociological Science, 2, 106–124. DOI: 10.15195/v2.a7