Intuition and Domain Knowledge

Can you guess what the graphs below show? I’ll give you a couple of hints: (1) each graph measures performance on a different task, (2) one pair of bars in each graph—left or right—represents participants who used their intuition on the task, while the other pair of bars represents folks who used an analytical approach, and (3) one shading represents participants with low domain knowledge while the other represents participants with high domain knowledge (related to the actual task).

[Image: two unlabeled bar graphs, one per task]

It will actually help you to take a moment and go ahead and guess how you would assign those labels, given the little information I have provided. Is the left pair of bars in each graph the “intuitive approach” or the “analytical approach”? Are the darker shaded bars in each graph “high knowledge” participants or “low knowledge” participants?

When Can I Trust My Gut?

A 2012 study by Dane et al., published in the journal Organizational Behavior and Human Decision Processes, sets out to address the “scarcity of empirical research spotlighting the circumstances in which intuitive decision making is effective relative to analytical decision making.”

To do this, the researchers conducted two experiments, both employing “non-decomposable” tasks—i.e., tasks that cannot be broken down neatly into explicit criteria and thus lend themselves to intuitive decision making. The first task was to rate the difficulty (from 1 to 10) of each of a series of recorded basketball shots. The second task involved deciding whether each of a series of designer handbags was fake or authentic.

Why these tasks? A few snippets from the article can help to answer that question:

Following Dane and Pratt (2007, p. 40), we view intuitions as “affectively-charged judgments that arise through rapid, nonconscious, and holistic associations.” That is, the process of intuition, like nonconscious processing more generally, proceeds rapidly, holistically, and associatively (Betsch, 2008; Betsch & Glöckner, 2010; Sinclair, 2010). [Footnote: “This conceptualization of intuition does not imply that the process giving rise to intuition is without structure or method. Indeed, as with analytical thinking, intuitive thinking may operate based on certain rules and principles (see Kruglanski & Gigerenzer, 2011 for further discussion). In the case of intuition, these rules operate largely automatically and outside conscious awareness.”]

As scholars have posited, analytical decision making involves basing decisions on a process in which individuals consciously attend to and manipulate symbolically encoded rules systematically and sequentially (Alter, Oppenheimer, Epley, & Eyre, 2007).

We viewed [the basketball] task as relatively non-decomposable because, to our knowledge, there is no universally accepted decision rule or procedure available to systematically break down and objectively weight the various elements of what makes a given shot difficult or easy.

We viewed [the handbag] task as relatively non-decomposable for two reasons. First, although there are certain features or clues participants could attend to (e.g., the stitching or the style of the handbags), there is not necessarily a single, definitive procedure available to approach this task . . . Second, because participants were not allowed to touch any of the handbags, they could not physically search for what they might believe to be give-away features of a real or fake handbag (e.g., certain tags or patterns inside the handbag).

Results

[Image: results graphs for the two experiments]

As you can see in the graphs at the right (high expertise in gray), there was a statistically significant difference on both tasks between low- and high-knowledge participants when those participants approached the task using their intuition. In contrast, high- and low-knowledge subjects in the analysis condition in each experiment did not show a significant difference in performance. (The decline in performance of the high-knowledge participants from the Intuition to the Analysis conditions was significant only in the handbag experiment.)

It is important to note that subjects in the analysis conditions (i.e., those who approached each task systematically) were not told what factors to look for in carrying out their analyses. For the basketball task, the researchers simply “instructed these participants to develop a list of factors that would determine the difficulty of a basketball shot and told them to base their decisions on the factors they listed.” For the handbag task, “participants in the analysis condition were given 2 min to list the features they would look for to determine whether a given handbag is real or fake and were told to base their decisions on these factors.”

Also consistent across both experiments was the fact that low-knowledge subjects performed better when approaching the tasks systematically than when using their intuition. For high-knowledge subjects, the results were the opposite. They performed better using their intuition than using a systematic analysis (even though the ‘system’ part of ‘systematic’ here was their own system!).

In addition, while the interaction between approach and domain knowledge was significant, the approach (intuition or analysis) by itself did not have a significant effect on performance one way or the other in either experiment. Domain knowledge, on the other hand, did have a significant effect by itself in the basketball experiment.
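To make the “effect by itself” versus “combined effect” distinction concrete, here is a minimal sketch in Python of the kind of 2 × 2 factorial analysis involved. The data, the group means, and the column names (`approach`, `knowledge`, `score`) are all invented for illustration, not taken from the paper; the means are merely chosen to echo the pattern described above, where knowledge matters a lot under intuition and little under analysis.

```python
# Minimal sketch of a 2x2 factorial (approach x knowledge) analysis.
# All data here are made up for illustration; they are not the study's.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
hypothetical_means = {
    ("intuition", "low"): 40, ("intuition", "high"): 60,  # big knowledge gap
    ("analysis", "low"): 50, ("analysis", "high"): 52,    # small knowledge gap
}
rows = [
    {"approach": a, "knowledge": k, "score": s}
    for (a, k), m in hypothetical_means.items()
    for s in rng.normal(m, 8, size=20)  # 20 simulated participants per cell
]

model = smf.ols("score ~ C(approach) * C(knowledge)", data=pd.DataFrame(rows)).fit()
# The C(approach):C(knowledge) row of the table is the interaction.
print(sm.stats.anova_lm(model, typ=2))
```

In output like this, a significant interaction row sitting alongside a non-significant approach row is exactly the “approach matters only in combination with knowledge” pattern the study reports.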

Any Takeaways for K–12?

The clearest takeaway for me is that while knowledge and process are both important, knowledge is more important. Even though each of the tasks was more “intuitive” (non-decomposable) than analytical in nature, and even when the approach taken to the task was “intuitive,” knowledge trumped process. Process had no significant effect by itself. Knowing stuff is good.

Second, the results of this study are very much in line with what is called the ‘expertise reversal effect’:

Low-knowledge learners lack schema-based knowledge in the target domain and so this guidance comes from instructional supports, which help reduce the cognitive load associated with novel tasks. If the instruction fails to provide guidance, low-knowledge learners often resort to inefficient problem-solving strategies that overwhelm working memory and increase cognitive load. Thus, low-knowledge learners benefit more from well-guided instruction than from reduced guidance.

In contrast, higher-knowledge learners enter the situation with schema-based knowledge, which provides internal guidance. If additional instructional guidance is provided it can result in the processing of redundant information and increased cognitive load.

Finally, one wonders just who it is we are thinking about more when we complain, especially in math education, that overly systematized knowledge is ruining the creativity and motivation of our students. Are we primarily hearing the complaints of the 20%—who barely even need school—or those of the children who really need the knowledge we have, who need us to teach them?


Image mask credit: Greg Williams

Dane, E., Rockmann, K., & Pratt, M. (2012). When should I trust my gut? Linking domain expertise to intuitive decision-making effectiveness. Organizational Behavior and Human Decision Processes, 119(2), 187–194. DOI: 10.1016/j.obhdp.2012.07.009

Spatial Reasoning and Pointy Things

[Image: the two spatial reasoning tasks described below]

Try out this spatial reasoning task. The top image at the right shows a 2-dimensional black-and-white representation of a solid figure—the ‘stimulus’—and then 4 ‘targets’: in this case, two solid figures that you can pick up and turn around and investigate and two flat shapes on cards that you can pick up and turn around as well.

You are given these instructions: “Sometimes you can find a solid that matches the shape on the card, and sometimes you can find shapes that match parts of the solid shape. Also, sometimes this shape may be tall and skinny or short and fat. Can you find all of the shapes in front of you that match the image, or are parts of the shape in the image?”

The bottom image shows another version of this task. In this case, the stimulus is not a drawing of a solid figure, but a drawing of a 2D figure. And the targets are different. And of course the directions are different: “Here is an image of a plane shape. Plane shapes sometimes get put together to make solid shapes. There can be more than one shape that has this shape in it. Also, sometimes this shape may be tall and skinny or short and fat. Can you find all of the shapes in front of you that match the image?” Other than that, this spatial reasoning task is the same as the first one. Which targets match the stimulus, which don’t, and why?

How do you think first graders (between 6 and 7 years old) would do on spatial reasoning tasks like these?

Let’s Test It—Or Let Other People Test It, Rather

An interesting recent study, just published in ZDM: The International Journal on Mathematics Education by David Hallowell, Yukari Okamoto, Laura Romo, and Jonna La Joy, analyzed the performance and spatial reasoning of a small group of first graders on 8 visuo-spatial tasks just like the ones illustrated above—four with drawings of 2D figures as stimuli (triangle, square, circle, non-square rectangle) and four with drawings of solid figures as stimuli (rectangle-based pyramid, cylinder, cube, and non-cubic rectangular prism). Students were asked to identify which of the target items were a match to the stimulus item (3 matches and 1 distractor in each task).

The video, kindly provided by the researchers, shows a small sample of students completing these tasks along with some of their reasoning about their answers. When students failed to identify that a correct target was a match (e.g., failed to identify the circle target as a match to the cylinder stimulus), it was noted as an ‘exclusion error,’ and when students incorrectly identified a target as a match (e.g., identified the cone target as a match to the pyramid stimulus), it was noted as an ‘inclusion error.’
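The two error types amount to simple set differences between the correct matches and a student’s selections. Here is a tiny sketch, in my own notation with hypothetical shape names, not the researchers’ materials or code:

```python
# Exclusion vs. inclusion coding for one task, as set differences.
# Shape names below are hypothetical stand-ins for the actual targets.
def code_trial(correct_matches: set, student_choices: set) -> dict:
    return {
        "exclusion_errors": correct_matches - student_choices,  # matches left out
        "inclusion_errors": student_choices - correct_matches,  # non-matches included
    }

# A made-up pyramid-stimulus trial: 3 matches plus 1 distractor (the cone).
print(code_trial(
    correct_matches={"solid pyramid", "square card", "triangle card"},
    student_choices={"solid pyramid", "triangle card", "cone"},
))
# -> {'exclusion_errors': {'square card'}, 'inclusion_errors': {'cone'}}
```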

Some Key Results and Discussion

Most of the errors students made were exclusion errors (8 out of 11 error categories). That is, students left out a shape matching a stimulus much more often than they incorrectly included a shape that didn’t match. This result suggests that even young students’ perceptions of geometric shapes and their possible transformations are already somewhat fixed and stable. Together with the finding that most students’ explanations were of the no-response or “I don’t know” variety, we see a picture emerging, consistent with previous research:

This study found a similar trend in children’s ability to explain their reasoning for classifying shapes as found in prior research (e.g., Clements et al., 1999); namely, the most common explanations given by young children on such tasks amount to “I don’t know.” Young children are not confident in what they ought to be looking for when making shape-class judgments, or else they are overconfident in highly salient features like points or large scaling differences between class examples.

Certainly one of the most interesting interpretations of the results, though, involves the salience of ‘pointiness’ as an influence on students’ spatial reasoning. The authors note that 100% of students made the exclusion error of not matching the plane rectangle stimulus with the triangular prism target—seemingly overlooking 3 of its 5 faces:

Children quite often over-interpreted the significance of the points associated with triangles and triangular faces. Given the static-intrinsic characteristics of the triangular-prism manipulative, it is remarkable that no children were able to match the plane-rectangle stimulus and a rectangular face of the triangular prism. . . . This tendency seems especially open to intervention. Teachers might consider taking some time to discuss the common error when students are working with early geometry and other spatial reasoning experiences.

[Image: the 2013 STAAR item discussed below]

I noted here a possible misconception involving ‘pointiness’ on the 2013 STAAR test administered to Texas students. On the item shown at right, 66% of third-grade students chose the correct answer (A), but 30% of students were led astray by Choice B, which would be the correct answer if ‘edges’ referred to ‘points’ (i.e., vertices) instead of to the segments connecting those vertices.

The salience of ‘pointiness’ is intriguing, too, when one considers that a sensitivity to the pointiness of objects in one’s environment could have led to a survival advantage among our ancestors, and thus have a genetic foundation. It’s probably best, though, to be very suspicious of and careful with such hyper-adaptationism.

Finally, a result that stands out for me is what I tagged on my initial read-through of the article as ‘base occlusion’ errors—just to feel smart (also, if you’re looking for a band name, there ya go). In tasks involving, for example, the pyramid stimulus (2D black-and-white representation of a pyramid), 77.8% of students failed to match the plane square (the shape of the base of the pyramid). Similarly, the circle was excluded as a match to the cylinder stimulus by 44.4% of students. A reasonable explanation would be that because the bases of these figures are distorted in the 2D representations, students could not identify their base-matching shapes.

However, when the circle was the stimulus, over 40% of students excluded the cone target as a match, even though this figure was always oriented on its side with its base facing the student, and students were allowed to manipulate the target objects with their hands. A similar exclusion occurred with the square stimulus and pyramid target, with a third of students failing to match the two, even though, again, the pyramid was oriented in such a way as to reveal the shape of its base.

The researchers’ observations about this tie it again to ideas of fixedness and pointiness—among others:

Many children immediately grasped the apex of the pyramid, setting it down on the square base and proceeding to err on the item. The pointed feature of the pyramid was enough on its own for some children to reject the target as a match. Some children did not explore the targets thoroughly enough to see all the parts of the whole. Others did not know where the relevant parts were on the surface of the whole to make an accurate match. Across the two items [square stimulus and pyramid stimulus], seven children rejected the match because of the quantity of sides. Two children referenced shape analogies in their justifications for the plane square item, one stating that the pyramid “looks like a teepee,” and the other pointing out that “it looks like the top of a house”.

A General Takeaway

Reflecting on this research as a whole, I find myself wondering how often children and adults consider visuo-spatial reasoning to be a kind of reasoning at all—a process of logical and quasi-logical reckoning, requiring in some cases just-in-time error correction and non-intuitive cognitive effort. It seems to me that the children’s behavior—both on the video and in the experiment—does not reflect such a belief. Participants were not hesitant about their performance in general, and, as the researchers noted, were often overconfident about the salience of their own intuitions (specifically with regard to pointiness and the ‘right’ orientations of solid figures). Their behavior is consistent with the preponderance of exclusion errors in the study.

This suggests that we should embrace the broad challenge, in early geometry instruction, of explicitly directing students’ attention to the very idea that they can get their mental tentacles all around and inside geometric figures—that these figures are not fixed, indivisible objects, impervious to probing—and, importantly, that students’ intuitions about geometric figures are not inerrant and can be broadened and empowered by effortful thinking.

There are a number of specific ways I could think of filling out that broad outline. What would you do?


Hallowell, D., Okamoto, Y., Romo, L., & La Joy, J. (2015). First-graders’ spatial-mathematical reasoning about plane and solid shapes and their representations. ZDM. DOI: 10.1007/s11858-015-0664-9

The Crooked Ladder


When you were a youngster, you almost certainly learned a little about numbers and counting before you got into school: 1, 2, 3, 4, 5, . . . and so forth. This was the first rung on the crooked ladder—the first of your steps toward learning more mathematics.

And it was just about everyone’s. While there can be—and are—significant differences in students’ mathematical background knowledge at the age of 5 or 6, virtually everyone you know, have known, or will know started (or will start) in the same place in math: with the positive whole numbers and the operation of counting discrete quantities.

The next few rungs of the ladder we also mostly have in common. There’s comparing positive whole numbers, adding and subtracting with positive whole numbers, whole-number place value, some geometric shapes, and some measurement ideas, like time and length and money. And to the extent that discussions about shapes and measurement involve values, those values are positive whole numbers.

Think about how much time we spend with just discrete whole-number mathematics at the beginning of our lives—at the base of our ladder, the place where it connects with the ground, holding the rest in place. This is not just us working with a specific set of numbers. We learned, and students are learning and will learn, how math works here: what the landscape is like, what operations do. This part of the ladder is the one that holds up students’ mathematical skeletons—and it is very much still a part of yours.

I would like you to consider for a moment—and hopefully longer than a moment—the possibility that it is this beginning, this crooked part of the ladder, that is primarily responsible for widespread difficulties with mathematics, for adults and children. I can’t prove this, of course. And I have no research studies to show you. But I’ll try to list below some things that reinforce my confidence in this diagnosis.

And Then We Get to . . .

For starters, there are some very predictable topics that large numbers of students often have major difficulties with when they get to them: operations with negative numbers, fractions, and division—to name just the few I have heard the most about. Well, of course students (and adults) have trouble with these concepts. None of these even exist in the discrete positive whole-number landscape we get so used to.


Ah, we say, but that’s when we extend the landscape to include these numbers! No, we don’t. We put the new numbers in, but we make those numbers work the same way as in the old landscape—we put more weight on top of the crooked ladder (I’m challenging myself now to mix together as many metaphors as I can). So, multiplication just becomes addition on steroids—super-charged turbo skip-counting of discrete whole-number values; division cuts discrete whole-number values into discrete whole-number chunks with whole-number remainders; operations with negative numbers become more skip counting; and fractions are Franken-values whose meaning is dissected into two whole numbers that we count off separately.

“But we teach our students to understand math rather than follow rote—” No, we don’t. I mean, we do. We think this is what we are doing because the crooked ladder is baked into our mathematical DNAs (3! 3 metaphors!). So, we say things like, “I’m not going to teach my students the rules for multiplying and dividing fractions! No invert-and-multiply here, nosiree! I’m going to help them understand why the rules work!” Then what do we do? We map fraction division right onto whole-number counting: how many part-things are in the total-thing? And we call it understanding.
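A worked pair of examples (mine, not from the post’s sources) shows how far that counting story goes, and where it starts to creak:

3 ÷ 1/2 = 6. “How many halves are in 3?” Count them: six. This is pure whole-number counting.

3/4 ÷ 2/3 = 9/8. “How many two-thirds are in three-fourths?” One of them, plus an eighth of another two-thirds. (Check: 9/8 × 2/3 = 18/24 = 3/4.) The quotient is no longer a count of discrete things, and the metaphor visibly strains.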

Don’t get me wrong. Teaching for understanding is much better than teaching procedures alone. My point is that most of the metaphors we are compelled to draw on (and the ones students draw on in the absence of instruction) to make this ‘understanding’ work—those involving concrete, discrete whole-number “things”—are brittle. And though they might be valuable, they certainly don’t represent “extending the landscape” in any appreciable way that opens up access to higher-level mathematics. Our very perception of the problem of ‘understanding’ can be flawed because we are developing theories from atop a crooked ladder.

(It’s right about here that I start hearing angry voices in my head, wondering what we’re supposed to do, “bring all those advanced topics down into K–2? Huh?” And this is just what a crooked ladder person would wonder, since he has no experience with any other ladder, and no one else he knows does either. The only possibility he could fathom is to take the rungs from the top and put them on the bottom.)

And About Those Theories . . .

Anyway, secondly, theories. You may have noticed that there are a lot of folk-theories and not-so-folksy theories trying to explain why students and adults seem to have an extra special place in their hearts for sucking at math.

The theory I hear or see the most often—the one that, ironically, doesn’t believe it is ever heard by anyone even though it is practically the only message in town—is that mathematics teaching is too rote, too focused on rules and procedures, obsessed with “telling” kids what’s what instead of giving kids agency and empowerment and self-actualization and letting them, with guidance, discover and remember how mathematics works themselves. It’s too focused on memorization and speed and not enough on deliberate, slow, thoughtful, actual learning. Et cetera.


I guess that seems like a bunch of different theories, but most of the time they come packaged together, like a political platform. And they’re all perfectly serviceable mini-theories. I think they’re all true as explanations for why students don’t get into math. But they’re also true in the same way as “You’re sick because you have a cold” is true—tautologically and unhelpfully.

Students and their teachers eventually fall back on the rote and procedural because after a certain point up the crooked ladder, trying to make discrete whole-number chunky counting mathematics work in a continuous, real-number fluid measurement landscape becomes tiresome and inefficient. A few—very few—manage to jump over to a straighter path in the middle of all of this, but a lot of students (and teachers) just kind of check out. They’ll move the pieces around mindlessly, but they’re not going to invest themselves (ever again) in a game they don’t understand and almost always lose. In between these two groups is a group of students who have the resources to compensate for the structural deficiencies of their mathematical ladders. Some of these manage to straighten out their paths when they get into college, but for most, compensation (with some rules and rote and some moments of understanding) becomes the way they “do math” for the rest of their lives. These latter two groups will cling to procedures for very different reasons—either because screw it, this doesn’t make any sense, or because whatever, I’ll get this down for the test and maybe I’ll understand it later.

And Their Solutions . . .

The remedy for the inevitable consequences of ascending a crooked ladder—again, the one I hear or see the most often anyway—is to kind of take the adults and the “telling” out of the equation. And to make sure the “understanding” gets back in. And again this is more like a platform of mini-proposals than it is one giant proposed solution. And they work just like “get some rest” works to cure your cold—by creating an environment that allows the actual remedy to be effective.

So leaving students alone is going to be effective to the extent that it does not force students to start up a crooked ladder. But a lot of very different alternatives are going to be effective too. Since the main problem is a kind of sense-making exhaustion inside a landscape that makes no sense, any protocol that helps with the climb is going to work or have no effect. So, higher socioeconomic status, higher expectations, increased early learning, more instructional time, more student engagement and motivation—they’re all going to work to the extent that they sustain momentum. The real problem, that it shouldn’t require that much energy to move up the ladder in the first place, remains.

But Don’t Ask Me for a Solution

I don’t have any concrete things to propose as solutions to the crooked ladder problem, and if I did I wouldn’t write them down anyway. We can’t do anything about our problem until we admit we have one. And while we are all willing to admit of many problems in K–8 education, I don’t think we’re admitting the big one—the ladder at the bottom is crooked. The content, not the processes, needs to change.


Image credits: tanakawho, Jimmie, CileSuns92.

Misconceptions Never Die. They Fade Away.

In a post on my precision principle, I made a fairly humdrum observation about misconceptions around a typical elementary-level geometry question:

Why can we so easily figure out the logics that lead to the incorrect answers? It seems like a silly question, but I mean it to be a serious one. At some level, this should be a bizarre ability, shouldn’t it? . . . . The answer is that we can easily switch back and forth between different “versions” of the truth.

What happened next of course is that researchers Potvin, Masson, Lafortune, and Cyr, having read my blog post, decided to go do actual serious academic work to test my observation. And they seem to agree: non-normative ‘un-scientific’ conceptions about the world do not go away. They share space in our minds with “different versions of the truth.” (I may be misrepresenting the authors’ inspirations and goals for their research somewhat.)

The Test

[Image: figure from the paper showing the five trial categories]

Participants in the study were 128 students, 14 and 15 years old. They were given several trials involving deciding which of two objects “will have the strongest tendency to sink if it were put in a water tank.” The choices for the objects were pictures of balls (on a computer), each made of one of 3 different materials: lead, wood, or “polystyrene (synthetic foam material)”; and each having one of 3 different sizes: small, medium, or large. The trials were categorized from “very intuitive” to “very counter-intuitive,” as shown in the figure from the paper at the right.

Instead of concerning themselves with whether answers were correct or incorrect, however (most of the students got above 90% correct), the authors were interested in the time it took students to complete trials in the different categories. The theory behind this is simple: if students took longer to complete the “counter-intuitive” trials than the “intuitive” ones, it may be because greater-size-greater-sinkability misconceptions were still present.

Results

Not only did counterintuitive trials take longer; trials that were more counterintuitive took longer than those that were less counterintuitive. The mean reaction times in milliseconds for trials in the 5 categories, from “very intuitive” to “very counter-intuitive,” were 716, 724, 756, 784, and 804. This graded pattern is healthy evidence in favor of the continued presence of the misconception(s).
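To put a size on that trend, here is a quick back-of-the-envelope sketch in Python. It uses only the five published category means quoted above; the study’s actual analysis was run on per-trial reaction times, which this sketch does not reproduce.

```python
# Rough trend check on the five published category means (not the
# study's per-trial data).
import numpy as np
from scipy import stats

rt_ms = np.array([716, 724, 756, 784, 804])  # "very intuitive" -> "very counter-intuitive"
category = np.arange(1, 6)                   # ordinal position of each category

result = stats.linregress(category, rt_ms)
print(f"~{result.slope:.0f} ms per step toward counter-intuitive (r = {result.rvalue:.2f})")
# -> roughly 24 ms per step, an almost perfectly linear climb (r ~ 0.99)
```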

So why doesn’t the sheer force of the counterintuitive idea overwhelm students into answering incorrectly? The answer might be inhibition—i.e., being able to suppress “intuitive interference” (their “gut reaction”):

[Lafortune, Masson, & Potvin (2012)] concluded that inhibition is most likely involved in the explanation of the improvement of answers as children grow older (ages 8–14). Other studies that considered accuracy, reaction times, or fMRI data . . . . concluded that inhibition could play an important role in the production of correct answers when anterior knowledge could potentially interfere. The idea that there is a role for the function of inhibition in the production of correct answers is, in our opinion, consistent with the idea of persistence of misconceptions because it necessarily raises the question of what it is that is inhibited.

Further analysis in this study, which cites literature on “negative priming,” shows that inhibition is a good explanation for the increased cognitive effort that led to higher reaction times in the more counterintuitive trials.

So, What’s the Takeaway?

In my post on the precision principle, my answer wasn’t all that helpful: “accuracy within information environments should be maximized.” The authors of this study are much better:

There are multiple perspectives within this research field. Among them, many could be associated with the idea that when conceptual change occurs, initial conceptions “. . . cannot be left intact.”

Ohlsson (2009) might call this category “transformation-of-previous-knowledge” (p.20), and many of the models that belong to it can also be associated to the “classical tradition” of conceptual change, where cognitive conflict is seen as an inevitable and preliminary step. We believe that the main contribution of our study is that it challenges some aspects of these models. Indeed, if initial conceptions survive learning, then the idea of “change”, as it is understood in these models, might have to be reconsidered. Since modifications in the quality of answers appear to be possible, and if initial conceptions persist and coexist with new ones, then learning might be better explained in terms of “reversal of prevalence” then [sic] in terms of change (Potvin, 2013).

This speaks strongly to the idea of exposing students’ false intuitions and misconceptions so that their prevalence may be reversed (a “20%” idea, in my opinion). But it also carries the warning—which the researchers acknowledge—that we should be careful about what we establish as “prevalent” in the first place (an “80%” idea):

Knowing how difficult conceptual change can sometimes be, combined with knowing that conceptions often persist even after instruction, we believe our research informs educators of the crucial importance of good early instruction. The quote “Be very, very careful what you put in that head because you will never, ever get it out” by Thomas Woolsey (1471–1530) seems to be rather timely in this case, even though it was written long ago. Indeed, there is no need to go through the difficult process of “conceptual changes” if there is nothing to change.

This was closer to my meaning when I wrote about maximizing accuracy within information environments. There is no reason I can see to simply resign ourselves to the notion that students must have misconceptions about mathematics. What this study tells us is that once those nasty interfering intuitions are present, they can live somewhat peacefully alongside our “scientific” conceptions. It does not say that we must develop a pedagogy centered around an inevitability of false intuitions.

What’s your takeaway?


Potvin, P., Masson, S., Lafortune, S., & Cyr, G. (2014). Persistence of the intuitive conception that heavier objects sink more: A reaction time study with different levels of interference. International Journal of Science and Mathematics Education, 13(1), 21–43. DOI: 10.1007/s10763-014-9520-6

Because of ‘Common’

To the point, this video is still at the top of my ‘Common Core’ pile, because it highlights what I consider to be the most important argument for the standards: just being on the same page.

I’m seeing this firsthand online in conversations among teachers and product development professionals. For the first time, we’re on the same page. That doesn’t mean we agree; that’s not what “being on the same page” has to mean. It just means in this case that we’re literally looking at the same document. And that’s a big deal.

(Speaking of agreement, to be honest, I’d like to see more ‘moderate traditionalist’ voices in education, online and elsewhere, speak in support of the Common Core. There’s no rock-solid evidentiary reason why the ‘No Telling’ crowd should be completely owning the conversation around the CCSS. The 8 Practice Standards are no less methodologically agnostic than the content standards, unless one assumes (very much incorrectly, of course) that it’s difficult for a teacher to open his mouth and directly share his awesome ‘expert’ knowledge of a content domain without simultaneously demanding cognitive compliance from students. And finally, politically, the national standards movement suffers when it becomes associated with more radical voices.)

Years ago, as I was formulating for myself what eventually became these principles of information design, I was originally somewhat firm on including what I called the “boundary principle” (I’m not good at naming things). This was motivated by my perception at the time (2007, I think) that in any argument about education, there was no agreed-upon way to tell who was right. And so the ‘winner’ was the idea that was said the loudest or the nicest or with the most charisma, or the idea that squared best with common wisdom and common ignorance, or the idea that had the most money or the greatest visibility behind it.

The boundary principle, then, was just my way of saying to myself that none of this should be the case: that even though we need to have arguments (maybe even silly ones from time to time), we need to at least agree that this or that is the right room for the arguments. I think the Common Core can give us that room.

The Revolutionary War Is Over

It is painful to read about people who think that the Common Core Standards are a set of edicts foisted on schools by Bill Gates and Barack Obama. But I get it. And, honestly, I see it as the exact same sentiment as the one that tells us that a teacher’s knowledge and a student’s creativity are mutually exclusive and opposing forces. That sentiment is this: we hate experts.

But that “hatred” is just a matter of perception, as we all know. We can choose to hear the expert’s voice as just another voice at the table (one with a lot of valuable experience and knowledge behind it), as a strong voice from a partner in dialogue, or we can choose to hear it as selfish and tyrannical. And in situations where we are the experts, we can make the same choice.

I want to choose to see strong and knowledgeable people and ideas as a part of the “common” in education.



Education-Ish Research


Veteran education researcher Deborah Ball (along with co-author Francesca Forzani) provides some measure of validation for many educators’ frustrations, disappointments, and disaffections with education research. In a paper titled “What Makes Education Research ‘Educational’?” published in December 2007, Ball and Forzani point to education research’s tendency to focus on “phenomena related to education” rather than “inside educational transactions”:

In recent years, debates about method and evidence have swamped the discourse on education research to the exclusion of the fundamental question of what constitutes education research and what distinguishes it from other domains of scholarship. The panorama of work represented at professional education meetings or in publications is vast and not highly defined. . . Research that is ostensibly “in education” frequently focuses not inside the dynamics of education but on phenomena related to education—racial identity, for example, young children’s conceptions of fairness, or the history of the rise of secondary schools. These topics and others like them are important. Research that focuses on them, however, often does not probe inside the educational process.

Certainly many of us have read terrible “studies” that are, in fact, “inside education,” as we might intuitively understand that term—they are situated in classrooms, they focus on students or teachers or content, etc. Nevertheless, Ball and Forzani make an important point, and the consequences of ignoring problems “inside education” may already be playing out:

Until education researchers turn their attention to problems that exist primarily inside education and until they develop systematically a body of specialized knowledge, other scholars who study questions that bear on educational problems will propose solutions. Because such solutions typically are not based on explanatory analyses of the dynamics of education, the education problems that confront society are likely to remain unsolved.

Us Laypeople

Here is a key point from the introduction to the paper. And although the authors do not explicitly link this point to their criticism of education research, I see no reason to consider the two to be unrelated:

One impediment is that solving educational problems is not thought to demand special expertise. Despite persistent problems of quality, equity, and scale, many Americans seem to believe that work in education requires common sense more than it does the sort of disciplined knowledge and skill that enable work in other fields. Few people would think they could treat a cancer patient, design a safer automobile, or repair a bridge, for these obviously require special skill and expertise. Whether the challenge is recruiting teachers, motivating students to read, or improving the math curriculum, however, many smart people think they know what it takes. Because schooling is a common experience, familiarity masks its complexity. Powell (1980), for example, referred to education as a “fundamentally uncertain profession” about which the perception exists that ingenuity and art matter more than professional knowledge. Yet the fact that educational problems endure despite repeated efforts to solve them suggests the fallacy of this reliance on common sense.

Ball and Forzani here accurately describe the environment in which many of our discussions of and debates about education take place. Instruction itself is shielded from our view by ideas—some of which may indeed be correct—that are too often based on common-sense notions about education. As a result, good questions and reasoned arguments that challenge fundamental assumptions about instruction are brushed aside without consideration.

Keith Devlin makes a point similar to that put forward by Ball and Forzani at the end of his September 2008 article:

While most of us would acknowledge that, while we may fly in airplanes, we are not qualified to pilot one, and while we occasionally seek medical treatment, we would not feel confident diagnosing and treating a sick patient, many people, from politicians to business leaders, and now to bloggers, feel they know best when it comes to providing education to our young, based on nothing more than their having themselves been the recipient of an education.

One may presume, given that Ball and Forzani and then Devlin ascribe this common-sense view of education to “many Americans,” or to “politicians, business leaders, and bloggers,” that these people consider, or are justified in considering, education researchers or teachers or other education professionals to be immune from similar assumptions and common-sense notions. Of course, they don’t and aren’t.

Thus, if education researchers are as susceptible as the rest of us to a “common sense-y” view of instruction impervious to reasoned probing, this may explain, in part, Ball and Forzani’s criticism of education research as dealing with questions “related to” education rather than questions “inside” education. Many researchers may simply avoid questions inside education because they believe that their common sense has already answered them.

Asking Students to Ask Tough Questions Is Comfortable. Now You Try It.

Here Ball and Forzani expand their criticism of education research, pointing to a lack of good research that looks not only at teachers, students, or content but also at the interactions among these three:

Education research frequently focuses not on the interactions among teachers, learners, and content—or among elements that can be viewed as such—but on a particular corner of this dynamic triangle. Researchers investigate teachers’ perceptions of their job or their workplace, for example, or the culture in a particular school or classroom. Many excellent studies focus on students and their attitudes toward school or their beliefs about a particular subject area. Scholars analyze the relationships between school funding and student outcomes, investigate who enrolls in private schools, or conduct international comparisons of secondary school graduation requirements. Such studies can produce insights and information about factors that influence and contribute to education and its improvement, but they do not, on their own, produce knowledge about the dynamic transactions central to the process we call education.

And their critique of the now-famous Tennessee class-size study illustrates clearly this further refinement of the authors’ concept of research “inside education”:

Finn and Achilles (1990) investigated whether smaller classes positively affected student achievement in comparison with larger classes. . . . The results suggest that reducing class size affected the instructional dynamic in ways that were productive of improved student learning. The study did not, however, explain how this worked. Improvement might have occurred because teachers were able to pay more attention to individual students. Would the same have been true if the teachers had not known the material adequately? Would reduced class size work better for students at some ages than at others, or better in some subjects than in others?



Reference:
Ball, D., & Forzani, F. (2007). 2007 Wallace Foundation Distinguished Lecture: What makes education research “educational”? Educational Researcher, 36(9), 529–540. DOI: 10.3102/0013189X07312896

Text Coherence and Self-Explanation

text coherence

The authors of the paper (full text) I will discuss here, Ainsworth and Burcham, follow the lead of many researchers, including Danielle McNamara (2001) (full text), in conceiving of text coherence as “the extent to which the relationships between the ideas in a text are explicit.” In addition to this conceptualization, the authors also adopt guidelines from McNamara et al. (1996) to improve the coherence of the text used in their experiment—a text about the human circulatory system. These guidelines essentially operationalize the meaning of text coherence as understood by many of the researchers examining it:

(1) Replacing a pronoun with a noun when the referent was potentially ambiguous (e.g., replacing ‘it’ with ‘the valves’). (2) Adding descriptive elaborations to link unfamiliar concepts with familiar ones and to provide links with previous information presented in the text (e.g., replacing ‘the ventricles contract’ with ‘the ventricles (the lower chambers of the heart) contract’). (3) Adding connectives to specify the relation between sentences (e.g., therefore, this is because, however, etc.).

Maximal coherence at a global level was achieved by adding topic headers that summarised the content of the text that followed (e.g., ‘The flow of the blood to the body: arteries, arterioles and capillaries’) as well as by adding macropropositions which linked each paragraph to the overall topic (e.g., ‘a similar process occurs from the ventricles to the vessels that carry blood away from the heart’).

Many studies have found that improving text coherence (i.e., improving the “extent to which the relationships between the ideas in the text are made explicit”) can improve readers’ memory for the text. Ainsworth and Burcham mention several in their paper, including studies by Kintsch and McKeown and even the study by Britton and Gülgöz that I wrote up here.

What Britton and Gülgöz find is that when “inference calls”—locations in text that demand some kind of inference from the reader—are “repaired,” subjects’ recall of a text is significantly improved over that of a control group. This may sum up the advantage seen across the research: although there are very few, if any, simple, straightforward, unimpeachable results in the small collection of text-coherence studies, researchers consistently find that “making the learner’s job easier” by making a text more coherent significantly improves readers’ learning from that text.

Self-Explanation


In some sense, the literature on self-explanation tells a different story from the one that emerges from the text-coherence research. Ainsworth and Burcham define self-explanation in this way:

A self-explanation (shorthand for self-explanation inference) is additional knowledge generated by learners that states something beyond the information they are given to study.

The authors then go on to describe some advantages offered to readers by the self-explanation strategy, according to research:

Self-explanation can help learners actively construct understanding in two ways; it can help learners generate appropriate inferences and it can support their knowledge revision (Chi, 2000). If a text is in someway [sic] incomplete . . . then learners generate inferences to compensate for the inadequacy of the text and to fill gaps in the mental models they are generating. Readers can fill gaps by integrating information across sentences, by relating new knowledge to prior knowledge or by focusing on the meaning of words. Self-explaining can also help in the process of knowledge revision by providing a mechanism by which learners can compare their imperfect mental models to those being presented in the text.

So, whereas improved text coherence advantages learners by “repairing” (i.e., removing) inference calls, self-explanation often produces gains even when—and perhaps especially when—text remains minimally coherent.

Thus, on the one hand, a comprehensive—though shallow—read of the text coherence literature tells us that improved text comprehension can be achieved by “repairing” text incoherence—by closing informational gaps in text. On the other hand, research shows that significant improvements in learning from text can come from employing a strategy of self-explanation during reading—a method that practically feeds off textual incoherence.

What shall we make of this? Which is more important—text coherence or self-explanation? And how do they (or can they) interact, if at all? These are the questions Ainsworth and Burcham attempt to address in their experiment.

The Experiment

Is maximally or minimally coherent text more beneficial to learning when accompanied by self-explanations? Two alternative hypotheses are proposed:

  1. The minimal text condition when accompanied by self-explanation training will present the optimal conditions for learning. Minimal text is hypothesized to increase self-explaining, and self-explanation is known to improve learning. Consequently, low knowledge learners who self-explain will not only be able to overcome the limitations of less coherence but will actively benefit from it as they will have a greater chance to engage in an effective learning strategy.
  2. Maximally coherence [sic] text accompanied by self-explanation will present the optimal condition for learning. Although maximal text is hypothesized to result in less self-explanation than minimal text, when learners do self-explain they will achieve the benefits of both text coherence and self-explanation.


Forty-eight undergraduate students were randomly separated into four groups, each of which was assigned either a maximally coherent text (Max) or a minimally coherent text (Min) about the human circulatory system. Each group was also given either self-explanation training (T) or no training at all (NT).

All forty-eight students completed a pretest on the subject matter, read their assigned text using self-explanation or not, and then completed a posttest, which was identical to the pretest. The results for each of the four groups are shown below (the posttest results have been represented using bars, and the pretest results have been represented using line segments).

[Image: pretest and posttest results for the four groups]

The pretest and matching posttest each had three sections, as shown at the left by the sections of each of the bars. Each of these sections comprised different kinds of questions, but all of the questions assessed knowledge of the textbase, which “contains explicit propositions in the text in a stripped-down form that captures the semantic meaning.”

As you can see, each of the four groups improved dramatically from pretest to posttest, and those subjects who read maximally coherent text (Max) performed slightly better overall than those who read minimally coherent text (Min), no matter whether they used self-explanation during reading (T) or not (NT). However, the effect of text coherence was not statistically significant for any of the three sections of the tests. Self-explanation, on the other hand, did produce significant results, with self-explainers scoring significantly higher on two of the three sections than non–self-explainers.

In addition to the posttest, subjects also completed, at posttest only, a test composed of “implicit questions” and another composed of “knowledge inference questions.” The results for the four groups on these two tests are shown below.

[Image: results on the implicit-questions and knowledge-inference tests]

Each of these two tests assessed students’ situation models: “The situation model (sometimes called the mental model) is the referential mental world of what the text is about.” The researchers found that self-explainers significantly outperformed non–self-explainers on both tests. Those who read maximally coherent text also outperformed their counterparts (readers given minimally coherent text) on both tests. However, this effect was significant for only one of the tests, and approached significance for the other test (p < 0.08).

Analysis


If we stopped here, we would be justified in concluding that the first hypothesis was the winner. It would seem that self-explanation has a more robust positive effect on learning outcomes than does text coherence. And since the literature tells us that minimally coherent text produces a greater number of self-explanations than does maximally coherent text, minimizing text coherence would seem desirable for improving learning.

Luckily, Ainsworth and Burcham went further. They coded the types of self-explanations made by participants and analyzed each as it correlated with posttest scores. While they did find that students who read minimally coherent text produced significantly more self-explanations, they also noted this:

Whilst using a self-explanation strategy resulted in an increase in post-test scores for the self-explanations conditions compared to non self-explanation controls, there was no signficant correlation within the self-explanation groups between overall amount of self-explanation and subsequent post-test performance. Rather, results suggest that it is specific types of self-explanations that better predict subsequent test scores.

In particular, for this study, “principle-based explanations” (“[making] reference to the underlying domain principles in an elaborated way”), positive monitoring (“statements indicating that a student . . . understood the material”), and paraphrasing (“reiterating the information presented in the text”) were all significantly positively related to total posttest scores, though only the first of those was considered a real “self-explanation.”

Now, each of those correlations seems pretty ridiculous. They all seem to point in one way or another to the completely unsurprising conclusion that understanding a text pretty well correlates highly with doing well on assessments about the text.

What is interesting, however, is the researchers’ observation that the surplus of self-explanations in the “minimal” groups could be accounted for primarily by three other types of self-explanation, none of which, in and of themselves, showed a significant positive correlation with total posttest scores: (1) goal-driven explanations (“an explanation that inferred a goal to a particular structure or action”), (2) elaborative explanations (“inferr[ing] information from the sentence in an elaborated manner”), and (3) false self-explanations (self-explanations that were inaccurate).

To put this in perspective, only two other coded types of ‘self-explanation’ go unmentioned here. Of the six types I did mention, three showed no significant positive correlation with posttest scores (or, in the case of false self-explanations, a significant negative correlation), yet those three were the self-explanations that primarily accounted for the significant difference between the minimal and maximal groups.

Or, to put it much more simply, the minimal groups had significantly more self-explanations, but those self-explanations were, in general, either ineffective at raising posttest scores or actually harmful to those scores. It is possible that the significant positive main effect for self-explanation in the study could, in fact, have been greatly helped along by the better self-explanations present in the maximal groups. All of this leads to this conclusion from the researchers:

This study suggests that rather than designing material, which, by its poverty of coherence, will drive novice learners to engage in sense-making activities in order to achieve understanding, we should design well-structured, coherent material and then encourage learners to actively engage with the material by using an effective learning strategy.



Reference:
Ainsworth, S., & Burcham, S. (2007). The impact of text coherence on learning by self-explanation. Learning and Instruction, 17(3), 286–303. DOI: 10.1016/j.learninstruc.2007.02.004

Do We Have a “Bullseye” in Education?

The following is a brief interview Bill Gates did on the Daily Show in 2010. There is an interesting exchange about education starting at about 3:25, related to the notion of a bullseye—a way to know if what we’re doing is working or not.

Interview from January 25, 2010. See 3:25 for education discussion.

I’ve highlighted some of the more interesting comments to me in the transcript below.

Stewart:
Why is it so difficult to get change in the educational system in our country? That seems to be one of the most intractable systems, either because of the boards that are there or the unions or the—what is it about our education system that makes it so difficult to reform?

Gates:
Well, until recently there was no room for experimentation. And charter schools came in—although they’re only a few percent of the schools—and they tried out new models. And a lot of those have worked. Not all of them. But that format showed us some very good ideas, and among those ideas is that you measure teachers, you give them more feedback. And–but people are afraid you’d put in a system that will fire the wrong person or have high overhead, and that’s a legitimate fear. So actually having some districts where it works and then getting the 90% of the teachers who liked it, who thrived, who did improve to share that might allow us to switch—not have capricious things but really help people get better.

Stewart:
But don’t public things like schools and medical care need to have the power to fail, need to fire the wrong person every now and again? It’s never going to be perfect. Aren’t people’s expectations of what it’s supposed to be so precious that you never get change in the positive direction?

Gates:
That’s right. But you have to have a measure. And it’s very tough to agree on a measure. You know, right now the health system rewards the person who just does more treatment, so it’s quantity of output, not the kind of preventative care and measuring and saying, “Okay, you do that well.” Or, “You teach this kid really well.” We haven’t been able to agree on that. And without that it’s a problem.

Jon’s comment—or question, rather—about the education system’s lacking the power to fail struck me as being similar to something I wrote here (apologies for being so gauche as to quote myself):

Education seems unable to help but vacillate between its skepticism, which holds every idea (or none of them) to be right, and its particularism, which holds all of its own ideas to be right. This inability, in the end, makes it nearly impossible for education to decide before the fact that something can be wrong.

Okay, So, Less Philosophical

[Image: two dartboards (bullseye in education)]

To make the similarity less philosophical, I can think of an analogy involving these two dartboards. Using the dartboard on the right—a typical dartboard—we obviously do have the power to fail if our goal is to hit the bullseye. Using the dartboard on the left, we don’t really have the power to fail—not because every throw will be considered a bullseye, but because we have not set out ahead of time what failure and success mean.
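If it helps to make the analogy concrete, here is a minimal sketch (hypothetical throughout) of the difference between the two boards: on the right-hand board, the success criterion exists before any dart is thrown, so both success and failure are decidable; on the left-hand board, there is no criterion to evaluate, so no throw can be scored as a failure.

```python
import math

# Right-hand board: the bullseye is drawn before any dart is thrown.
BULLSEYE_RADIUS = 0.5  # hypothetical units

def hit_bullseye(x: float, y: float) -> bool:
    """Success criterion fixed in advance: distance from the board's center."""
    return math.hypot(x, y) <= BULLSEYE_RADIUS

print(hit_bullseye(0.2, 0.1))  # True:  this throw succeeds
print(hit_bullseye(3.0, 4.0))  # False: this throw fails, and we can say so

# Left-hand board: no circles are set out ahead of time. There is no
# hit_bullseye() to call, so no throw can be judged a failure (or a success).
```

The point the analogy urges is simply that the predicate must be written down first, even knowing the circles may be drawn imperfectly.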

An important question we wrestle with in education, specifically with regard to instruction, is this: What kind of dartboard are we throwing at? Can we explain, before ever throwing a dart, what it means to hit the bullseye and how to get closer to it? If so, then we’re throwing at the board on the right; if not, we’re throwing at the board on the left.

It seems right—er, correct—to say no, we can’t really describe “bullseye” instruction before we deliver it (particularism) or at all (skepticism), because every student learns differently, there are multiple ways of delivering the same content, etc. For what seems like the same reasons, we can’t really describe “bullseye” ice cream flavors or “bullseye” back massages. In other words, when it comes to instruction, the dartboard on the left seems to be the most appropriate.

Jon challenges this notion by asking, “But don’t public things like schools and medical care need to have the power to fail, . . .? It’s never going to be perfect. Aren’t people’s expectations of what it’s supposed to be so precious that you never get change in the positive direction?” For education—specifically, for instruction—shouldn’t we be using the dartboard on the right, not the one on the left? Shouldn’t we have the courage to draw the bullseye somewhere, even if we know that we will sometimes unfairly exclude some good instruction and unfairly include some bad instruction? I would say yes.

Gates responds: “That’s right. But you have to have a measure. And it’s very tough to agree on a measure.” Or, using the dartboard analogy, we must have a way to decide exactly where to draw the circles, including the bullseye.

I agree that settling on a measure is difficult, and even that quality of instruction is non-quantifiable, but I disagree that we should be looking for something as narrow as a measure (or even a group of measures), or for something necessarily quantifiable, in the first place. What we should be looking for first are clear, specific, acceptable principles of instructional quality.


Can We Know Good from Bad in Education?

[Image: methodism, devil or angel]

In a beautiful article titled “One Side Can Be Wrong,” Richard Dawkins and Jerry Coyne (among the more famous subscribers to an epistemological methodism) rather tidily do away with what had then become the “teach the controversy” argument for intelligent design creationism–the notion that IDC should be taught in science classrooms because it offers an alternative to the theory of evolution by natural selection as an explanation for the origins of different species on Earth.

As the authors point out, their stance against “teach the controversy” may seem, counterintuitively, closed-minded, but it is demanded of them by the evidence–or, rather, by the lack of evidence:

So, why are we so sure that intelligent design is not a real scientific theory, worthy of “both sides” treatment? Isn’t that just our personal opinion? It is an opinion shared by the vast majority of professional biologists, but of course science does not proceed by majority vote among scientists. Why isn’t creationism (or its incarnation as intelligent design) just another scientific controversy . . .? Here’s why.

If ID really were a scientific theory, positive evidence for it, gathered through research, would fill peer-reviewed scientific journals. This doesn’t happen. It isn’t that editors refuse to publish ID research. There simply isn’t any ID research to publish. Its advocates bypass normal scientific due process by appealing directly to the non-scientific public and–with great shrewdness–to the government officials they elect.

Intelligent design creationists theorize that “certain features of the universe and of living things are best explained by an intelligent cause,” yet they have never produced any positive evidence for this intelligent cause. It is this lack of evidence–not its character as an alternative explanation–which precludes IDC from acceptance in scientific circles and from “both sides” consideration. As Dawkins and Coyne note in the article linked above, alternative explanations based on actual evidence abound within evolutionary science and are thus far more worthy of debate than is IDC.

Methodists, Particularists, and Apples

[Image: apple cross-sections]

Yet some may argue that while it may be true that IDC is unscientific, it does not follow from that observation alone that it is wrong. And, indeed, Dawkins and Coyne make no such claim explicitly in the article. Instead (again, one may argue), the authors simply hold up IDC to certain criteria of philosophical empiricism–the view that knowledge is derived from sense experience of, and reasoning about, the natural world–and then describe how the theory fares (not well).

Philosopher Roderick Chisholm categorized empiricism of this variety as a form of what he termed “methodism”–one of three possible solutions to the problem of distinguishing what is true from what is not {1}:

(A) What do we know? What is the extent of our knowledge? (B) How are we to decide whether we know? What are the criteria of our knowledge?

If you happen to know the answers to the first of these pairs of questions, you may have some hope of being able to answer the second. Thus, if you happen to know which are the good apples and which are the bad ones, then maybe you could explain to some other person how he could go about deciding whether or not he has a good apple or a bad one. But if you don’t know the answer to the first of these pairs of questions–if you don’t know what things you know or how far your knowledge extends–it is difficult to see how you could possibly figure out an answer to the second.

On the other hand, if, somehow, you already know the answers to the second of these pairs of questions, then you may have some hope of being able to answer the first. Thus, if you happen to have a good set of directions for telling whether apples are good or bad, then maybe you can go about finding a good one–assuming, of course, that there are some good apples to be found. But if you don’t know the answer to the second of these pairs of questions–if you don’t know how to go about deciding whether or not you know, if you don’t know what the criteria of knowing are–it is difficult to see how you could possibly figure out an answer to the first.

Particularists and particularist philosophies (described in the second paragraph above; called epistemological particularisms) decide first which are the good and bad apples–or what is true and what is not, or what we know and what we don’t–and then shop around for a sorting system that reliably turns out results consistent with those decisions. Empiricist, or “methodist,” philosophies (described in the final paragraph above), in contrast, find their answers to the first question (which are the good apples?) by first answering the second question (how are we to decide whether we have a good or bad apple?).

Thus, Dawkins and Coyne, as loyal empiricists, reject IDC as a bad apple–not, as the argument might go, because they believe it actually is a rotten apple (the authors subscribe to a philosophy which does not permit them to discern that directly) but because the method they have decided upon to sort the apples (quantity or quality of evidence, naturalism, the scientific method, etc.) leads them almost inevitably to this conclusion.

(Our third choice, according to Chisholm, by the way, is skepticism. The skeptic adroitly recognizes that, in order to determine whether we possess in each case a good or a bad apple, we require a method to justify our choice, and that, in order to select a reliable method, we need to know the difference, ab initio, between good and bad apples; she therefore concludes that there is no way to decide.)

The Truth Is Out There

[Image: “I Want to Believe” poster]

It is an admixture of the skeptic’s and the particularist’s philosophies that most closely resembles the weak orthodoxy of American K-8 education–a system (if one could be so generous as to describe it as such) often characterized, certainly not thoroughly but perhaps most aptly, by its ability to not distinguish between good and bad apples.

One can see evidence for this strange orthodoxy not only in the way the “system” administers itself, but in more abstract ways as well. This, for example, is part of a “skeptico-particularist” argument that is, in one form or another, very popular among professional educators as a defense against the evils of generalization and standardization:

It is simply not possible to prove that an approach to teaching and learning will be effective before the fact.

Education as a scientific discipline is a young field with an active community focused on R&D–research on learning coupled with the development of new and better curriculum materials. In truth, however, much of the work is better described as D&R–informed and thoughtful development followed by careful analysis of results. It is in the nature of the enterprise that we cannot discover what works before we create the what.

Similarly, James and Dewey—two of educational psychology’s founding philosophers—though not self-identified skeptics or “particularists” in any strict or relevant sense, were not exactly warm to a “methodist” approach to discerning truth. John J. McDermott said it this way {2}:

James has a name for . . . methodological anality. He calls it “vicious intellectualism” by which we define A as that which not only is what it is but cannot be other. Proceeding this way, answers abound and clarity holds sway. Missing is surprise, novelty, the wider relational fabric, often riven with rich meanings found on the edge, behind, around, under, over the designated, prearranged conceptual placeholders. Percepts are what count, and the attendant ambiguity in all matters important, presage more and deeper meaning not less. Following John Dewey, method is subsequent and consequent to experience, to inquiry. Method can help fund and warrant experience, but it does not grasp our doings and undergoings in their natural habitat. For that, we must begin with and experimentally trust our affections–dare I say it, trust our feelings. They may cause trouble, but they never lie.

The surest evidence, however, for the antagonism between Chisholm’s “methodism” and American education can be found through experience and observation. Only a small helping of each of these is enough, I think, to convince most rational people that, at nearly every turn, education steers itself craftily away from all but the vaguest and easiest criteria: How shall we teach? What shall we teach? Whom shall we reward? Whom shall we punish? What shall we value and devalue? Education will provide answers to these questions or it won’t, but it never has a way to decide: a methodology, a set of criteria it refers to.

To come, finally, full circle, education seems unable to help but vacillate between its skepticism, which holds every idea (or none of them) to be right, and its particularism, which holds all of its own ideas to be right. This inability, in the end, makes it nearly impossible for education to decide before the fact that something can be wrong.


References:

1. Chisholm, R. M. (1982). The problem of the criterion. In L. Pojman (Ed.), The Theory of Knowledge (2nd ed., pp. 26–35). Belmont, CA: Wadsworth.

2. McDermott, J. (2003). Hast Any Philosophy in Thee, Shepherd? Educational Psychologist, 38(3), 133–136. DOI: 10.1207/S15326985EP3803_2

The Pedagogical Landscape

17 Dec 2014

[Image: persuasion problem]

Neuroscientist Sam Harris argues in The Moral Landscape that debates about morality can be grounded in scientific thinking and that widespread thinking to the contrary constitutes a persuasion problem:

Questions of morality and values must have right and wrong answers that fall within the purview of science (in principle, if not in practice). Consequently, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.

The same is true of teaching (or rather, educating). There are practices that must be better and worse than others, and content that must be objectively better and worse than other content. Some viewpoints are right and some are wrong (to a greater or lesser degree), and some are slightly better or in need of a little improvement. Questions within domains of human interest like education and morality do not have to be simple or clinical or black-and-white in order for their possible answers to be placed on a widely and responsibly endorsed (and interpolated) scale of bad-good-better-best.

Yet, what I have just stated is seen by many people as a fundamental assumption about pedagogical quality that is only ‘correct’ to the extent that it is collectively condoned: If it is the case that a person does not care about ‘pedagogical quality’ or if she believes that only she can ever completely and fairly judge her own pedagogical quality, there is no way we can ever convince her, either in practice or in principle, that she is wrong from an objective standpoint.

This is what Harris calls The Persuasion Problem with regard to his arguments about morality. I shall borrow the term, along with the relevant parts of his response:

I believe all of these challenges are the product of philosophical confusion. The simplest way to see this is by analogy to medicine and the mysterious quantity we call ‘health.’ Let’s swap . . . [‘pedagogical quality’] for ‘health’ and see how things look:

Here’s how it would look: “If it is the case that a person does not care about health or if she believes that only she can ever completely and fairly judge her own health, there is no way we can convince her that she is wrong from an objective standpoint.” This is of course absurd. Clearly there are scientific truths to be known about health—and we can fail to know them, to our great detriment. This is a fact. And yet, it is possible for people to deny this fact, or to have perverse and even self-destructive ideas about how to live. Needless to say, it can be fruitless to argue with such people. Does this mean we have a Persuasion Problem with respect to medicine? No. Christian Scientists, homeopaths, voodoo priests, and the legions of the confused don’t get to vote on the principles of medicine.

The same goes for education (along with the persuasion problem). Of course there are nutty people out there with nutty ideas about how to teach, how to structure schools, what priorities we should have with regard to education, and what counts for quality content (and some of these people and ideas might find themselves in the mainstream on any given occasion—they might be you or me). And of course the issues are complex and amorphous. But there is clearly a quality spectrum in education and good arguments to be made for improvement. Throwing this spectrum into sharp relief and then finding these arguments remain major first steps for a science of education.

Harris’s argument regarding the persuasion problem resonates with critical realism. This quote from Enlightened Common Sense: The Philosophy of Critical Realism provides a sample of that thinking:

The social world is characterised by the complete absence of laws and explanations conforming to the positivist canon. In response to this, positivists plead that the social world is much more complex than the natural world (interactionism) or that the laws that govern it can only be identified at some more basic, for example, neurophysiological level (reductionism). But positivists are wrong to expect the social sciences to find constant conjunctions in the human world, for they are scarce enough in the natural realm; while hermeneuticists are wrong to conclude from the absence of such conjunctions that the human sciences must be radically different from the natural sciences. Closed systems cannot be established artificially in the human world. But this does not mean that one cannot identify generative mechanisms at work in specific contexts or construct theoretical generalisations for them; or that there are no criteria for theory choice or development, or that there are no empirical controls on theory. Rather, it follows from the absence of closed systems that criteria for choice and development of theory will be explanatory, not predictive, and that empirical controls will turn on the extent to which events indicate or reveal the presence of structures.