From Translations to Slope

If not before, students in 8th grade learn that a translation is a rigid motion that “slides” a point or set of points a certain distance. An important idea here, one that could stand to be emphasized a lot more, is that the translations students study are linear translations: they move the set of points along a line. When this is understood before slope is introduced, it can support a deeper understanding of slope.

We can see the start of this in action when we play with the simulation below. Type positive numbers less than ten and greater than zero (3 characters max) into the blank boxes and then click on the arrow boxes to set the directions. This will create a translation sequence starting at (0, 0). For example, 9 ↑ 3 ← will continuously translate a point up 9 and left 3 (until it goes out of view). Click on the coordinate plane to run the sequence.
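The behavior of such a sequence is easy to sketch in code. Here is a minimal Python sketch of the idea (the `run_sequence` name and the 20-unit viewing window are my own assumptions, not part of the simulation itself):

```python
# Minimal sketch of a repeated translation sequence. The window size
# and function name are illustrative assumptions, not the simulation's code.

def run_sequence(dx, dy, bound=20):
    """Repeatedly translate (0, 0) by (dx, dy), collecting each point
    until the next one would leave the square window
    [-bound, bound] x [-bound, bound]."""
    x, y = 0, 0
    points = [(x, y)]
    while abs(x + dx) <= bound and abs(y + dy) <= bound:
        x, y = x + dx, y + dy
        points.append((x, y))
    return points

# "Up 9, left 3" is the translation (x - 3, y + 9):
print(run_sequence(-3, 9))  # [(0, 0), (-3, 9), (-6, 18)]
```

Plotting the points produced by any such call shows them falling along a line through the origin, which is the observation the simulation is built to surface.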

[Interactive simulation: repeated translation sequences on a coordinate plane]

When the sequence is finished, a button should appear that allows you to click to show the line along which the point was translated using a repetition of the translation sequence. Click Clear to draw a new translation sequence (or repeat the one you just did). You can watch a (near) infinite loop if you’d like to put in things like 8 ↑ 8 ↓.

What Is Slope?

[Figure: a finished sequence of the repeated translation (x − 4, y + 6)]

The example at right shows a finished sequence of repeated \(\mathtt{(x - 4, y + 6)}\). There’s a whole lot to unpack here, which I won’t do. But, playing around with linear translations in this way can eventually reveal that the vertical and horizontal displacements form a ratio. For example, one can say that for every vertical move up 6 \(\mathtt{(+6)}\), there is a horizontal move left 4 \(\mathtt{(-4)}\). This simplifies to 3 : −2, and you can extend the sequence into the 4th quadrant to show that this is the same line as −3 : 2.
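That simplification is just a common-factor reduction, which can be sketched in a few lines of Python (the `slope_ratio` name is my own):

```python
# Reduce a translation's vertical : horizontal displacement to a
# slope ratio in lowest terms. Illustrative sketch only.
from math import gcd

def slope_ratio(dy, dx):
    """Reduce dy : dx to lowest terms, keeping the signs."""
    g = gcd(abs(dy), abs(dx))
    return dy // g, dx // g

print(slope_ratio(6, -4))   # (3, -2)
print(slope_ratio(-6, 4))   # (-3, 2): the same line, traversed the other way
```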

Referring to lines in terms of their slope ratios is pretty close to the finish line as far as slope understanding goes.

y = mx + b

We can ask about the corresponding y-value for an x-value of 5. The answer to this becomes the solution to a proportion, which we can generalize: \[\mathtt{\frac{\color{white}{-}3}{-2} = \frac{y}{5} \quad \rightarrow \quad \frac{\color{white}{-}3}{-2} = \frac{y}{x}}\]

So, we can arrive at \(\mathtt{y = -\frac{3}{2}x}\). By this point, the slope ratio is ready for a special letter, and we can move up to the slope-intercept form. There are all kinds of catches and surprises in this development: zeros, the final b translation of the entire line, etc. But it is certainly an interesting connection between geometry and algebra for middle school, the key idea being that translations always move points along a straight line.
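Treating the slope ratio as a single number m makes this concrete. Here is a short sketch using Python’s `Fraction` to keep the arithmetic exact (the names are mine, purely for illustration):

```python
# Evaluate y = mx + b for the slope ratio 3 : -2 derived above.
# Fraction keeps the results exact; the names here are illustrative.
from fractions import Fraction

m = Fraction(3, -2)     # normalizes to -3/2

def y_for(x, m=m, b=0):
    """Return the y-value paired with x on the line y = mx + b."""
    return m * x + b

print(y_for(5))    # -15/2, answering the x = 5 question above
print(y_for(-2))   # 3
```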

These ideas can essentially run alongside ratio development too, regardless of whether the notion of translations is developed formally (there’s not much formality to it, even in 8th grade) or informally. See the Guzinta Math: Comparing Ratios lesson app for some more ideas about connections.



Variable as a Batch of Numbers

There are a couple of interesting lines from the Common Core State Standards for Mathematics (CCSS-M), referencing the meaning of a variable in an equation, which have been on my mind lately. The first is from 6.EE.B.5 and the second from 6.EE.B.6. I have emphasized in red the bits that I think are significant to this post:

Understand solving an equation or inequality as a process of answering a question: which values from a specified set, if any, make the equation or inequality true?

Understand that a variable can represent an unknown number, or, depending on the purpose at hand, any number in a specified set.

One of the reasons these are interesting (to me) is that, almost universally as far as I can tell, curricula in 6th grade mathematics (CCSS-M-aligned) limit themselves to equations like \(\mathtt{5+p=21}\) and \(\mathtt{2x=24}\), which only have one solution. So, it’s not possible to talk about the values (plural) that make an equation true; nor is it possible to talk about a variable as representing any number in a specified set when all of our examples will essentially resolve to just one possibility.

How do curricula then cover the two standards above? Well, it’s possible to still hit these two standards when you interpret multiple solutions as something that belongs with inequalities. Inequalities are part of the “or” statement at the end of 6.EE.B.5 and can be seen as part of “the purpose at hand” in 6.EE.B.6. This, it seems, is the interpretation that most curricula for 6th grade (again, as far as I can tell) have settled on.

Stipulation and Functions

A reason this may be problematic is that it introduces a stipulation (or continues one, rather)—one which, as far as I can tell, is not effectively walked back in Grades 7 or 8. That stipulation is this: a variable in an equation represents a single number. We dig this one-solution trench deeper and deeper for two to three years until one day we show them this. In this object, a function, the \(\mathtt{x}\) most certainly does not represent a single value.

[Figure: a function]

But, crucially, \(\mathtt{x}\) doesn’t have to represent a single value even back in 6th grade. That is, when solving an equation in middle school, the variable may wind up being one number, but we don’t HAVE to make students think that it always will. An equation—even a simple 6th-grade equation—can have no solutions, one solution, or all kinds of different solutions. For example, \(\mathtt{x = x + 2}\) has no real solutions; \(\mathtt{6x = 2(3x)}\) has an infinite number of solutions; whereas the tricky \(\mathtt{x = 2x}\) and \(\mathtt{x = 2x + 2}\) each have one solution. (The latter is a 7th-grade equation, though.)
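The three cases can be captured by a small check on the coefficients of \(\mathtt{ax + b = cx + d}\) (a sketch under my own naming, not anything from a curriculum):

```python
# Classify a linear equation ax + b = cx + d by its number of
# solutions. Illustrative sketch; the names are my own.

def classify(a, b, c, d):
    """How many solutions does ax + b = cx + d have?"""
    if a != c:
        return "one solution"              # x = (d - b) / (a - c)
    if b == d:
        return "infinitely many solutions" # every x works
    return "no solutions"                  # contradiction, e.g. 0 = 2

print(classify(1, 0, 1, 2))   # x = x + 2   -> no solutions
print(classify(6, 0, 6, 0))   # 6x = 2(3x)  -> infinitely many solutions
print(classify(1, 0, 2, 0))   # x = 2x      -> one solution (x = 0)
```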

Once Is Not Enough

The point that an unknown in an equation does not automatically represent one value could be made a little better if solving quadratics or absolute value equations typically preceded an introduction to functions. But even if the content were moved around to fit those topics before functions, the trench is dug mighty deep in middle school. Further, the 8th-grade standard that references different numbers of solutions as we did above, 8.EE.C.7a, is too late, and is often interpreted by curricula as comparing two linear expressions (e.g., \(\mathtt{y = x}\) vs. \(\mathtt{y = x + 2}\); parallel so no solutions), thus keeping the one-solution stipulation ironically intact.

Frequent reminders, from the introduction of variables through the introduction of functions, would serve students better, I think, especially when they tackle concepts such as domain and range. The notion that an unknown can represent 0, 1, or multiple values could also help to make linear algebra a bit more approachable when it is introduced.



Check out John Redden and Paul Gonzalez-Becerra’s Open Graphing Calculator, which I used in this post.

Inference Calls in Text


Britton and Gülgöz (1991) conducted a study to test whether removing “inference calls” from text would improve retention of the material. Inference calls are locations in text that demand inference from the reader. One simple example from the text used in the study is below:

Air War in the North, 1965

By the Fall of 1964, Americans in both Saigon and Washington had begun to focus on Hanoi as the source of the continuing problem in the South.

There are at least a few inferences that readers need to make here. Readers need to infer the temporal link between “the Fall of 1964” and the “1965” of the title, they are asked to infer that “North” in the title refers to North Vietnam, and they need to infer that “Hanoi” refers to the capital of North Vietnam.

The authors of the study identified 40 such inference calls (using the “Kintsch” computer program) throughout the text and “repaired” them to create a new version called a “principled revision.” Below is their rewrite of the text above, which appeared in the principled revision:

Air War in the North, 1965

By the beginning of 1965, Americans in both Saigon and Washington had begun to focus on Hanoi, capital of North Vietnam, as the source of the continuing problem in the South.

Two other versions (revisions), the details of which you can read about in the study, were also produced. These revisions acted as controls in one way or another for the original text and the principled revision.

Method and Predictions

One hundred seventy college students were randomly assigned one of the four texts–the original or one of the three revisions. The students were asked to read the texts carefully and were informed that they would be tested on the material. Eighty subjects took a free recall test, in which they were asked to write down everything they could remember from the text. The other ninety subjects took a ten-question multiple-choice test on the information explicitly stated in each text.

It’s not at all difficult, given this setup, to anticipate the researchers’ predictions:

We predicted that the principled revision would be retrieved better than the original version on a free-recall test. This was because the different parts of the principled revision were more likely to be linked to each other, so the learner was more likely to have a retrieval route available to use…. Readers of the original version would have to make the inferences themselves for the links to be present, and because some readers will fail to make some inferences, we predicted that there would be more missing links among readers of this version.

This is, indeed, what researchers found. Subjects who read the principled revision recalled significantly more propositions from the text (adjusted mean = 58.6) than did those who read the original version (adjusted mean = 35.5). Researchers’ predictions for the multiple-choice test were also accurate:

On the multiple-choice test of explicit factual information that was present in all versions, we predicted no advantage for the principled revision. Because we always provided the correct answer explicitly as one of the multiple choices, the learner did not have to retrieve this information by following along the links but only had to test for his or her recognition of the information by using the stem and the cue that was presented as one of the response alternatives. Therefore, the extra retrieval routes provided by the principled revision would not help, because according to our hypothesis, retrieval was not required.

Analysis and Principles

Neither of the two results mentioned above is surprising, but the latter is interesting. Although we might say that students “learned more” from the principled revision, subjects in the original and principled groups performed equally well on the multiple-choice test (which tests recognition, as opposed to free recall). As the researchers noted, this result was likely due to the fact that repairing the inference calls provided no advantage to the principled group in recognizing explicit facts, only in connecting ideas in the text.

But the result also suggests that students who were troubled by inference calls in the text just skipped over them. Indeed, subjects who read the original text did not read at a significantly different rate than subjects who read the principled revision, and both groups spent about the same total time reading. Yet, students who read the original text recalled significantly less than those who read the principled revision.

In repairing the inference calls, the authors of the study identified three principles for better texts:

Principle 1: Make the learner’s job easier by rewriting the sentence so that it repeats, from the previous sentence, the linking word to which it should be linked. Corollary of Principle 1: Whenever the same concept appears in the text, the same term should be used for it.

Principle 2 is to make the learner’s job easier by arranging the parts of each sentence so that (a) the learner first encounters the old part of the sentence, which specifies where that sentence is to be connected to the rest of his or her mental representation; and (b) the learner next encounters the new part of the sentence, which indicates what new information to add to the previously specified location in his or her mental representation.

Principle 3 is to make the learner’s job easier by making explicit any important implicit references; that is, when a concept that is needed later is referred to implicitly, refer to it explicitly if the reader may otherwise miss it.


Reference
Britton, B., & Gülgöz, S. (1991). Using Kintsch’s computational model to improve instructional text: Effects of repairing inference calls on recall and cognitive structures. Journal of Educational Psychology, 83(3), 329–345. doi:10.1037//0022-0663.83.3.329

Because of ‘Common’

To the point, this video is still at the top of my ‘Common Core’ pile, because it highlights what I consider to be the most important argument for the standards: just being on the same page.

I’m seeing this firsthand online in conversations among teachers and product development professionals. For the first time, we’re on the same page. That doesn’t mean we agree–that’s not what “being on the same page” has to mean. It just means in this case that we’re literally looking at the same document. And that’s a big deal.

(Speaking of agreement, to be honest, I’d like to see more ‘moderate traditionalist’ perspectives in education online and elsewhere speak in support of the Common Core. There’s no rock-solid evidentiary reason why the ‘No Telling’ crowd should be completely owning the conversation around the CCSS. The 8 Practice Standards are no less methodologically agnostic than the content standards, unless one assumes (very much incorrectly, of course) that it’s difficult for a teacher to open his mouth and directly share his awesome ‘expert’ knowledge of a content domain without simultaneously demanding cognitive compliance from students. And finally, politically, the national standards movement suffers when it becomes associated with more radical voices.)

Years ago, as I was formulating for myself what eventually became these principles of information design, I was originally somewhat firm on including what I called just the “boundary principle” (I’m not good at naming things). This was motivated by my perception at the time (2007, I think) that in any argument about education, there was no agreed upon way to tell who was right. And so the ‘winner’ was the idea that was said the loudest or the nicest or with the most charisma, or was the idea that squared the best with common wisdom and common ignorance, or it had the most money behind it or greater visibility.

The boundary principle, then, was just my way of saying to myself that none of this should be the case–that even though we need to have arguments (maybe even silly ones from time to time), we need to at least agree that this or that is the right room for the arguments. I think the Common Core can give us that room.

The Revolutionary War Is Over

It is painful to read about people who think that the Common Core Standards are a set of edicts foisted on schools by Bill Gates and Barack Obama. But I get it. And, honestly, I see it as the exact same sentiment as the one that tells us that a teacher’s knowledge and a student’s creativity are mutually exclusive and opposing forces. That sentiment is this: we hate experts.

But that “hatred” is just a matter of perception, as we all know. We can choose to hear the expert’s voice as just another voice at the table (one with a lot of valuable experience and knowledge behind it)–as a strong voice from a partner in dialogue–or we can choose to hear it as selfish and tyrannical. And in situations where we are the experts, we can make the same choice.

I want to choose to see strong and knowledgeable people and ideas as a part of the “common” in education.



Text Coherence and Self-Explanation


The authors of the paper (full text) I will discuss here, Ainsworth and Burcham, follow the lead of many researchers, including Danielle McNamara (2001) (full text), in conceiving of text coherence as “the extent to which the relationships between the ideas in a text are explicit.” In addition to this conceptualization, the authors also adopt guidelines from McNamara, et al. (1996) to improve the coherence of the text used in their experiment—a text about the human circulatory system. These guidelines essentially operationalize the meaning of text coherence as understood by many of the researchers examining it:

(1) Replacing a pronoun with a noun when the referent was potentially ambiguous (e.g., replacing ‘it’ with ‘the valves’). (2) Adding descriptive elaborations to link unfamiliar concepts with familiar ones and to provide links with previous information presented in the text (e.g., replacing ‘the ventricles contract’ with ‘the ventricles (the lower chambers of the heart) contract’). (3) Adding connectives to specify the relation between sentences (e.g., therefore, this is because, however, etc.).

Maximal coherence at a global level was achieved by adding topic headers that summarised the content of the text that followed (e.g., ‘The flow of the blood to the body: arteries, arterioles and capillaries’) as well as by adding macropropositions which linked each paragraph to the overall topic (e.g., ‘a similar process occurs from the ventricles to the vessels that carry blood away from the heart’).

Many studies have found that improving text coherence (i.e., improving the “extent to which the relationships between the ideas in the text are made explicit”) can improve readers’ memory for the text. Ainsworth and Burcham mention several in their paper, including studies by Kintsch and McKeown and even the study by Britton and Gülgöz that I wrote up here.

What Britton and Gülgöz find is that when “inference calls”—locations in text that demand some kind of inference from the reader—are “repaired,” subjects’ recall of a text is significantly improved over that of a control group. These results may sum up the advantages seen across research studies in improving text coherence: in general, although there are certainly very few if any simple, straightforward, unimpeachable results available in the small collection of text-coherence studies, researchers consistently find that “making the learner’s job easier” in reading a text by making the text more coherent provides for significant improvement in readers’ learning from that text.

Self-Explanation


In some sense, the literature on self-explanation tells a different story from the one that emerges from the text-coherence research. Ainsworth and Burcham define self-explanation in this way:

A self-explanation (shorthand for self-explanation inference) is additional knowledge generated by learners that states something beyond the information they are given to study.

The authors then go on to describe some advantages offered to readers by the self-explanation strategy, according to research:

Self-explanation can help learners actively construct understanding in two ways; it can help learners generate appropriate inferences and it can support their knowledge revision (Chi, 2000). If a text is in someway [sic] incomplete . . . then learners generate inferences to compensate for the inadequacy of the text and to fill gaps in the mental models they are generating. Readers can fill gaps by integrating information across sentences, by relating new knowledge to prior knowledge or by focusing on the meaning of words. Self-explaining can also help in the process of knowledge revision by providing a mechanism by which learners can compare their imperfect mental models to those being presented in the text.

So, whereas text coherence advantages learners by “repairing” (i.e., removing) inferences, self-explanation often produces gains even when—and perhaps especially when—text remains minimally coherent.

Thus, on the one hand, a comprehensive—though shallow—read of the text coherence literature tells us that improved text comprehension can be achieved by “repairing” text incoherence—by closing informational gaps in text. On the other hand, research shows that significant improvements in learning from text can come from employing a strategy of self-explanation during reading—a method that practically feeds off textual incoherence.

What shall we make of this? Which is more important—text coherence or self-explanation? And how do they (or can they) interact, if at all? These are the questions Ainsworth and Burcham attempt to address in their experiment.

The Experiment

Is maximally or minimally coherent text more beneficial to learning when accompanied by self-explanations? Two alternative hypotheses are proposed:

  1. The minimal text condition when accompanied by self-explanation training will present the optimal conditions for learning. Minimal text is hypothesized to increase self-explaining, and self-explanation is known to improve learning. Consequently, low knowledge learners who self-explain will not only be able to overcome the limitations of less coherence but will actively benefit from it as they will have a greater chance to engage in an effective learning strategy.
  2. Maximally coherence [sic] text accompanied by self-explanation will present the optimal condition for learning. Although maximal text is hypothesized to result in less self-explanation than minimal text, when learners do self-explain they will achieve the benefits of both text coherence and self-explanation.


Forty-eight undergraduate students were randomly separated into four groups, each of which was assigned either a maximally coherent text (Max) or a minimally coherent text (Min) about the human circulatory system. Each group was also given either self-explanation training (T) or no training at all (NT).

All forty-eight students completed a pretest on the subject matter, read their assigned text using self-explanation or not, and then completed a posttest, which was identical to the pretest. The results for each of the four groups are shown below (the posttest results have been represented using bars, and the pretest results have been represented using line segments).

[Figure: pretest and posttest results for the four groups]

The pretest and matching posttest each had three sections, as shown at the left by the sections of each of the bars. Each of these sections comprised different kinds of questions, but all of the questions assessed knowledge of the textbase, which “contains explicit propositions in the text in a stripped-down form that captures the semantic meaning.”

As you can see, each of the four groups improved dramatically from pretest to posttest, and those subjects who read maximally coherent text (Max) performed slightly better overall than those who read minimally coherent text (Min), no matter whether they used self-explanation during reading (T) or not (NT). However, the effect of text coherence was not statistically significant for any of the three sections of the tests. Self-explanation, on the other hand, did produce significant results, with self-explainers scoring significantly higher on two of the three sections than non–self-explainers.

In addition to the posttest, subjects also completed a test comprised of “implicit questions” and one comprised of “knowledge inference questions” at posttest only. The results for the four groups on these two tests are shown below.

[Figure: results on the implicit-question and knowledge-inference tests]

Each of these two tests assessed students’ situation models: “The situation model (sometimes called the mental model) is the referential mental world of what the text is about.” The researchers found that self-explainers significantly outperformed non–self-explainers on both tests. Those who read maximally coherent text also outperformed their counterparts (readers given minimally coherent text) on both tests. However, this effect was significant for only one of the tests, and approached significance for the other test (p < 0.08).

Analysis


If we stop here, we would be justified in concluding that the first hypothesis was the winner here. It would seem that self-explanation has a more robust positive effect on learning outcomes than does text coherence. And since the literature tells us that minimally coherent text produces a greater number of self-explanations than does maximally coherent text, minimizing text coherence is desirable for improving learning.

Luckily, Ainsworth and Burcham went further. They coded the types of self-explanations made by participants and analyzed each as it correlated with posttest scores. While they did find that students who read minimally coherent text produced significantly more self-explanations, they also noted this:

Whilst using a self-explanation strategy resulted in an increase in post-test scores for the self-explanations conditions compared to non self-explanation controls, there was no signficant [sic] correlation within the self-explanation groups between overall amount of self-explanation and subsequent post-test performance. Rather, results suggest that it is specific types of self-explanations that better predict subsequent test scores.

In particular, for this study, “principle-based explanations” (“[making] reference to the underlying domain principles in an elaborated way”), positive monitoring (“statements indicating that a student . . . understood the material”), and paraphrasing (“reiterating the information presented in the text”) were all significantly positively related to total posttest scores, though only the first of those was considered a real “self-explanation.”

Now, each of those correlations seems pretty ridiculous. They all seem to point in one way or another to the completely unsurprising conclusion that understanding a text pretty well correlates highly with doing well on assessments about the text.

What is interesting, however, is the researchers’ observation that the surplus of self-explanations in the “minimal” groups could be accounted for primarily by three other types of self-explanation, none of which, in and of themselves, showed a significant positive correlation with total posttest scores: (1) goal-driven explanations (“an explanation that inferred a goal to a particular structure or action”), (2) elaborative explanations (“inferr[ing] information from the sentence in an elaborated manner”), and (3) false self-explanations (self-explanations that were inaccurate).

To put this in perspective, there were only two other types of “self-explanation” coded that I did not mention here. Out of the six types I did mention, three showed no significant positive correlations with posttest scores (or, in the case of false self-explanations, a significant negative correlation), yet those were the self-explanations that primarily accounted for the significant difference between the minimal and maximal groups.

Or, to put it much more simply, the minimal groups had significantly more self-explanations, but those self-explanations were, in general, either ineffective at raising posttest scores or actually harmful to those scores. It is possible that the significant positive main effect for self-explanation in the study could, in fact, have been greatly helped along by the better self-explanations present in the maximal groups. All of this leads to this conclusion from the researchers:

This study suggests that rather than designing material, which, by its poverty of coherence, will drive novice learners to engage in sense-making activities in order to achieve understanding, we should design well-structured, coherent material and then encourage learners to actively engage with the material by using an effective learning strategy.



Reference:
Ainsworth, S., & Burcham, S. (2007). The impact of text coherence on learning by self-explanation. Learning and Instruction, 17(3), 286–303. doi:10.1016/j.learninstruc.2007.02.004

Cause and Purpose in Text

A neat study in Educational Studies in Mathematics (link) points to a familiar yet disturbing characteristic of elementary mathematics texts—that they lack explanations that give reasons or describe causes and purposes.

In the study, samples from eighteen different elementary mathematics texts used in the UK were analyzed. Researchers were interested in how often the texts provided “reasons” for the mathematics they presented—that is, how often the texts explained a mathematical idea (or solicited an explanation from students) in terms of purposes and causes:

There is evidence that the strength and number of cause and purpose connections determine the probability of comprehension and the recall of information read (Britton and Graesser, 1996) and can indicate a teacher’s or writer’s concern for reasons (Newton and Newton, 2000). Even when writers withhold reasons and provide activities to help children construct them, they cannot assume that this will happen. In books, a concern for reasons, therefore, is often indicated by their presence. Clauses of cause and purpose can, within limits, serve as indicators of this concern (Britton and Graesser, 1996; Newton and Newton, 2000). . . . Clauses are commonly used as units of textual analysis (Weber, 1990). Amongst these clauses, clauses of cause (typically signalled by words like as, because, since) and purpose (typically signalled by in order to, to, so that) were noted.
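As a rough illustration of what counting such clause signals looks like, here is a Python sketch using the signal words quoted above (this regex tally is my own illustration, not the study’s actual method, and naive matching on words like “to” and “as” will overcount):

```python
# Tally cause and purpose signal words in a passage. A rough
# illustration only; naive matching on "to" and "as" overcounts.
import re

CAUSE = r"\b(because|since|as)\b"
PURPOSE = r"\b(in order to|so that|to)\b"

def tally(text):
    text = text.lower()
    return {"cause": len(re.findall(CAUSE, text)),
            "purpose": len(re.findall(PURPOSE, text))}

print(tally("It is a square number because 5 times 5 is 25."))
# {'cause': 1, 'purpose': 0}
print(tally("Line up the digits so that each column holds one place value."))
# {'cause': 0, 'purpose': 1}
```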

Having these data, researchers then compiled the “reason-giving” statements into seven different categories based on their “explanatory purpose.” The results from the study show that mathematics texts seem to lack explanations that have much to do with the content they are supposed to be addressing. (The labels used are my own.)

[Figure: how reason-giving clauses were used in the textbooks, by category]

The first four categories (working counterclockwise from the largest section) were considered non-mathematical. Forty percent of the clauses in the sample provided “the purpose of and instructions for games and other activities intended to provide experience of a topic”; 22% provided “reasons in stories, real-world examples and applications and in descriptions of the basis of analogies”; 1.3% provided “the purpose of text in terms of its learning aims and objectives and could be described as metadiscourse”; and another 1.3% of the clauses “justified assertions of a non-mathematical nature.” Nearly 65% of “reason-giving” in the texts was non-mathematical.

The next two categories were considered mathematical. Just over 13% of the clauses in the sample provided “the intentions of procedures, operations and algorithms for producing a particular mathematical end”; and just over 9% “attempted to justify [mathematical] assertions (e.g., ‘It is a square number because 5 × 5 = 25’).”

Clauses in the final category (symbols) were considered mathematical or non-mathematical, depending on whether or not the symbols in question were mathematical ones. These clauses provided “the purpose of certain words, units, signs, abbreviations, conventions and non-verbal representations.”

This is not to say that writers explain only through clauses of cause and purpose. They may use other devices to the same end and this analysis does not detect them. There is also what the teacher and the child do with the textbook to support understanding, perhaps through practical activity (Entwistle and Smith, 2003). This approach does not detect these directly. The aim of the study, however, is to consider the potential of the children’s text to direct a teacher’s attention to reasons.

It is important to remember that the results do not tell us that, for example, 40% of the clauses in the sample were instructions. They tell us that 40% of the “reason-giving” clauses were used in instructions. The graph above shows how “reason-giving” statements were used in the textbooks. It is, thus, not the case that mathematics texts completely lack explanations, just that they lack explanations (or are in short supply of them) that have much to do with mathematics.

Although these results are generally supportive of the conclusions drawn in the study, they also provide further support, especially in light of these values . . .

Clauses of cause ranged from nil to 3.96% of text (using clauses as the unit) with a mean of 0.68% (s.d. 1.08). Clauses of purpose ranged from nil to 8.03% of text with a mean of 4.77 (s.d. 2.08).

. . . for the long-standing contention that contemporary elementary mathematics textbooks are, primarily, classroom management tools.



Newton, D., & Newton, L. (2006). Could elementary mathematics textbooks help give attention to reasons in the classroom? Educational Studies in Mathematics, 64(1), 69–84. doi:10.1007/s10649-005-9015-z