
27 Characteristics Of Authentic Assessment

Ed note: On May 26, 2015, Grant Wiggins passed away. Grant was tremendously influential on TeachThought’s approach to education, and we were lucky enough for him to contribute his content to our site. Occasionally, we are going to go back and re-share his most memorable posts. This is one of those posts. Thankfully his company, Authentic Education, is carrying on and extending the work that Grant developed.

by Grant Wiggins, Authentic Education


What is “authentic assessment”?

Almost 25 years ago, I wrote a widely read and much-discussed paper entitled "A True Test: Toward More Authentic and Equitable Assessment," published in Phi Delta Kappan. I believe the phrase was my coining, made when I worked with Ted Sizer at the Coalition of Essential Schools, as a way of describing "true" tests as opposed to merely academic and unrealistic school tests. I first used the phrase in print in an article for Educational Leadership entitled "Teaching to the (Authentic) Test" in the April 1989 issue.

(My colleague from the Advisory Board of the Coalition of Essential Schools, Fred Newmann, was the first to use the phrase in a book, a 1988 pamphlet for NASSP entitled Beyond Standardized Testing: Assessing Authentic Academic Achievement in Secondary Schools. His work in the Chicago public schools provided significant findings about the power of working this way.)

So, it has been with some interest (and occasional eye-rolling, as befits an old guy who has been through this many times before) that I have followed a lengthy back-and-forth argument in social media recently as to the meaning of 'authentic' and, especially, the idea of 'authentic assessment' in mathematics.

The debate – especially in math – has to do with a simple question: does ‘authentic’ assessment mean the same thing as ‘hands-on’ or ‘real-world’ assessment? (I’ll speak to those terms momentarily). In other words, in math does the aim of so-called “authentic” assessment rule in or rule out the use of ‘pure’ math problems in such assessments?

A number of math teachers resist the idea of authentic assessment because to them it inherently excludes the idea of assessing pure mathematical ability. (Dan Meyer cheekily refers to ‘fake-world’ math as a way of pushing the point effectively.)

Put the other way around, many people are defining 'authentic' as 'hands-on' and practical, in which case pure math problems are ruled out.


The Original Argument

In the Kappan article I wrote as follows:

Authentic tests are representative challenges within a given discipline. They are designed to emphasize realistic (but fair) complexity; they stress depth more than breadth. In doing so, they must necessarily involve somewhat ambiguous, ill-structured tasks or problems.

Notice that I implicitly addressed mathematics here by referring to ‘ill-structured tasks or problems.’ More generally, I referred to “representative challenges within a discipline.” And notice that I do not say that it must be hands-on or real-world work. It certainly CAN be hands-on but it need not be. This line of argument was intentional on my part, given the issue discussed above.

In short, I was writing already mindful of the critique I, too, had heard from teachers of mathematics, logic, language, cosmology and other ‘pure’ as opposed to ‘applied’ sciences in response to early drafts of my article. So, I crafted the definition deliberately to ensure that ‘authentic’ was NOT conflated with ‘hands-on’ or ‘real-world’ tasks.

My favorite example of a “pure” HS math assessment task involves the Pythagorean Theorem:

We all know that A² + B² = C². But think about the literal meaning for a minute: the area of the square on side A + the area of the square on side B = the area of the square on side C. So here's the question: does the figure we draw on each side have to be a square? Might a more generalizable version of the theorem hold true? For example: is it true or not that the area of the rhombus on side A + the area of the rhombus on side B = the area of the rhombus on side C? Experiment with this and other figures.

From your experiments, what more general version of the theorem can you formulate?

This is ‘doing’ real mathematics: looking for more general/powerful/concise relationships and patterns – and using imagination and rigorous argument to do so, not just plug and chug. (There are some interesting and surprising answers to this task, by the way.)
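(An aside not in Grant's original task, sketched here only for readers who want to check their experiments against the standard scaling argument: the area of any plane figure scales with the square of its linear dimensions. So if the same shape is erected on each side, each figure similar to the others, its area is a fixed multiple k of the square of that side:

area on A = k·A², area on B = k·B², area on C = k·C²

and A² + B² = C² then gives k·A² + k·B² = k·C², so the two smaller areas still sum to the largest. The constant k depends only on the shape: k = 1 for squares, k = √3/4 for equilateral triangles, k = π/8 for semicircles drawn on each side as diameter.)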

The Definition Of Hands-On & Real-World

While I don't think there are universally accepted definitions of 'real-world' and 'hands-on,' the similarities and differences seem straightforward enough to me.

A 'hands-on' task, as the phrase suggests, is to be distinguished from a merely paper-and-pencil, exam-like task. You build stuff; you create works; you get your hands dirty; you perform. (Note, therefore, that 'performance assessment' is not quite the same as 'authentic assessment'.)

In robotics, life-saving, and business courses we regularly see students create things and use their learning as a demonstration of (practical as well as theoretical) understanding.

A 'real-world' task is slightly different. It may or may not involve writing or hands-on work, but the assessment is meant to focus on the impact of one's work in real or realistic contexts. A real-world task requires students to deal with the messiness of real or simulated settings, purposes, and audiences (as opposed to a simplified and "clean" academic task performed for no audience but the teacher-evaluator).

So, a real-world task might ask the student to apply for a real or simulated job, perform for the local community, raise funds and grow a business as part of a business class, make simulated travel reservations in French to a native French speaker on the phone, etc.

Here is the (slightly edited) chart from the Educational Leadership article describing all the criteria that might bear on authentic assessment. It now seems unwieldy and off in places to me, but I think readers might benefit from pondering each element I proposed 25 years ago:

27 Characteristics Of Authentic Assessment

Authentic assessments –

A. Structure & Logistics

1. Are more appropriately public; involve an audience, panel, etc.

2. Do not rely on unrealistic and arbitrary time constraints

3. Offer known, not secret, questions or tasks.

4. Are not one-shot – more like portfolios or a season of games

5. Involve some collaboration with others

6. Recur – and are worth retaking

7. Make feedback to students so central that school structures and policies are modified to support it

B. Intellectual Design Features

1. Are “essential” – not contrived or arbitrary just to shake out a grade

2. Are enabling, pointing the student toward more sophisticated and important use of skills and knowledge

3. Are contextualized and complex, not atomized into isolated objectives

4. Involve the students’ own research

5. Assess student habits and repertoires, not mere recall or plug-in.

6. Are representative challenges of a field or subject

7. Are engaging and educational

8. Involve somewhat ambiguous (ill-structured) tasks or problems

C. Grading and Scoring

1. Involve criteria that assess essentials, not merely what is easily scored

2. Are not graded on a curve, but in reference to legitimate performance standards or benchmarks

3. Involve transparent, de-mystified expectations

4. Make self-assessment part of the assessment

5. Use a multi-faceted analytic trait scoring system instead of one holistic or aggregate grade

6. Reflect coherent and stable school standards

D. Fairness

1. Identify (perhaps hidden) strengths [not just reveal deficits]

2. Strike a balance between honoring achievement and remaining mindful of fortunate prior experience or training [that can make the assessment invalid]

3. Minimize needless, unfair, and demoralizing comparisons of students to one another

4. Allow appropriate room for student styles and interests [ – some element of choice]

5. Can be attempted by all students via available scaffolding or prompting as needed [with such prompting reflected in the ultimate scoring]

6. Have perceived value to the students being assessed.

I trust that this clarifies some of the ideas and resolves the current dispute, at least from my perspective. I'm happy to hear from those of you with questions, concerns, or counter-definitions and counter-examples.

This article first appeared on Grant's personal blog as "27 Characteristics Of Authentic Assessment"; image attribution: flickr user woodleywonderworks


Academic Standards: Breaking Whole Things Into Broken Bits


by Grant Wiggins, Ph.D., Authentic Education

In the just-released Math Publishers' Criteria document on the Common Core Standards, the authors say this about (bad) curricular decision-making:

“’Fragmenting the Standards into individual standards, or individual bits of standards … produces a sum of parts that is decidedly less than the whole’ (Appendix from the K-8 Publishers’ Criteria). Breaking down standards poses a threat to the focus and coherence of the Standards. It is sometimes helpful or necessary to isolate a part of a compound standard for instruction or assessment, but not always, and not at the expense of the Standards as a whole.

“A drive to break the Standards down into ‘microstandards’ risks making the checklist mentality even worse than it is today. Microstandards would also make it easier for microtasks and microlessons to drive out extended tasks and deep learning. Finally, microstandards could allow for micromanagement: Picture teachers and students being held accountable for ever more discrete performances. If it is bad today when principals force teachers to write the standard of the day on the board, think of how it would be if every single standard turns into three, six, or a dozen or more microstandards. If the Standards are like a tree, then microstandards are like twigs. You can’t build a tree out of twigs, but you can use twigs as kindling to burn down a tree.”

Hallelujah! As readers and friends know, I have been harping on this problem for decades, especially with regard to mathematics instruction and assessment. So, to have such a clear statement is welcome. Not that I am naïve enough to think that a mere statement will alter some people's wrong-headed thinking and habits. But this should catch some attention.

Teaching Bits Out Of Context

This problem of turning everything into "microstandards" is a longstanding one in education. One might even say it is the original sin of curriculum design: take a complex whole, divide it into the simplest and most reductionist bits, string them together, and call it a curriculum. Though well-intentioned, this approach leads to fractured, boring, and useless learning of superficial bits.

Here is John Dewey on the problem – and the false analogy with physical taking apart that it is based on – writing over 100 years ago:

“Only as we need to use just that aspect of the original situation as a tool of grasping something perplexing or obscure in another situation, do we abstract or detach the quality so that it becomes individualized…. If the element thus selected clears up what is otherwise obscure in the new experience, if it settles what is uncertain, it thereby itself gains in positiveness and definiteness of meaning.

Even when it is definitely stated that intellectual and physical analyses are different sorts of operations, intellectual analysis is often treated after the analogy of physical; as if it were the breaking up of a whole into all its constituent parts in the mind instead of in space. As nobody can possibly tell what breaking a whole into its parts in the mind means, this conception leads to the further notion that logical analysis is a mere enumeration and listing of all conceivable qualities and relations.

The influence upon education of this conception has been very great. Every subject in the curriculum has passed through — or still remains in — what may be called the phase of anatomical or morphological method: the stage in which understanding the subject is thought to consist of multiplying distinctions of quality, form, relation, and so on, and attaching some name to each distinguished element. In normal growth, specific properties are emphasized and so individualized only when they serve to clear up a present difficulty. Only as they are involved in judging some specific situation is there any motive or use for analyses, i.e. for emphasis upon some element or relation as peculiarly significant. [emphasis added]”

Dewey’s point is clear even if the writing is dense: so-called analysis of things into bits for the purpose of learning the whole has no basis in cognitive psychology or epistemology. Indeed, as he says just after, it is a case of putting the cart before the horse. Distinctions are made when we need them in the service of understanding. Learning an endless array of distinctions and their names yields no meaning and merely verbal knowledge.

To put it graphically, this is how driver’s education would look if we followed such logic:

[image: car parts]

Mastery Learning projects made this mistake in droves in the 70s and 80s. The idea of backward design from competency was bastardized into Learn All the Bits, and we'll call it mastery if you get over 80% on all the quizzes. The same thing is happening today in many projects, like RISC, that call themselves competency-based. All these projects amount to is a march through endless microstandards. In some projects, students cannot "advance" to the next "level" until and unless they test out on "interim assessments" of lots of little bits of knowledge out of context.

That's not only dumb but immoral: lots of great performers might not have mastered some of the bits first. As I have long said, it is like not allowing a kid to play soccer until they have mastered 100 paper-and-pencil quizzes on each soccer bit. And my blog readers can easily understand my harangue against Algebra I courses as a textbook (!) example of such a mistake.

Related Error: Fixation On Premature Technical Vocabulary

A similar problem is to assume that students need, first, to learn all sorts of technical vocabulary in learning the little bits. Here is a dreadful example from a middle school science book that we are working with as part of a curriculum-writing project for a client.

The book's topic and title is Sound and Light. By page 10 of Chapter 1, the following terms have been (needlessly) introduced to discuss waves: transverse, mechanical, troughs, longitudinal, compressions, rarefactions. By page 12 the authors add amplitude, wavelength, and frequency. Middle school! The chapter ends with three formulas utterly out of context.

The chapter assessment? Recall the terms and plug some data into the formulas, of course! No discussion of why we might be interested in waves; no discussion of the link to key physics questions, like the quest to understand light and sound; no discussion of why or how one might use these distinctions or formulas to learn something interesting.

Absurdly, all of this is introduced before any observations and experiments with waves. This is not only how waves are introduced to the student but how science is implicitly to be understood – as the picayune naming of bits of experience. Why would a young middle schooler become interested in science through such an introduction?

Or as Dewey famously described this mistake in Democracy and Education:

“There is a strong temptation to assume that presenting subject matter in its perfected form provides a royal road to learning. What more natural than to suppose that the immature can be saved time and energy, and be protected from needless error by commencing where competent inquirers have left off? The outcome is written large in the history of education. Pupils begin their study . . . with texts in which the subject is organized into topics according to the order of the specialist. Technical concepts and their definitions are introduced at the outset.

Laws are introduced at an early stage, with at best a few indications of the way in which they were arrived at. . . . The pupil learns symbols without the key to their meaning. He acquires a technical body of information without ability to trace its connections [to what] is familiar—often he acquires simply a vocabulary (p. 220).”

Conclusion

So, please: let this be a warning to all course designers, curriculum writers, and (especially) textbook designers. The sum of the itty bitty parts is not a whole, ever. You need to understand that movement toward interest in and mastery of a complex whole requires designing backward from – and never losing sight of! – the complex whole and the interesting questions related to it.

We do it right much of the time in soccer, immersion approaches to foreign language, art, and philosophy. Math, history, many science courses, and many foreign language courses get it hopelessly wrong – making the same mistake, yet again, that Dewey wrote about over 100 years ago. It’s way past time to avoid this unthinking error.

This post was originally published on Grant’s personal blog; image attribution flickr user eilonwy77


How Student Work Models Make Rubrics More Effective

Ed note: On May 26, 2015, Grant Wiggins passed away. Grant was tremendously influential on TeachThought’s approach to education, and we were lucky enough for him to contribute his content to our site. Occasionally, we are going to go back and re-share his most memorable posts. This is one of those posts. Thankfully his company, Authentic Education, is carrying on and extending the work that Grant developed.


by Grant Wiggins, Authentic Education

It was not that long ago when I did a workshop where the staff from the Dodge Foundation (who were funding my work at the time) took me aside at the break because they were concerned about my constant use of a term they had never heard of – rubric. Those of us promoting rubrics over the past 20 years can now smile and take satisfaction in the fact that the term is familiar and the use of rubrics is commonplace worldwide.

Alas, as with other good ideas, there has been some stupidification of this tool. I have seen unwise use of rubrics and countless poorly-written ones: invalid criteria, unclear descriptors, lack of parallelism across scores, etc. But the most basic error is the use of rubrics without models. Without models to validate and ground them, rubrics are too vague and nowhere near as helpful to students as they might be.

Consider how a valid rubric is born. It summarizes what a range of concrete works looks like as reflections of a complex performance goal. Note two key words: complex and summarizes. All complex performance evaluation requires a judgment of quality in terms of one or more criteria, whether we are considering essays, diving, or wine. The rubric is a summary that generalizes from lots and lots of samples (sometimes called models, exemplars, or anchors) across the range of quality, in response to a performance demand. The rubric thus serves as a quick reminder of what all the specific samples of work look like across a range of quality.

Cast as a process, then, the rubric is not the first thing generated; it is one of the last things generated in the original anchoring process. Once the task has been given and the work is collected, one or more judges sort the work into piles while working from some general criteria. In an essay, we care about criteria such as valid reasoning, appropriate facts, and clarity. So, the judges sort each sample into growing piles that reflect a continuum of quality: this pile has the best essays in it; that pile contains work that does not quite meet the criteria as well as the top pile; and so on.

Once all the papers have been scored, the judge(s) then ask: OK, how do we describe each pile in summary form, to explain to students and other interested parties the difference in work quality across the piles, and how each pile differs from the other piles? The answer is the rubric.

Huh? Grant, are you saying the assessment is made before there are rubrics? Isn’t that backward?

No, not in the first assessment. Otherwise, how would there ever be a first assessment? It's like the famous line from Justice Potter Stewart: I can't define pornography, but I know it when I see it. That's how it works in any judgment: the judgments come first; then we turn our somewhat inchoate judgments into fleshed-out descriptors – rules that rationalize the judgments into a more general and valid system. Helpful rubrics offer rich descriptors that clarify for learners the qualities sought; poor rubrics amount to no more than saying that Excellent is better than Good.

Once we have the rubrics, of course, we can use them in future assessments of the same or similar performance. But here is where the trouble starts. A teacher borrows a rubric from a teacher who borrowed the rubric, etc. Neither the current teacher nor students know what the language of the rubric really means in the concrete because the rubric has become unmoored from the models that anchor and validate it. In a very real sense, then, neither teacher nor students can use the rubric to calibrate their work if there are no models to refer to.

Look at it from the kids’ point of view. How helpful is the following descriptor in letting me know exactly what I have to do to get the highest score? And how does excellence differ from merely adequate? (These two descriptors actually come from a state writing assessment):

5. This is an excellent piece of writing. The prompt is directly addressed, and the response is clearly adapted to audience and purpose. It is very well-developed, containing strong ideas, examples and details. The response, using a clearly evident organizational plan, engages the reader with a unified and coherent sequence and structure of ideas. The response consistently uses a variety of sentence structures, effective word choices and an engaging style.

3. This is an adequate piece of writing. Although the prompt is generally addressed and the response shows an awareness of audience and purpose, the response’s overall plan shows inconsistencies. Although the response contains ideas, examples and details, they are repetitive, unevenly developed and occasionally inappropriate. The response, using an acceptable organizational plan, presents the reader with a generally unified and coherent sequence and structure of ideas. The response occasionally uses a variety of sentence structures, appropriate word choices and an effective style.

Do you see the problem more clearly? Without the models I cannot be sure what, precisely and specifically, each of the key criteria – well-developed, strong ideas, clearly evident organizational plan, engages the reader, etc. – really means. I may now know the criteria, but without the models I don't really know the performance standard; I don't know how 'strong' is strong enough, nor do I know whether my ideas are 'inappropriate.' There is no way I can know without examples of strong vs. not strong and appropriate vs. inappropriate (with similar contrasts needed for each key criterion).

In fact, without the models, you might say that this paper is 'well-developed' while I might say it is 'unevenly developed.' That's the role of models; that's why we call them 'anchors': they anchor the criteria in terms of a specific performance standard.

Knowing the criteria is better than nothing, for sure, but it is nowhere near as helpful as having both rubric and models. This same argument applies to the Common Core Standards: we don't know what they mean until we see work samples that meet vs. don't meet the standards. It is thus a serious error that the samples for Writing sit in an appendix to the Standards, where far too few teachers are likely to find them.

This explains why the AP program, the IB program, and state writing assessments show samples of student work – and often also provide commentary. That’s really the only way the inherently-general language of the rubric can be made fully transparent and understood by a user – and such transparency of goals is the true aim of rubrics.

This is why the most effective teachers not only purvey models but ask students to study and contrast them so as to better understand the performance standards and criteria in the concrete. In effect, by studying the models, the student simulates the original anchoring process and stands a far better chance of internalizing and thus independently meeting the standard.

But doesn’t the use of models inhibit creativity and foster drearily formulaic performance?

This is a very common question in workshops. Indeed, it was posed in a recent workshop we ran in Prince George's County (and it spawned the idea for this post). The answer? Not if you choose the right models! Even some fairly smart people in education seem confused on this point. As long as the models are varied, of genuine quality, and communicate through their variety that the goal is original thought, not formula, there is no reason why students should respond formulaically except out of fear or habit.

If you don’t want 5-paragraph essays, don’t ask for them! If you don’t want to read yet another boring paper, specify via the examples and rubric descriptors that fresh thinking gets higher scores!

Bottom line: never give kids the excuse that "I didn't really know what you wanted!" Always purvey models to make goals and supporting rubrics intelligible and to make good performance more likely. Thus, make sure that the variety of models and the rubrics reflect exactly what you are looking for (and what you are warning students to avoid).

In my next post, I'll provide some helpful tips on how to design, critique, and refine rubrics; how to avoid creativity-killing rubrics; and other tips on how to implement them to optimize student performance. Meanwhile, if you have questions or concerns about rubric design and use, post a reply with your query and I'll respond in the following post.

This article first appeared on Grant’s personal blog


How Much Freedom Should A Teacher Have?


by Grant Wiggins, Ed.D.

Ed note: On May 26, 2015, Grant Wiggins passed away. Grant was tremendously influential on TeachThought's approach to education, and we were lucky enough for him to contribute his content to our site. Occasionally, we are going to go back and re-share his most memorable posts. This is one of those posts. Thankfully his company, Authentic Education, is carrying on and extending the work that Grant developed.

In a recent blog post, I commented on my dismay at a finding about teaching in the just-released annual Kappan poll on education: most Americans think teachers are born, not made; I disagreed. Today I want to comment on what to me was another troublesome finding: the public's view (presumably held by many teachers) about a required curriculum, and the broader question – vital for all educators to ponder – of how much freedom should exist, and where, in teaching.

The poll question: Should education policies require teachers to follow a prescribed curriculum so all students can learn the same content, or should education policies give teachers flexibility to teach in ways they think best?

First, a comment on the question itself: apples and oranges, alas. The question is not well framed; it conflates the 'what' with the 'how' of teaching. Content can be mandated while teaching can still vary: the question as framed obscures this important distinction (probably just to be intelligible to laypersons).

In fact, this distinction probably reflects the norm: in most districts, and even in many private schools, there is a curriculum framework that obligates all who teach the courses/grades in question, while little is mandated in terms of specific teaching techniques or instructional activities; that is typically left up to teachers. The public's response – 70% want teachers to have the flexibility – probably reflects a broadly held view that practitioners (in any field) should be able to exercise judgment about what clients need.

Yet this result (and the conflated version of the question) raises an important question about just what it means to be a professional. How much freedom to teach a certain way should a professional educator have?

I trust readers agree that professional freedom does not permit one to deviate from the content of the curriculum or to ignore specified content goals. Teachers aren't self-employed entrepreneurs to whom we 'rent space in the educational mall' (as one exasperated high school principal once said to me). There is an organization, it has a mission, and students have a right to a valid and coherent education over time and across teachers and schools if they move. Indeed, both the research and common sense are clear about the virtue of such a mandate.

A 'guaranteed and viable' curriculum, in Marzano's phrase (summing up years of meta-analysis), is key if you want to ensure that the largest number of students achieve desired outcomes. The success of the Standards movement among rank-and-file teachers shows that we have come to accept this view. However, not enough frank discussion has been had in schools about the other issue – the so-called right to teach as one sees fit.

Though the 25-year-old rebel in me says "Of course!" and the 61-year-old in me has a negative visceral reaction to scripted teaching programs like Success For All, the claim to unfettered pedagogical freedom is, in fact, a difficult position to maintain objectively. All learning goals imply that some pedagogies are appropriate and others are not – given the stated goals and given how people learn.

You can’t only lecture if you aim to develop critical thinkers.

You can’t merely march through textbook exercises in math if you seek to develop great problem solvers.

You cannot just tell students what history means if you want them to develop the ability to analyze events and documents themselves.

Aligning Your Instructional Practice With Learning Goals

Alas, many teachers and (especially) college professors often rely on instructional methods that are completely incompatible with stated course and program goals. Put briefly, we fail to serve students by using ineffective and unvaried approaches.

So, I think it is reasonable to ask: can’t we tighten this up professionally? Can’t we be more clear and less loosey-goosey about just what is and isn’t negotiable in instruction, given the stated goals and what they logically demand of the use of class time and the learners’ minds?

Furthermore, in few professions are novices allowed to freelance. No doctor or electrician can blithely invent basic technique or simply decide not to use by-the-book solutions to diagnoses or problems. In fact, in medical education (as I have since learned from discussing these issues with medical school educators), no intern or resident has the authority to administer any intervention without the sign-off of superiors; and few doctors would deviate from prescribed responses to common ailments unless those prescribed approaches failed to work.

Why should teaching be any different?

Why, for example, would we allow a 22-year-old teacher, fresh out of college, to decide on her own (working mostly in isolation, on top of it) what her students need all year as readers in terms of learning activities and assessment? Why wouldn’t we frame core high-quality math units in some detail and only give math teachers the authority to deviate from them if student results and indicators gained via supervision and walk-throughs suggest that they are effective as teachers?

More generally, don't many of us now subscribe to the view that there really is "best practice" to be learned and used when called for, as in the case of medicine? Then why would we permit, as the default, the freedom to ignore best practice and invent one's own? This doesn't mean that there has to be a rigid, inflexible script. Nor does it mean that we take good judgment away from practitioners. On the contrary, as medicine reveals, good judgment best enters when conventional diagnoses and prescriptions fail to work.

The curriculum could thus map out in some detail a few excellent options, based on what wise practitioners know is optimal for causing the desired results. Perhaps more importantly, a professional curricular guide would specify in detail a troubleshooting guide: here are signs that things aren’t going optimally, and here are tried and true alternative solutions for addressing the situation. Jay McTighe and I have long argued that all curricula should have a major section on troubleshooting as part of the document.

Alas, far too many college professors and many high school teachers hold the misguided notion that their "academic freedom" protects them from any mandates about how to teach. This is nonsense: they conflate intellectual freedom with freedom from pedagogical obligation. No teacher, not even a professor with tenure, has the right, for example, to design invalid and capricious exams for which students are not prepared. No professor or teacher has the right to use pedagogical approaches that are unethical or totally inappropriate for the goals of the course. But too few college and high school leaders want to go there.

If we want to be a profession it’s time we went there.

Let’s finally have a proper debate, then, in staff and department meetings. Here are 4 questions to start our thinking:

1. Where should there be obligation and where should there be freedom in choice of pedagogy?

2. How much variance should be built into curricula?

3. Where is there a clear set of bona fide “best practices” that must be used and used well if one is to be called a professional educator?

4. What should we do when teachers persist in doing things that are primarily comfortable for them instead of doing what best practice demands?

This article first appeared on Grant's personal blog; follow Grant on twitter


14 Questions Every Teacher Should Ask Themselves About That Lesson Plan


Ed note: On May 26, 2015, Grant Wiggins passed away. Grant was tremendously influential on TeachThought’s approach to education, and we were lucky enough for him to contribute his content to our site. Occasionally, we are going to go back and re-share his most memorable posts. This is one of those posts.

Thankfully his company, Authentic Education, is carrying on and extending the work that Grant developed.

by Grant Wiggins

How do you plan? Here are some of the questions that I think we need better answers to…

  1. Do you plan each day? Weekly? By the unit?
  2. How often do you adjust your future plans based on formative results?
  3. How often is a textbook the source of the plan? What % of the plan is directly from a textbook?
  4. How free are you to plan your own course/units/lessons?
  5. How often is the district curriculum and/or course map referenced in your own planning?
  6. How detailed are your plans?
  7. What’s the role of templates and checklists in your planning?
  8. How do you think, ideally, you should plan for optimal preparation and good results?
  9. How much of the planning process, ideally, should be mandated or at least recommended?

Typical plans focus too much on fragmented day-to-day lessons and activities on discrete topics instead of deriving coherent plans 'backward' from long-term performance. The result is the beast called "coverage." More subtly, many plans focus far too much on what the teacher and students will be doing instead of mapping out a plan for causing specific results and changes in ability, attitude, and behavior. A surprising number of plans do not make student engagement a central design consideration. Most plans have no explicit Plan B for when Plan A doesn't work. And an even larger number are not planned mindful of predictable misconceptions and rough spots.

The value of a template – with cautions. It was for these reasons and more that Jay McTighe and I wrote Understanding by Design 14 years ago. We clearly struck a chord: the book is in its 2nd edition, over a million copies have been sold and used in countries all over the world, and over 150 schools of education use the book to train teachers in unit writing. Over the years, countless people have thanked us for helping them become more thoughtful and disciplined in their planning.

However, never did Jay and I intend for our template to be an act of pointless drudgery, a piece of busywork required by thoughtless supervisors. Never did we intend people to fixate on filling in boxes. Never did we advocate using the UbD Unit Template as a lesson planner. Indeed, in our latest books on unit planning we stress this point in an entire module. You can download an excerpt here: Mod O – on lesson plans (excerpt).

We have hardly treated our own Template as a sacred untouchable icon. We have changed it 4 different times over the past 14 years, and we have provided examples in which various features of the Template were highlighted or left out. In short, we had zero intent of putting teachers in a planning straitjacket. Alas, some mandate-minded supervisors are currently fitting all their teachers for one.

Rather, as with any tool, the template is meant to be a helpful aid, a mental check. The idea of a good checklist is what's key. Atul Gawande has written extensively on how the "pre-flight" checklist in medicine, modeled on the one used in every airplane cockpit, has saved lives.

An instructional planning template can save intellectual lives, we think. By having to think of the big ideas, by focusing on transfer as a goal, by worrying about whether goals and assessments align, and by being asked to predict misconceptions and rough spots in the learning, the template keeps front and center key design questions that tend to get lost in typical planning, where teachers too easily think only about content to be covered.

Years ago, in working with college professors as part of Lee Shulman's Scholarship of Teaching program, a history professor from Notre Dame said, "I can't use a template. It's so, so, so – schoolish!"

I replied, “Do you like the planning questions in the boxes?”

He said he did.

“Then, ignore the template and consider the questions,” I said.

“Oh,” he said, “I can do that.”

Precisely.

Planning questions. Here are the current UbD template elements framed as questions, for idea-generation and double-checking one’s draft plan.

14 Questions Every Teacher Should Ask Themselves About That Lesson Plan

  1. Bottom line, what should learners be able to do with the content?
  2. What content standards and program- or mission-related goal(s) will this unit address?
  3. What thought-provoking questions will foster inquiry, meaning-making, and transfer?
  4. What specifically do you want students to understand? What inferences should they make? What misconceptions are predictable and will need overcoming?
  5. What facts and basic concepts should students know and be able to recall and use long-term?
  6. What discrete skills and processes should they be able to use, with good judgment and on their own?
  7. What criteria will be used in each assessment to evaluate attainment of the desired results?
  8. What assessments will provide valid evidence of the goals?
  9. What other evidence will you collect to determine whether goals were achieved?
  10. How will you pre-assess and formatively assess? How will you adjust, if needed (as suggested by feedback)?
  11. Does the learning plan reflect principles of learning and best practices?
  12. How will you fully engage everyone and hold their interest throughout the unit?
  13. How must the plan be tweaked, in light of recent results (and based on ongoing student needs and interests)?
  14. Is there tight alignment across goals, assessments, and learning?

Please let us know how you plan. Please share, in as much detail as you can provide, in the Comments section. Happy Summer Planning!



The Point Of School Isn’t To Get Good At School


by Grant Wiggins

Ed note: On May 26, 2015, Grant Wiggins passed away. Grant was tremendously influential on TeachThought’s approach to education, and we were lucky enough for him to contribute his content to our site. Occasionally, we are going to go back and re-share his most memorable posts. This post on ‘transfer’ is one of those posts.

Thankfully his company, Authentic Education, is carrying on and extending the work that Grant developed. 

Arguably transfer is the aim of any education.

Given that there is too much for anyone to learn; given that unpredictability is inevitable; given that being flexible and adaptive with one’s repertoire is key to any future success, it stands to reason that we should focus our ‘backward-design’ efforts on the goal of transfer, regardless of what and who we teach (and in spite of pressures to merely ‘cover content’ – which ironically inhibits transfer and worsens test scores, as I discuss below and in the next post).

The point of school is not to get good at school but to effectively parlay what we learned in school into other learning and into life.

This notion is now front and center in the latest Understanding by Design (UbD) book, Creating High-Quality Units. The new Template highlights transfer goals, since "understanding" surely implies, among other things, "effective use of content." And we have worked hard to help readers and users of UbD understand that the TMA troika is their complex obligation: transfer of learning, meaning-making, and content acquisition.

Learning stuff is not the goal; it's the means.

Furthermore, if you ask people to identify their long-term goals for the year or their career, they almost always identify transfer goals: read widely and deeply, independently; relate current affairs to history and become involved civically; solve all kinds of non-routine problems in and beyond math, etc. Great!

But… few teachers plan, teach, and assess as if this were the case. Most teachers' long-term goals are not reflected in the sum total of their assignments and assessments – and that's why UbD remains needed. The overwhelming reality, in even the best schools, is that your task as a student is by and large to learn stuff and be tested on whether you learned it.

In this post, I want to go back to basics and remind readers of what transfer is and isn't as a goal. In my next post I want to look at various released test items that plainly reveal that the most challenging test items demand transfer, not recall. And in my third post I will discuss a few key impediments to effectively teaching and assessing for transfer, how we might begin to remove them, and some tools and tips for achieving better results.

Definition of Transfer

Let’s begin with a simple overview of transfer from the first paragraph of the most helpful summary on the subject: Chapter 3 on ‘Learning and Transfer’ from the book How People Learn from the National Academy of Sciences (available for free here). Here is how transfer is defined and justified as a goal:

[Transfer is] the ability to extend what has been learned in one context to new contexts. Educators hope that students will transfer learning from one problem to another within a course, from one year in school to another, between school and home, and from school to workplace. Assumptions about transfer accompany the belief that it is better to broadly “educate” people than simply “train” them to perform particular tasks.

Note, then, a key term in the definition: context. And what this really means is contexts. You have not really learned something well unless you can extend or apply what you learned in one context to a new context (a new framing of the task, audience, purpose, setting, etc.). You cannot just give me back what I taught you, in a task framed just like the ones through which I taught it and you practiced it. In the famous phrase in math, it can't just be a 'plug and chug' prompt. There is a further implication in the definition that needs to be made explicit: I can only be said to have transferred my learning if I did it autonomously, without much teacher reminding and guidance.

I often use the example of soccer in workshops to illustrate the point. As a coach, I often created drills for helping players learn to ‘create space’ on offense. But soccer is not the sum of the drills: can you now – on your own, in a sport with no scripts – apply those drills in the context of a fluid and novel game situation? Can you now ‘see’ when to use which of the skills we practiced – without my telling you what to do at every turn? That’s my aim as a coach and yours as a player.

John Wooden famously and paradoxically said that his aim as a coach was to be surprised by what his players did in a game. A player who has been so well educated and challenged can innovate, and often must, to win. The same thing is arguably true in all academic subjects.

The definition of transfer as the ability to handle novelty is consistent with what Bloom said about application in the Taxonomy:

"Applying of appropriate abstraction without having to be prompted as to which abstraction is correct or without having to be shown how to use it in that situation."[1]

"If the situations…are to involve application as we are defining it here, then they must either be situations new to the student or situations containing new elements as compared to the situation in which the abstraction was learned… Ideally we are seeking a problem which will test the extent to which an individual has learned to apply the abstraction in a practical way… Problems which themselves contain clues as to how they should be solved would not test application."[2]

Many teachers just expect transfer to happen if content is well-taught. No research supports this view.

Students who have not been taught for transfer overwhelmingly respond as follows to a ‘novel’ but do-able challenge: We didn’t cover this; I don’t know what to do. In David Perkins’ famous example, it is like the Physics student in college who complained that, while all the problems studied in class involved shooting cannons into the air, the exam question that involved dropping cannon balls down shafts was unfair because “we never studied any hole problems.”

That achieving transfer is far more difficult than we grasp or care to acknowledge is also clear from soccer. A true story about a former player of mine: when I yelled out to her during a game to apply what we had been learning all week, she yelled back: "But the other team isn't lining up the way we did the drills!!"

Indeed.

Yet this humorous anecdote has a serious consequence: even well-taught students don’t transfer their learning very well. Many students do poorly on high-stakes tests because they don’t see that an unfamiliar-looking test question is related to something they learned.

In effect, whether in soccer, mathematics, or US history, the learners have to be able to see on their own how past learning applies in this 'new' task – without the past learning being explicitly prompted. And, in more challenging transfer tasks, they are going to need some creative insight as well as flexibility in adapting prior learning to a very unfamiliar-looking, unscaffolded task.

Confronting Students With ‘Novel’ Tasks 

Note, then, that the key idea in aiming for and (especially) assessing for transfer is that the student has to successfully confront a "novel" challenge before we should conclude that they really got it. What "novel" means here is an unfamiliar-looking task (as framed) that nonetheless should be doable by the student – if they really learned the related content with understanding.

Here's a simple example: if I teach the 5-paragraph essay, I should be sure to 'test' student understanding of the genre by asking them to read and write a 4- or 7-paragraph essay. But as the now-famous item from the MCAS English test in Massachusetts a few years ago revealed, when students were asked to classify a 17-paragraph piece of writing, only 31% correctly chose 'essay' from the choices – and told newspaper reporters that it "couldn't be an essay because it didn't have 5 paragraphs."

A vital lesson flows from this issue of novelty. Just because a teacher-designed challenge is hands-on and educationally worthy doesn’t mean that it requires much independent application of prior learning. If the task is familiar and the work is scaffolded, little transfer of learning is required.

So, the typical hands-on project – done for all the right reasons – does not assess for transfer if:

1) the student gets help all along the way in completing the project

2) the work is highly contextualized

3) little demand is made for the student to draw general and transferable lessons from the doing of this and other projects.

In fact, since such projects are usually so teacher-scaffolded and highly specific, they may well inhibit later transfer of the very abilities and ideas in question! I grew flowers, but we didn't 'cover' herbs, so…

Here's the other irony, addressed in another post: transfer is precisely what a challenging multiple-choice test question demands of the learner. Learners have to handle questions that look different from the ones they studied – with no hints or ability to question the teacher. The most difficult test questions involve transferable ideas and processes, not obscure facts.

Most ‘test prep’ is thus an utter failure because it conflates the format with the rigor: teachers wrongly focus on practicing the test format (using low-level and familiar items) instead of practicing the test goal where the harder questions require transfer of learning.

In the next installment, I want to analyze released test items that make very concrete and clear how educators often misunderstand tests and thus proper preparation for them; and unintentionally undercut transfer, with unfortunate outcomes.

Image attribution flickr users glynlowephotoworks, travishornung, and tulanepublicrelations; this post was originally published on Grant's blog as "The Point Of School Isn't To Get Good At School: Transfer As The Goal Of Education"


Coverage Teaching Is A Kind Of Blindness


by Grant Wiggins

Ed note: We're continuing to go back and share some of Grant's best and most useful posts. This one on the implementation of Essential Questions is useful even if you're not "doing" UbD, due to its emphasis on inquiry.

The Essential Question as Anchor

Let me offer a concrete example, from when I taught English, of how to get students to draw inferences and come to realizations without "wasting" time, even though it takes more time than just "teaching" the readings. Here are the texts: the Hans Christian Andersen story "The Emperor's New Clothes," Plato's "Allegory of the Cave," and Oedipus the King by Sophocles.

Ancient texts and fairy tales! The design challenge is thus clear: we need to make them both understood and meaningful. In other words, we have to treat the texts as having something to say to teenagers now, and something for them to ponder. So, a good essential question is key. Here is the one I successfully used for years: Who sees and who is blind? The question is asked and re-asked for each text and across texts, and students also know from the start of the unit that it is the final essay prompt. Already, then, the work is somewhat more meaningful and understanding-focused. I have made clear that the texts serve the question, and on its face the question is interesting (or soon will be).

The first activity involves an in-class reading from Winnie-the-Pooh in which Pooh and Piglet hunt (or think they are hunting) Woozles. As the footprints grow in number, they conclude that more and more Woozles are in the area – only to learn (from Christopher Robin in the tree above them) that they have been walking in circles. A lively discussion ensues about illusions and delusions, and especially Piglet's fear and running away. "What would have happened if Christopher Robin had not shouted to them below from his perch in the tree?" I ask at some point. Further lively discussion ensues about how you escape or don't escape "blindness": do you need someone else to escape your own blindness?

I then ask students in small groups to recall times in history where we were “certain” that something was the case, only to have it turn out that we were deluded – and to offer some hypotheses as to why this perpetually happens. Then, we generalize from all the group answers. The first writing assignment for homework requires students to reflect on the question in terms of their own experience: write about a time when you were completely “blind” to some truth even though many others saw the truth and were unsuccessful in getting you to see it. (A later assignment asks them to switch perspective: when did you see something as clear as day that another person remained blind to?)

“Falling Behind”

I trust you’ll agree with me that the question is now likely owned and appreciated by students. We “lost” 2 days of “teaching” the core texts but gained immeasurable motivation and meaning.

The key pedagogical challenge is now to get students to see that the important texts we will be reading are worth reading even though they are challenging. Namely, my job is to get them to see the texts and the struggle to understand them as worth the effort as we pursue a question that is gaining their interest. For a while we will lose sight of the question (be blind to it!) as we work to follow the text. The best teaching, however, keeps the question in view often enough that the slog seems worth it. Indeed, a few students grasp that their unwillingness to read the texts and see possible merit in them is a form of blindness: this is a key moment when it happens.

One of the specific hurdles in the unit is to see how well I can engineer students to realize, for example, that the "blind" prophet in Oedipus Rex "sees" better than the wily and smart Oedipus. Many times in my classes, all I need to do is remind students of the essential question when we get to the part where Teiresias angrily leaves, unwilling to share what he knows about the prophecy, and students excitedly link it to the question.

Even better, a number of students over the years spontaneously applied the question to Teiresias: why, if he is such a great prophet, did he get so angry, since he knew the curse and its implications? What does the scene say about blindness and anger? When students spontaneously transfer meanings, you can be sure you are on the right track; when they fail to bring up past discussions, experiences, and readings, you can be sure that the work is not yet meaningful.

Sequence Matters

Sequence matters, too. In terms of Oedipus, having already read The Emperor’s New Clothes, even students struggling to read every word of the text easily start to make generalizations about blindness in and because of important people, linking it to stories in our own time with at most minor prompting from me. All I need do is point to some key passages in the text, remind them of earlier points they made about this or other texts, and highlight links to other illustrative current and past examples of the issue.

I could probably "cover" the play in four days of assigned readings and lecture-discussion. In my way of doing it, it takes 2 weeks. But by the end, students have not only achieved great insights mostly on their own but have come to appreciate the insights to be gained from an ancient text. When it really works, struggling readers want to become better readers because they start to see that texts have buried treasure in them.

Another specific challenge in the unit is to help students realize that the Allegory of the Cave applies not only to the key characters in all the readings but to themselves as learners. Indeed, when Glaucon first reacts to the depiction of the people in the cave by saying “What a strange place, and what strange prisoners!” Socrates quickly replies “Like ourselves.” (Indeed, the allegory is introduced as a parable of “our” education and ignorance.) Yet few students catch this reference to us: like Glaucon, they are initially “blind” to the parallel, merely fascinated with the imagery.

We will want to carefully engineer in them the realization that the Cave speaks directly to their own education. Indeed, a number of students quickly pick up the idea that grades and commencement prizes are just like the phony awards in the Cave. Sometimes all it takes is a pointed question taking them back to the text: “Guys, when, precisely, does the guy stop resisting being dragged out of the cave?” Or: “Why is Socrates telling this story anyway?”

Meaning is Foreground, Content is Background

I could use the phrase “Socratic seminar” to describe what I am doing here, but that would wrongly narrow the issue to technique. What matters is the aim. It doesn’t matter how much I facilitate the conversation, or whether the discussion happens with me in a whole group or without me in small groups – or even whether we use books, movies, or experiences. We don’t describe science labs or the Case Method as Socratic seminar, but good labs and case studies fulfill the same function.

“By design” in all cases students are led to realize, test, and verify certain inferences and their implications. What matters is that students – and their teachers! – grasp that the aim of such work and methods is meaning, not acquisition, and that the work has been designed accordingly: meaning is foreground, content is background.

Doubters should refer to the research on learning and the best college teaching, summarized by John Hattie and Ken Bain. The highest-level achievement is caused by such teaching, period. This is common sense for any of us who have had a really fine education.

We succeed as coaches of understanding if we have designed the learning – the tasks and our methods – to help learners make meaning as much as possible on their own. If we want engaged minds and transferable understanding, we don’t say: there is no time for discussion, labs, or cases; let me just teach you the facts. That would be as silly as saying: there is not enough time for you to practice driving a car before your license test; let me instead teach you everything I know about driving.

“Coverage” is ultimately an egocentric delusion, in other words; it is a form of teacher blindness! It presumes that just because we teach it you will get it and appreciate it. All we need do to expose the blindness is look at test scores, results on assessments of misconception, and responses on student surveys to realize that we are without vision more than we realize.

Who sees and who is blind?

This post first appeared on Grant’s personal blog; follow Grant on twitter; You Can’t Teach Understanding


You Have To Create Understanding By Design

Ed note: On May 26, 2015, Grant Wiggins passed away. Grant was tremendously influential on TeachThought’s approach to education, and we were lucky enough for him to contribute his content to our site. Occasionally, we are going to go back and re-share his most memorable posts. This is one of those posts. Thankfully his company, Authentic Education, is carrying on and extending the work that Grant developed.

by Grant Wiggins, Ed.D, Authentic Education

A cardinal principle in aiming at understanding is that understanding requires different pedagogy than acquisition of knowledge and skill. Knowledge and skills are best developed by direct instruction and reinforcement if we want recall and fluency.

Understanding, however, involves something beyond mere acquisition for later straightforward use. To understand, students must do something with, adapt, and sometimes question what they (think they) know.

They have to think and rethink.

They must be required to draw inferences and come to realizations, try performing with that understanding, and draw further inferences from what works, what doesn’t, when, and why. The student doesn’t have to merely “know” F=ma or that the Federalists predicted the health-care debate; they have to “realize” the point of the knowledge, its power, and its limits in order to transfer it flexibly and fluently in the future.

Thus, to achieve understanding as an educator, you have to help students “by design” come to realizations that they own and appreciate as insightful. If you don’t – if you just “teach” the understandings you aim to have them possess – you will fail, no matter how “good” the teaching. Indeed, this is the key to grasping the meaning of research on student misconception: misunderstandings persist in the face of pedagogy that doesn’t elicit and challenge student meanings and their meaning-making process. Teachers thus need to be crystal-clear in their own minds about which of their goals involve knowledge and which involve understanding, and treat each goal accordingly.

The temptation to teach understandings is great. It is arguably the Achilles’ heel of all teachers. Indeed, we are prone to “teach” too much, as our titles – Teacher, Professor – indicate. We are convinced that we can effectively teach this or that understanding so that students grasp and appreciate it. Furthermore, we are in constant fear of losing time and not getting through all the content to be covered. So, we think direct teaching of understanding is both efficient and effective.

Alas, it almost never works in the end. If you doubt me, just switch gears and think of parenting. How often have you had a child “learn” the first – or even the fourth – time the understanding you “taught” about, say, peer pressure, time management, or wise use of allowance money? I didn’t think so. Indeed, a little reflection on the humility and patience required by parenting would be a useful antidote to the naiveté of thinking you can “cover” all that matters in your courses and cause lasting and useful understanding.

No, there is no way around it. If you want students to have meaningful learning experiences that culminate in transferable insight and know-how, then you have to lose time to gain it. You have to slow down the teaching to speed up the learning.

You have to engineer understanding by design.

In part 2 tomorrow, we’ll take a look at why this is so important.



Critically Examining What You Teach

by Grant Wiggins, Ph.D

Ed note: On May 26, 2015, Grant Wiggins passed away. Grant was tremendously influential on TeachThought’s approach to education, and we were lucky enough for him to contribute his content to our site. Occasionally, we are going to go back and re-share his most memorable posts. This is one of those posts. Thankfully his company, Authentic Education, is carrying on and extending the work that Grant developed.

In my 100th blog post I complained about the course called ‘algebra’. Some commenters misunderstood the complaint. Though I said a few times in the article that my critique was not about the content called algebra but the aimless march through stuff that makes up almost every algebra course in existence, some thought I was bashing the value of the content. Not so. Another commenter said: you might have ranted, then, about many history courses! Indeed I could have–and have done so multiple times in my career.

The issue, then, is not ‘algebra’ or ‘history’ but what we mean by ‘course of study’. I am claiming that to be a valid course, there has to be more than just a list of valued stuff that we cover–even if that list seems valuable to me, the teacher. Rather, a course must seem coherent and meaningful from the learner’s perspective. There must be a narrative, if you will; there must be a throughline; there must be engaging and stimulating inquiries and performances that provide direction, priorities, and incentives.

Notice that I haven’t merely defined a course. What I have just done is identify some criteria by which any so-called ‘course’ can be designed and critiqued. And such criteria are vital: I know from 35 years at this work that very few teachers ever self-assess a course as a course against explicit criteria–with unfortunate consequences. They may tweak lessons and even units but they rarely dispassionately critique the design of the entire course against criteria such as mine; or receive feedback against criteria about their design.

Textbooks Are Tools, Not Courses or Content Areas

Next time I will say a bit more about my criteria, but we can’t ignore the other lurking issue in this discussion: ‘coverage’, i.e. teachers marching through the pages in a textbook.  I wish to claim that defining a course as a tour through the textbook, page by page, is simply not a course by any valid set of criteria. A textbook is merely a collection of topics, with exercises and text under each topic.

The textbook does not know your personal or school priorities; the textbook does not know your students; the textbook doesn’t identify any priorities or throughlines that unite all the chapters, etc. So, a march through a book is a non-design. It would be like learning English through a page-by-page tour of the dictionary and grammar book; it would be like learning history by reading through the encyclopedia page by page.

It doesn’t matter how good the textbook is. My critique is not a critique of textbooks. (I have worked on over a half-dozen for Pearson, to infuse UbD.) My critique is of how books are used. A text – be it an algebra textbook or Catcher in the Rye – is a resource in support of clear and learning-focused goals. Goals cannot be supplied by a text; they are supplied by purposeful teachers.

7 Prompts That Every Teacher Of A Well-Designed Course Should Be Able To Answer

Here are some simple prompts that a teacher who has really thought through the course as a course should be able to answer:

  1. By the end of the year students should be able to…. and grasp that…
  2. The course builds toward…
  3. The recurring big ideas about which we will go into depth are…
  4. The following chapters and sequence support my goal of…
  5. Given my long-term priority goals, the assessments need to determine if students can…
  6. Given my goals, the following activities need to build insight and incentive…
  7. If I have been successful, students will be able to transfer their learning to… and avoid such common misconceptions and habits as…

So, even before spelling out the meaning of and rationale for my course criteria, you should be able to realize from these prompts why almost all algebra courses are complete failures as courses–i.e. purposefully designed learning in support of clear intellectual goals. No, almost every algebra course (and, yes, history and science course) is a mere march through a textbook, page by page. Rarely do explicit overarching goals and priorities inform the sequence, the activities, assessments, and choice of topics.

Most importantly, the assessments almost never require students to synthesize learning across many chapters and transfer their understandings and skills to priority performance tasks. And it is therefore no accident that students uniformly find high school math to be their most boring and difficult course, as our student surveys show.

In my follow-up post I will say more about the criteria mentioned above as well as walk the talk: I’ll share some design work that my colleagues and I have done over a 10 year period to build better high school math courses. Here’s a hint: if we want understanding instead of mere dutiful learning, we must begin in a very different place than almost every math course I have ever seen.

We have to begin with giving the learners intellectual reasons and incentives for taking such a course. And, thus, we have to justify both the content and the overall direction of the course.

UPDATE: Resources for Improvement

A number of math teachers have complained either directly or indirectly recently that I have offered criticisms but no solutions to the problem of poorly-designed math courses, especially at the secondary level. For example:

“In my opinion the writer suggests that textbooks are merely a collection of topics with examples of exercises under each and that teachers merely race through a textbook to get to the end. In a sense I agree with this but my problem/concern is that he offers no alternative/answer to what we should be doing instead…. It seems there are so many people out there saying that this is not what we should be teaching our students and that us Math teachers are in fact wasting students time with our outdated teaching methods. My question is then what should we be teaching them? What am I missing? He offers no answer to that question.”

“What I find lacking in your rants are specifics. What types of “broad questions” would you suggest to stimulate the interest of hormone driven 14-year old boys? or incredibly self-conscious 14-year old girls many of whom lack basic computational skills, the ability to read critically or who are afraid to take chances or to explore into areas in which they are not familiar, yet who are required to sit in my Algebra I class?”

Leaving aside the fact that I have indeed offered numerous resources under the Understanding by Design name (including many math units such as this one: Algebra Unit – before and after) over many years, let me offer a short list of print and web sources for problems, assessments, and pedagogical advice on teaching mathematics more meaningfully that every secondary math teacher ought to have in their library (or at least know about). Math teachers and supervisors: please post others and I’ll add them to this list.

Websites

The first go-to resource is Dan Meyer’s videoblog and his open-source collection of problems. The second go-to site is the archive of Car Talk Puzzlers. Here’s my favorite (make sure to read all the follow-up posts and listen to the next week’s radio show). Here’s my next favorite, and the great kids from E Tipp MS had fun with it, as I blogged here.

Another helpful source is from the United Kingdom (and has served as a partner to Common Core developers) – the Shell Center.

I contributed to a big volume for MAA on Quantitative Literacy (my article begins on p. 121) and you can find many examples not only in my article but in those of others in the volume.

NCTM publishes resources under the Illuminations banner. Here are lessons in algebra.

Mathalicious has some great resources for real-world lessons. So does PBS. Buck Institute and Edutopia have long been known for their materials on problem-based learning.

Books

To build courses around worthy performance tasks, the series entitled Balanced Assessment, edited by Judah Schwartz, is excellent. (You can find free resources from it here.) Good tasks, good rubrics, and samples of student work. There are books for middle school, high school, and advanced high school.

The 20-year-old book from NCTM entitled Teaching and Assessing Problem Solving is probably the best of its kind, a great mix of theory and practice, filled with helpful examples. A newer NCTM book, Teaching Mathematics Through Problem Solving, is equally helpful.

An edited volume entitled Real-World Problems for Secondary School Mathematics Students has lots of great examples from different countries.

One of the better textbooks in math is by Harold Jacobs called Geometry: Seeing, Doing, Understanding.

Beyond Formulas in Mathematics and Teaching is a bit text-heavy but provides a solid perspective on such an aim. For a more general text on the meaning of mathematics, highly readable and usable with HS students, nothing beats Morris Kline’s old book Mathematics in Western Culture.

And as I have noted numerous times in this blog, arguably the best course ever designed, from the 1930s, was Harold Fawcett’s course, later written up as an NCTM Yearbook and republished 20 years ago. And surely the most seminal and vital book in a math teacher’s library is Polya’s classic How To Solve It. Here is a great old video of Polya at work. Stick with it: there is a dramatic conclusion to the inquiry.

A blunt postscript: none of these resources is new. I find it a bit depressing that so many math teachers, such as the ones I quoted above, are seemingly unaware of the materials available to ensure better engagement and outcomes in mathematics. BTW, it is ONLY math teachers who routinely complain, in high numbers and as in the quotes above, that they lack resources to develop better courses, instruction, and assessment. At a certain point, I simply must say: isn’t it your professional obligation to know about these resources rather than vent at me for not providing more?

This article first appeared on Grant’s personal blog; you can follow Grant on twitter; Critically Examining What You Teach; image attribution flickr user flickeringbrad


The Right Way To Implement Essential Questions

by Grant Wiggins, Ph.D, AuthenticEducation.org

Ed note: We’re continuing to go back and share some of Grant’s best and most useful posts. This one on the implementation of Essential Questions is useful even if you’re already ‘doing’ UbD, due to its emphasis on inquiry.

We had a delightful visit to The School of the Future in New York City the other day. Lots of engaged kids, a great blend of instruction and constructivist work, and an obvious intellectual culture. And as the picture illustrates, everywhere we went we also saw helpful visual reminders of the big ideas and essential questions framing the work we were watching: School of the Future staff have long been users of UbD tools and ideas.

But far too often over the years I have seen plenty of good stuff posted like this – but no deep embedding of the Essential Question (EQ) into the unit design and the lessons that make it up. Merely posting the EQs and occasionally reminding kids of them is pointless: the aim is to use the question to frame specific activities, to provide perspective and focus, to prioritize the course, and to signal to students that, eventually, THEY must – on their own – pose this and other key questions. (Note: I am not criticizing what we saw and heard at SoF, rather using this teachable moment to raise an issue that needs addressing by almost all faculty using our work.)

Let’s start with a simple example from my own teaching. The EQ for the unit: Who sees and who is blind? The readings: The Emperor’s New Clothes, Plato’s Allegory of the Cave, Oedipus the King. Students are instructed to take notes around the EQ and other questions that arise related to it (e.g. Why do people deceive themselves?). We alternate between small-group discussions of the previous night’s reading, Socratic Seminar on the readings with the whole class, some mini-lessons on reading and note-taking skills, and a teacher-led de-briefing of what worked, what didn’t in Seminar as well as a discussion of confusing points in the texts. The final assessment? An essay on the EQ.

At every turn, in other words, the EQ looms large in the unit. Students are not only encouraged to keep pondering it across each reading, but they take notes on the question and routinely remind one another that this question is the focus.

This is far different from what we typically see in walk-throughs where EQs are being used. The only person who keeps referring to the EQ is the teacher, and the main use of the question is the teacher pointing out “answers” to it. Rarely is the EQ central to the assessment – in part because all too often the EQ is too convergent and has a right answer that the teacher wants learned. Almost never does there appear to be a plan whereby the question goes from the teacher’s control to the students’ control.

All well and good in English, Grant; what about math?

Same thing. George Polya 50 years ago provided a fantastic set of EQs at the heart of genuine problem solving in math:

  • What is the unknown?
  • What are the data?
  • What is the condition?
  • Do you know a related problem?
  • Have you seen the problem before?
  • Could you restate the problem?
  • Can you check the result?
  • Can you derive the answer differently?
  • Can you use the result, or the method, for some other problem?

Here is how Polya described their use:

“There are two aims which the teacher may have in view when addressing to his students a question or a suggestion…: First, to help the student to solve the problem at hand. Second, to develop the student’s ability so that he may solve future problems by himself…If the same question is repeatedly helpful, the student will scarcely fail to notice it and he will be induced to ask the question by himself in a similar situation. Asking the question repeatedly, he may succeed once in eliciting the right idea. By such a success, he discovers the right way of using the question, and then he has really assimilated it… [Appropriate questions and suggestions] have two common characteristics, common sense and generality. As they proceed from plain common sense they very often come naturally; they could have occurred to the student himself. As they are general, they help unobtrusively; they just indicate a general direction and leave plenty for the student to do” [How to Solve It, pp. 3-4]

Thus, the whole point of the questions is similarly for them to become the students’ questions in a “gradual release” model, in the face of intellectual challenges, just as in my class. So, subject matter has nothing to do with it – despite the fact that many math teachers seem positively stubborn on this myopia of theirs. Everything depends upon whether the units have been set up to focus on genuine inquiry as opposed to pseudo-questions or pseudo-problems that have a simple approach and a preferred correct answer. (But then they aren’t real problems, are they?)

Let me remind you of a basic point, then: in UbD we list Essential Questions in STAGE 1. In other words, the EQ is a goal, i.e. the QUESTIONING is the goal: the box does not say Essential Answers to Nice Questions.

All of this will be addressed at some length in our new book on Essential Questions, due out in early April. But I thought an excerpt from the book on a simple design process would be both a good “trailer” for the book and a useful closure to this post:

We can describe what has to happen in any successful use of EQs in terms of a four-phase process:

A Four-Phase Process for Implementing Essential Questions

Phase: Introduce a question designed to cause inquiry.

Goal: Ensure that the EQ is thought-provoking, relevant to both students and the current unit/course content, and explorable via a text/research project/lab/problem/issue/simulation in which the question comes to life.

Phase: Elicit varied responses and question those responses.

Goal: Use questioning techniques and protocols as necessary to elicit the widest possible array of different plausible, yet imperfect answers to the question. Also, probe the original question in light of the different takes on it that are implied in the varied student answers and due to inherent ambiguity in the words of the question.

Phase: Introduce and explore new perspective(s)

Goal: Bring new text/data/phenomena to the inquiry, designed to deliberately extend inquiry and/or call into question tentative conclusions reached thus far. Elicit and compare new answers to previous answers, looking for possible connections and inconsistencies to probe.

Phase: Reach tentative closure.

Goal: Ask students to generalize their findings, new insights, and remaining (and/or newly raised) questions about both content and process.

Note that this process is not restricted to a single unit. We can use this framework to string different units together so that Phase 3 could be the start of a new unit in which a novel perspective is introduced and explored using the same question(s).

Here is a simple example from science using the question: What is science? In many middle school and high school science courses, teachers often devote an initial unit or lesson to the question. Typically, though, after an early reading and discussion, the question is dropped, never to return that year as attention turns to acquiring specific knowledge and skill. And no genuine inquiry into the question ever really occurs, ironically enough.  (This pattern is aided and abetted by most textbooks.)

Let’s see how the framework helps us more clearly see an alternative approach in which the Essential Question becomes more prominent throughout the course.

Example Of The Four-Phase Process for Implementing Essential Questions

Phase: Introduce a question designed to cause inquiry.

Example: What is science? How does it relate to or differ from common sense and religious views on empirical issues?

Phase: Elicit varied responses and question those responses.

Example: Students read 3 different short readings that address the EQ, in which there is great disagreement about what science is, how it works, and how much stock we should put in its answers.

Phase: Introduce and explore new perspective(s)

Example: Students are asked to do 2 different experiments in which methods vary and margin of error is salient. They also read about a few controversies and false discoveries in the history of science; read Karl Popper on how science is inherently testable and tentative – “falsifiable” – whereas ideology can always ‘explain’ anything; read Feynman on how most people misunderstand what science is; and read Hume on why we should be inherently skeptical about science as truth.

Phase: Reach tentative closure.

Example: Ask students to generalize their findings, new insights, and remaining (or newly raised) questions about the nature of science.

As the example suggests, proper treatment of the question would demand not only that the question be constantly revisited throughout the year – “Based on the previous two experiments and our lively disagreements about the findings in the Global Warming research, what would you now say science is?” – but that the course must also therefore include a look at pseudo-science and the danger of confirmation bias, as well as consideration of the very counter-intuitive aspects of modern scientific thinking (which often give rise to common and persistent student misconceptions in the sciences and about science itself).

Here is an example from elementary social studies:

A Four-Phase Process for Implementing Essential Questions

(example – Elementary Social Studies)

Phase: Introduce a question designed to cause inquiry.

Example: After a cursory lesson on the typical names and characteristics of US regions, ask: Could we carve up the map differently? What kinds of regions might be just as useful for us to define? What “regions” do we live in? How many regions do we live in?

Phase: Elicit varied responses and question those responses.

Example: To what extent is defining an area as a “region” useful? Compare and contrast the benefits and weaknesses of various regional maps and categories for school, town, and state; and alternate regions of the US, based on cultural aspects (e.g. regional sports affiliations).

Phase: Introduce and explore new perspective(s)

Example: Pursue the idea of regions based on cultural aspects (food, leisure, jobs) and thus the extent to which talking about regions like the “south” or “northwest” may be unhelpful because it can cause us to stereotype and overlook uniqueness or diversity in every region. Related questions can then be explored: To what extent do we usefully define ourselves in “regional” terms, e.g. southerner, coastal, West Tennessee, Upstate NY, Northern California, etc. as opposed to by state or nation? When is it useful to define region by physical characteristics and when is it useful to define it by sociological characteristics? etc.

Phase: Reach tentative closure.

Example: Ask students to generalize their findings, new insights, and remaining (or newly raised) questions about regions and the usefulness of the idea.

In other words, inquiry by design, not mere teacher rhetorical questioning, makes an EQ come to life and go into depth. The texts, prompts, rules of engagement, and final assessments provide the key elements needed for the design to succeed, in light of the just-noted criteria: an intriguing and key question, inherent ambiguity, clearly different points of view, and shades of gray that will require careful questioning and discerning observation and research.

Categories
Teaching

7 Key Characteristics Of Better Learning Feedback

by Grant Wiggins, Authentic Education

On May 26, 2015, Grant Wiggins passed away. Grant was tremendously influential on TeachThought’s approach to education, and we were lucky enough for him to contribute his content to our site. Occasionally, we are going to go back and re-share his most memorable posts. 

Whether the feedback is just “there” to be grasped or is offered by another person, all the examples highlight seven key characteristics of helpful feedback.

Helpful feedback is –

  1. Goal-referenced
  2. Transparent
  3. Actionable
  4. User-friendly
  5. Timely
  6. Ongoing
  7. Consistent

Though some of these traits have been noted by various researchers [for example, Marzano, Pickering & Pollock (2001) identify some of #3, #5, #1 and #4 in describing feedback as corrective, timely, specific to a criterion], it is only when we clearly distinguish the two meanings of “corrective” (i.e. feedback vs. advice) and use all seven that we get the most robust improvements and sort out Hattie’s puzzle as to why some “feedback” works and other “feedback” doesn’t. Let’s look at each criterion in turn.

1. Goal-referenced. There is only feedback if a person has a goal, takes actions to achieve the goal, and gets goal-related information fed back. To talk of feedback, in other words, is to refer to some notable consequence of one’s actions, in light of an intent. I told a joke – why? To make people laugh; I wrote the story – why? To paint vivid pictures and capture revealing feelings and dialogue for the reader to feel and see; I went up to bat – why? To get a hit. Any and all feedback refers to a purpose I am presumed to have. (Alas, too often, I am not clear on or attentive to my own goals – and I accordingly often get feedback on that disconnect.)

Given a desired outcome, feedback is what tells me if I should continue on or change course. If some joke or aspect of the writing isn’t working – a revealing, non-judgmental phrase – I need to know.  To that end, a former colleague of mine when I was a young teacher asked students every Friday to fill out a big index card with “what worked for you and didn’t work for you this week.” I once heard an exasperated NFL coach say in a post-game interview on the radio: “What do you think we do out here, wind a playbook up and pray all season? Coaching is about quick and effective adjustment in light of results!”

Note that goals (and the criteria for them) are often implicit in everyday situations. I don’t typically announce when telling the joke that my aim is to make you laugh or that I wrote that short story as an exercise in irony. In adult professional and personal life, alas, goals and criteria for which we are accountable are sometimes unstated or unclear as well – leading to needlessly sub-par performance and confusing feedback. It can be extra challenging for students: many teachers do not routinely make the long-term goals of lessons and activities sufficiently clear. Better student achievement may thus depend not on more “teaching” or feedback only but constant reminders by teachers of the goal against which feedback is given: e.g. “Guys, the point here is to show, not tell in your writing: make the characters come alive in great detail! That’s the key thing we’ll be looking for in peer review and my feedback to you.” (That’s arguably the value of rubrics, but far too many rubrics are too vague to be of help.)

2. Transparent and tangible, value-neutral information about what happened. Therefore, any useful feedback system involves not only a clear goal, but transparent and tangible results related to the goal. Feedback to students (and teachers!) needs to be as concrete and obvious as the laughter or its absence is to the comedian and the hit or miss is to the Little League batter. If your goal as a teacher is to “engage” learners, then you must look for the most obvious signs of attention or inattention; if your goal as a student is to figure out the conditions under which plants best grow, then you must look closely at the results of a controlled experiment. We need to know the tangible consequences of our attempts, in the most concrete detail possible – goal-related facts from which we can learn. That’s why samples or models of work are so useful to both students and teachers – more so than the (somewhat abstract) rubrics by themselves.

Even as little pre-school children, we learn from such results and models without adult intervention. That’s how we learned to walk; that’s how we finally learned to hold a spoon effectively; that’s how we learned that certain words magically yield food, drink, or a change of clothes from big people. Thus, the best feedback is so tangible that anyone who has a goal can learn from it. Video games are the purest example of such tangible feedback systems: for every action we take there is a tangible effect. We use that information to either stay on the same course or adjust course. The more information “fed back” to us, the more we can self-regulate, and self-adjust as needed. No “teaching” and no “advice” – just feedback! That’s what the best concrete feedback does: it permits optimal self-regulation in a system with clear goals.

Far too much educational feedback is opaque, alas, as revealed in a true story told to me years ago by a young teacher. A student came up to her at year’s end and said,  “Miss Jones, you kept writing this same word on my English papers all year, and I still don’t know what it means.” “What’s the word?” she asked. “Vag-oo,” he said. (The word was “vague”!). Sad to say, too much teacher feedback is ‘vagoo’ – think of the English teacher code written in margins (AWK, Sent. Frag, etc.) Rarely does the student get information as tangible about how they are currently doing in light of a future goal as they get in video games.  The notable exceptions: art, music, athletics, mock trial – in short, areas outside of core academics!

This transparency of feedback becomes notably paradoxical under a key circumstance: when the information is available to be obtained, but the performers do not obtain it – either because they don’t look for it or because they are too busy performing to see it. We have all seen how new teachers are sometimes so busy concentrating on “teaching” that they fail to notice that few students are attending or learning. Similarly in sports: the tennis player or batter is taking their “eye off the ball” (i.e. pulling their head out instead of keeping it still as they swing), yet few novice players “see” that they are not really “seeing the ball.” They often protest, in fact, when the feedback is given. The same thing happens with some domineering students in class discussion: they are so busy “discussing” that they fail to see their unhelpful effects on the discussion and on others who give up trying to participate.

That’s why it is vital, at even the highest levels of performance, to get feedback from coaches (or other able observers) and/or video to help us perceive what we may not perceive as we perform; and by extension, to learn to look for what is difficult but vital to perceive. That’s why I would recommend that all teachers video their own classes at least once per month and do some walk-throughs and learning walks, to more fully appreciate how we sometimes have blind spots about what is and isn’t happening as we teach.

It was a transformative experience for me when I did it 40 years ago (using a big Sony reel-to-reel deck before there were VHS cassettes!). What was clear to me as the teacher of the lesson in real time seemed downright confusing on tape – visible also in some quizzical looks of my students that I had missed in the moment. And, in terms of improving discussion or Socratic Seminar, video can be transformative: when students see snippets of tape of their prior discussions they are fascinated to study it and surprised by how much gets missed in the fast flow of conversation. (Coaches of all sports have done this for decades; why is it still so rare in classrooms?)

3. Actionable information. Thus, feedback is actionable information – data or facts that you can use to improve on your own since you likely missed something in the heat of the moment. No praise, no blame, no value judgment – helpful facts. I hear when they laugh and when they don’t; I adjust my jokes accordingly. I see now that 8 students are off task as I teach, and take action immediately. I see my classmates roll their eyes as I speak – clearly signaling that they are unhappy with something I said or the way I said it. Feedback is that concrete, specific, useful. That is not to say that I know what the feedback means, i.e. why the effect happened or what I should do next (as in the eye rolling), merely that the feedback is clear and concrete. (That’s why great performers aggressively look for and go after the meaning of feedback.)

Thus, “good job!” and “You did that wrong” and “B+” on a paper are not feedback at all. In no case do I know what you saw or what exactly I did or didn’t do to warrant the comments. The responses carry no actionable information. To see this, imagine the learner asking in response: Huh? What specifically should I do more of and less of next time, based on this information? No idea. The students don’t know what was “good” or “wrong” about what they did.

Some readers may object that feedback is not so black and white, i.e. that we may disagree about what is there to be seen and/or that feedback carries with it a value judgment about good and bad performance.  But the language in question is usually not about feedback (what happened) but about an (arguable) inference about what happened.  Arguments are rarely about the results, in other words; they are typically about what the results mean.

For example, a supervisor of a teacher may make an unfortunate but common mistake of stating that “many students were bored” in class. No, that’s a judgment, not a goal-based specific fact. It would have been far more useful and less debated had the supervisor said something like: “I counted inattentive behaviors lasting more than 5-10 seconds in 12 of the 25 students once the lecture was well underway. The behaviors included 2 students texting under desks, 2 passing notes, and 7-8 students at any one time making eye contact with other students. However, when you moved to the small-group exercise using the ‘mystery text’, I saw such off-task behavior in only 1 student.”

These are goal-related factual statements, not judgments. Again, it doesn’t mean that the supervisor is correct in the facts, and it certainly doesn’t mean they are anti-lecture; it only means that the supervisor tries to stick to facts and not jump to glib inferences about what is working and what isn’t.

Such care in offering neutral goal-related facts is the whole point of the clinical supervision of teaching and of good coaching more generally. Effective supervisors and coaches work hard to carefully observe and comment on what was perceived, in reference to shared goals. That’s why I always ask when visiting a class: Given your goals for the class, what would you like me to look for and perhaps count or code?

In my years of experience as a teacher of teachers, as an athletic coach, and as a teacher of adolescents I have always found such “pure” feedback to be accepted, not debated; and be welcomed (or at least not resisted). Performers are on the whole grateful for a 2nd pair of eyes and ears, given our blind spots as we perform. But the legacy of so much heavy-handed inferencing and gratuitous advice by past coaches/teachers/supervisors has made many performers – including teachers – naturally wary or defensive.

What effective coaches also know is that actionable feedback about what went right is as important as feedback about what didn’t work in complex performance situations. (That’s why the phrase “corrective information” is not strictly-speaking accurate in describing all feedback.) Performers need feedback about what they did correctly because they don’t always know what they did, particularly as novices. It is not uncommon in coaching, when the coach describes what a performer successfully did (e.g. “THAT time you kept your head still and followed all the way through!”), to hear the performer respond quizzically, “I did??”

Similarly the writer or teacher is sometimes surprised to learn that what she thought was unimportant in her presentation was key to audience understanding. Comedians, teachers, and artists don’t often accurately predict which aspects of their work will achieve the best results, but they learn from the ones that do. That’s why feedback can be called a reinforcement system: I learn by learning to do more of (and understand) what works and less of what doesn’t.

4. User-friendly. Feedback is thus not of much value if the user cannot understand it or is overwhelmed by it, even if it is accurate in the eyes of experts or bystanders. Highly-technical feedback to a novice will seem odd, confusing, hard to decipher: describing the swing in baseball in terms of torque and other physics concepts to a 6-year-old will not likely yield a better hitter. On the other hand, generic ‘vagoo’ feedback is a contradiction in terms: I need to perceive the actionable, tangible details of what I did.

When I have watched expert coaches, they uniformly avoid both errors – too much overly-technical information and unspecific observations. They tell the performers one or two important things they noticed that, if changed, will likely yield immediate and noticeable improvement (“I noticed you were moving this way…”), and they don’t offer advice until they are convinced the performer sees what they saw (or at least grasps the importance of what they saw).

5. Timely. The sooner I get feedback, then, the better (in most cases). I don’t want to wait hours or days to find out which jokes they laughed at or didn’t, whether my students were attentive, or which part of my paper works and doesn’t. My caveat – “in most cases” – is meant to cover situations such as playing a piano piece in recital: I don’t want either my teacher or the audience to be barking out feedback as I perform. That’s why it is more precise to say that good feedback is “timely” rather than “immediate.”

A great problem in education, however, is the opposite. Vital feedback on key performances often comes days, weeks, or even months after the performance – think of writing and handing in papers and getting back results on standardized tests. If we truly realize how vital feedback is, we should be working overtime as educators to figure out ways to ensure that students get more timely feedback and opportunities to use it in class while the attempt and effects are still fresh in their minds. (Keep in mind: as we have said, feedback does not need to come from the students’ teachers only or even people at all, before you say that this is impossible. This is a high-priority and solvable problem to address locally.)

6. Ongoing. It follows that the more I can get such timely feedback, in real time, before it is too late, the better my ultimate performance will be – especially on complex performance that can never be mastered in a short amount of time and on a few attempts. That’s why we talk about powerful feedback “loops” in a sound learning system.

All adjustment en route depends upon feedback and multiple opportunities to use it. This is really what makes any assessment truly “formative” in education. The feedback is “formative” not merely because it precedes “summative” assessments but because the performer has many opportunities – if results are less than optimal – to adjust the performance to better achieve the goal. Many so-called formative assessments do not build in such feedback use.

If we truly understood how feedback works, we would make the student’s use of feedback part of the assessment! It is telling that in the adult world I am often judged as a performer on my ability to adjust in light of feedback since no one can be perfect.

This is how all highly-successful computer games work, of course. If you play Angry Birds, Halo, Guitar Hero, or Tetris you know that the key to the substantial improvement possible is that the feedback is not only timely but ongoing. When you fail, you can immediately start over – even just where you left off – to get another opportunity to receive and learn from the feedback before all is lost to forgetfulness. (Note, then, this additional aspect of user-friendly feedback: it suits our need, pace, and ability to process information; games are built to reflect and adapt to our changing ability to assimilate information.)

Do you see a vital but counter-intuitive implication from the power of many ‘loops’ of feedback? We can teach less, provide more feedback, and cause greater learning than if we just teach. Educational research supports this view even if as “teachers” we flinch instinctively at this idea. That is why the typical lecture-driven course is so ineffective: the more we keep talking, the less we know what is being grasped and attended to. That is why the work of Eric Mazur at Harvard – in which he hardly lectures at all to his 200 students but instead gives them problems to solve and discuss, and then shows their results on screen before and after discussion using LRS ‘clickers’ – is so noteworthy. His students get “less” lecturing but outperform their peers not only on typical tests of physics but especially on tests of misconceptions in physics. [Mazur (1998)]

7. Consistent. For feedback to be useful it has to be consistent. Clearly, I can only monitor and adjust successfully if the information fed back to me is stable, unvarying in its accuracy, and trustworthy. In education this has a clear consequence: teachers have to be on the same page about what is quality work and what to say when the work is and is not up to standard. That can only come from teachers constantly looking at student work together, becoming more consistent (i.e. achieving inter-rater reliability) over time, and formalizing their judgments in highly-descriptive rubrics supported by anchor products and performances. By extension, if we want student-to-student feedback to be more helpful, students have to be trained the same way we train teachers to be consistent, using the same exemplars and rubrics.

References

Bransford et al (2001) How People Learn. National Academy Press.

Clarke, Shirley (2001) Unlocking Formative Assessment: Practical Strategies for Enhancing Pupils’ Learning in the Primary Classroom. Hodder Murray.

Dweck, Carol (2007) Mindset: The New Psychology of Success, Ballantine.

Gilbert, Thomas (1978) Human Competence. McGraw Hill.

Harvard Business School Press (2006) Giving Feedback.

Hattie, John (2008) Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement, Routledge.

James, William (1899/1958) Talks to Teachers. W W Norton.

Marzano, R ; Pickering, D & Pollock J (2001) Classroom Instruction That Works: Research-Based Strategies for Increasing Student Achievement, ASCD.

Mazur, Eric (1996) Peer Instruction: A User’s Manual. Benjamin Cummings.

Nater, Sven & Gallimore R (2005) You Haven’t Taught Until They Have Learned: John Wooden’s Teaching Principles and Practices. Fitness Info Tech.

Pollock, Jane (2012) Feedback: The Hinge That Joins Teaching and Learning. Corwin Press.

Wiggins, Grant (2010) “Time to Stop Bashing the Tests,” Educational Leadership, Vol. 67, No. 6.

Wiggins, Grant (1998) Educative Assessment, Jossey-Bass.



This article was excerpted from a post that first appeared on Grant’s personal blog; Grant can be found on twitter here; image attribution flickr user flickeringbrad; You Probably Misunderstand Feedback for Learning


What Close Reading Actually Means

by Grant Wiggins, Ed.D, Authentic Education

On May 26, 2015, Grant Wiggins passed away. Grant was tremendously influential on TeachThought’s approach to education, and we were lucky enough for him to contribute his content to our site. Occasionally, we are going to go back and re-share his most memorable posts. So today and tomorrow we’re going to share two of his posts on literacy, starting with what it means to “close read.” Per his usual, Grant took a deep dive on the topic, with lots of great examples.

What is close reading? As I said in my previous blog post, whatever it is it differs from a personal response to the text.

Here is what the Common Core ELA Standards say:

Students who meet the Standards readily undertake the close, attentive reading that is at the heart of understanding and enjoying complex works of literature. (p. 3)

Here is Anchor Standard 1:

Key Ideas and Details

1. Read closely to determine what the text says explicitly and to make logical inferences from it; cite specific textual evidence when writing or speaking to support conclusions drawn from the text. (p. 10)

Here is how Nancy Boyles in an excellent Educational Leadership article defines it: “Essentially, close reading means reading to uncover layers of meaning that lead to deep comprehension.”

Thus, what “close reading” really means in practice is disciplined re-reading of inherently complex and worthy texts. As Tim Shanahan puts it in his helpful blog entry, “Because challenging texts do not give up their meanings easily, it is essential that readers re-read such texts,” while noting that “not all texts are worth close reading.”

The close = re-read + worthy assumption here is critical: we assume that a rich text simply cannot be understood and appreciated by a single read, no matter how skilled and motivated the reader.

The next five ELA anchor standards make this clearer: we could not possibly analyze these varied aspects of the text simultaneously:

    • 2. Determine central ideas or themes of a text and analyze their development; summarize the key supporting details and ideas.
    • 3. Analyze how and why individuals, events, and ideas develop and interact over the course of a text.
    • 4. Interpret words and phrases as they are used in a text, including determining technical, connotative, and figurative meanings, and analyze how specific word choices shape meaning or tone.
    • 5. Analyze the structure of texts, including how specific sentences, paragraphs, and larger portions of the text (e.g., a section, chapter, scene, or stanza) relate to each other and the whole.
    • 6. Assess how point of view or purpose shapes the content and style of a text.

College readiness and close reading. Since a key rationale for the Common Core Standards is college readiness, let’s have a look at how college professors define it. Here is what Penn State professor Sophia McClennen says at the start of her extremely helpful resource with tips on close reading:

“Reading closely” means developing a deep understanding and a precise interpretation of a literary passage that is based first and foremost on the words themselves. But a close reading does not stop there; rather, it embraces larger themes and ideas evoked and/or implied by the passage itself.

Here is how the Harvard Writing Center defines it:

When you close read, you observe facts and details about the text. You may focus on a particular passage, or on the text as a whole. Your aim may be to notice all striking features of the text, including rhetorical features, structural elements, cultural references; or, your aim may be to notice only selected features of the text—for instance, oppositions and correspondences, or particular historical references. Either way, making these observations constitutes the first step in the process of close reading.

The second step is interpreting your observations. What we’re basically talking about here is inductive reasoning: moving from the observation of particular facts and details to a conclusion, or interpretation, based on those observations. And, as with inductive reasoning, close reading requires careful gathering of data (your observations) and careful thinking about what these data add up to.

A University of Washington handout for students summarizes the aim of close reading as follows:

The goal of any close reading is the following:

  • an ability to understand the general content of a text even when you don’t understand every word or concept in it.
  • an ability to spot techniques that writers use to get their ideas and feelings across and to explain how they work.
  • an ability to judge whether techniques the writer has used succeed or fail and an ability to compare and contrast the successes and failures of different writers’ techniques.

Remember—when doing a close reading, the goal is to closely analyze the material and explain why details are significant. Therefore, close reading does not try to summarize the author’s main points; rather, it focuses on “picking apart” the text and closely looking at how the author makes his/her argument, why it is interesting, etc.

Here are a few of the helpful questions to consider in close reading, from the handout by Kip Wheeler, a college English professor:

II. Vocabulary and Diction:

    • How do the important words relate to one another? Does a phrase here appear elsewhere in the story or poem?
    • Do any words seem oddly used to you? Why? Is that a result of archaic language? Or deliberate weirdness?
    • Do any words have double meanings? Triple meanings? What are all the possible ways to read it?

III. Discerning Patterns:

    • How does this pattern fit into the pattern of the book as a whole?
    • How could this passage symbolize something in the entire work? Could this passage serve as a microcosm, a little picture, of what’s taking place in the whole narrative or poem?
    • What is the sentence rhythm like? Short and choppy? Long and flowing? Does it build on itself or stay at an even pace? How does that structure relate to the content?
    • Can you identify paradoxes in the author’s thought or subject?
    • What is left out or silenced? What would you expect the author to say that the author seems to have avoided or ignored? What could the author have done differently—and what’s the effect of the current choice?

Of note is that in all these college examples the focus is on close reading as a prelude to writing. This is an important heads-up for students: close reading invariably is a means to an end in college, where the aim is a carefully-argued work of original thought about the text(s). And, in fact, the second part of Anchor Standard #1 makes this link explicit: the expectation is that students will communicate the fruits of their close reading to others in written and oral forms.

Close Reading vs. Reader Response

A key assumption implicit in all these quotes as well as in the Common Core – a controversial one, perhaps – is thus what I briefly argued in the previous post:  “close reading” has implicit priority over “reader response” views of the aim of literacy instruction. The reader’s primary obligation is to understand the text. That emphasis is clear from the anchor standards in the Common Core, as noted above: the goal is to understand what the author is doing and accomplishing, and what it means; the goal is not to respond personally to what the author is doing.

As I noted in my previous post, this does not mean, however, that we should ignore or try to bypass the reader’s responses, prior knowledge, or interests. On the contrary, reading cannot help but involve an inter-mingling of our experience and what the author says and perhaps means. But it does not follow from this fact that instruction should give equal weight to personal reactions to a text when the goal is close reading. On the contrary: we must constantly be alert to how and where our own prejudices (literally, pre-judging) may be interfering with meaning-making of the text.

Here is how the caution is cast in a college handout on close reading for students:

One word of caution: context needs to be examined with care. Don’t assume that the context of your own class or gender or culture is informing you correctly. Read context as actively and as rigorously as you read text!

This is especially true when reading rich, unusual, and controversial writings. Our job is to suspend judgment as we read – and be wary of projecting our own prior experience.

Let me offer one of my favorite sections of text to illustrate the point – two early sections from Nietzsche’s Beyond Good and Evil:

SUPPOSING that Truth is a woman–what then? Is there not ground for suspecting that all philosophers, in so far as they have been dogmatists, have failed to understand women–that the terrible seriousness and clumsy importunity with which they have usually paid their addresses to Truth, have been unskilled and unseemly methods for winning a woman?…

5. That which causes philosophers to be regarded half-distrustfully and half-mockingly, is not the oft-repeated discovery how innocent they are–how often and easily they make mistakes and lose their way, in short, how childish and childlike they are,–but that there is not enough honest dealing with them, whereas they all raise a loud and virtuous outcry when the problem of truthfulness is even hinted at in the remotest manner. They all pose as though their real opinions had been discovered and attained through the self-evolving of a cold, pure, divinely indifferent dialectic (in contrast to all sorts of mystics, who, fairer and foolisher, talk of “inspiration”), whereas, in fact, a prejudiced proposition, idea, or “suggestion,” which is generally their heart’s desire abstracted and refined, is defended by them with arguments sought out after the event. They are all advocates who do not wish to be regarded as such, generally astute defenders, also, of their prejudices, which they dub “truths,”–and VERY far from having the conscience which bravely admits this to itself, very far from having the good taste of the courage which goes so far as to let this be understood, perhaps to warn friend or foe, or in cheerful confidence and self-ridicule.

This is a classic close reading challenge: one has to read and re-read to make sense of things – even though all the words are familiar. And one has to put many prejudices and associations aside – about august philosophers, about scholarship, about “reason,” about truth and our motives in seeking it, about manhood! – to understand and appreciate what Nietzsche is driving at.

Oh, C’mon Grant: I Teach Little Kids

No matter. The same close reading needs to be done with every Frog and Toad story. Let’s consider my favorite, “Spring.” Frog wants Toad to wake up from hibernation to play on a nice April spring day. Toad resists all entreaties to wake up and play. The climax of the story comes here:

“But, Toad,” cried Frog, “you will miss all the fun!”

“Listen, Frog,” said Toad.  “How long have I been asleep?”
“You have been asleep since November,” said Frog.
“Well then,” said Toad, “a little more sleep will not hurt me.  Come back again and wake me up at about half past May.  Good night, Frog.”
“But, Toad,” said Frog, “I will be lonely until then.”
Toad did not answer.  He had fallen asleep.

Frog looked at Toad’s calendar.  The November page was still on top.
Frog tore off the November page.
He tore off the December page.
And the January page, the February page, and the March page.

He came to the April page.  Frog tore off the April page too.
Then Frog ran back to Toad’s bed.  “Toad, Toad, wake up.  It is May now.”

“What?” said Toad.  “Can it be May so soon?”
“Yes,” said Frog.  “Look at your calendar.”

Toad looked at the calendar.  The May page was on top.
“Why, it is May!” said Toad as he climbed out of bed.

Then he and Frog ran outside to see how the world was looking in the Spring.

All sorts of interesting questions can be raised here – all of which demand a close (re-)reading:

    • Why did Frog try to wake Toad? How selfish or selfless was he being?
    • How did Frog eventually get Toad to get up? Why did he do that (i.e. trick him)?
    • Why didn’t the other attempts work to rouse Toad?
    • What convinced Toad? Why did it convince him?
    • Is Frog being a good friend here? Is Toad? (The title of the book, of course, is Frog and Toad Are Friends).

Notice that we could ask the following reader-response-like questions:

A. Have you ever been tricked like that, or tricked someone else? Why did you trick them or they trick you?

B. Do real friends trick friends? Is Frog really being a good friend here?

From my vantage point, however, in light of what we have said so far, the first question pair is less fruitful to consider – less ‘close’ –  than the second pair. The first pair takes you away from the text; the second pair takes you right back to the text for a closer read.


The Openness Required In Close Reading

Close reading, then, requires openness to being taught. Mortimer Adler and Charles Van Doren in their seminal text How To Read A Book make this issue of openness quite explicit at the outset. When the goal is understanding (instead of enjoyment or information only), we must assume that there is something the writer grasps that we do not:

The writer is communicating something that can increase the reader’s understanding… What are the conditions under which this kind of reading – reading for understanding – takes place? There are two. First, there is an initial inequality in understanding. The writer must be “superior” to the reader in understanding… Second, the reader must be able to overcome this inequality in some degree… To the extent that this equality is approached, clarity of communication is achieved.

In short, we can only learn from our “betters.” We must know who they are and how to learn from them. The person who possesses this sort of knowledge possesses the art of reading.

The essence of such open reading is active questioning of the text. As the authors say, the “one simple prescription is… Ask questions while you read – questions that you yourself must try to answer in the course of reading.”

Here are the four questions at the heart of the book:

What is the book about as a whole? You must try to discover the leading theme of the book, and how the author develops this theme in an orderly way…

What is being said in detail, and how? You must try to discover the main ideas, assertions, and arguments that constitute the author’s particular message.

Is the book true, in whole or in part? You cannot answer this question until you have answered the first two. You have to know what is being said before you can decide whether it is true or not. When you understand a book, however, you are obligated to make up your own mind. Knowing the author’s mind is not enough.

What of it? If the book has given you information, you must ask about its significance. Why does the author think it is important to know these things? Is it important to you to know them?

Note the caution: you shouldn’t jump to judging the merit or significance of the work before understanding it – a maxim of close reading.

The bulk of the book describes dozens of practical tips, with examples, for how to annotate texts and develop better habits of active reading in pursuit of the answers to these reader questions. I can heartily recommend How To Read a Book as one of the best resources ever written for learning close reading. Hard to argue with the facts: written in 1940 and a longtime best-seller, it has had over 30 printings and is still used today.

Most importantly, to yours truly, How To Read a Book taught me how to read properly. It was in a brief skim of Adler’s book, while lounging in a friend’s dorm room when I was a junior at St. John’s College – the Great Books school – that I realized with a terrible shock that I had never really learned how to read actively and carefully up until that moment. The book changed my life: I became more skilled, confident, and willing as a reader, and I went into teaching in part motivated by the simple yet powerful lessons the book taught me about the joys of reading and thinking.

What St. John’s also taught me is the power of so-called Socratic Seminar – the way all of our classes were run – for learning close reading. Indeed, that’s all a good seminar is: a shared close reading of a complex text in which students propose emerging understandings, supported by textual evidence, with occasional reminders and re-direction by teacher-facilitators.

So, ELA and English teachers – and history, math, art, and science teachers too: let’s teach kids the joys that come from discerning the richness in a great text, be it Frog and Toad, Plato’s Apology, Euclid’s Elements, or Picasso’s Guernica. I think you’ll be surprised how much a wise text can teach and reach even the most unruly kid – and, in the end, make them feel wiser, too.

This post first appeared on Grant’s personal blog; image attribution flickr users katerha and deepcwind

Categories
Literacy

Text Complexity? Helping Readers See The Whole Text

Text Complexity? Helping Readers See The Whole Text

by Grant Wiggins, Authentic Education

Selecting Text For Comprehension

In the previous literacy posts in this series I identified a few guiding questions that stem from the research:

  1. Do students understand the real point of academic reading?
  2. Do students understand that the aim of instruction is transfer of learning?
  3. Am I using the right texts for making clear the value of strategies?
  4. Do students understand the difference between self-monitoring understanding and knowing what they might do when understanding does not occur?
  5. Am I attending to the fewest, most powerful comprehension strategies for academic literacy?
  6. Am I helping them build a flexible repertoire instead of teaching strategies in isolation?
  7. Do students have sufficient general understanding of the strategies (which is key to transfer)?
  8. Am I doing enough ongoing formal assessment of student comprehension, strategy use, and tolerance of ambiguity?

In this post we consider question #3, on the appropriate texts to use to develop text comprehension.

The Challenge Of A Common Language

I began this series by reminding readers that NAEP results show flat scores and far too weak results on text comprehension over 30 years, in middle and high school. (The gains have come in lower grades in terms of basic decoding and literal reading). Questions on “main idea” and “author purpose” on state tests also reveal this problem over a long time frame, as I noted in looking at some past test questions and item analysis.

Sitting in on numerous classes over the years reveals a key source of the problem: students are rarely expected to read a multi-page complete non-fiction text and be assessed on their grasp of it as a whole. Rather, most large-group instruction or reader-workshop mini-lessons involve small bits of text, typically no more than a few paragraphs. How can you possibly develop the ability to comprehend whole texts this way?

Such bits of learning can lead to absurd lessons. A well-known Toolkit – highly regarded, even by me – offers this teacher script for a lesson on how to distinguish importance:

“To make it easier to sort through all the facts we are learning, let’s look at this three-column form. There are columns for Important Information, Interesting Details, and, of course, My Thinking. In the first column we’ll record the important things we want to remember about the topic. But sometimes it’s those interesting details that really engage us. We can add some of those in the second column…”

How can you judge importance without grasping the purpose? Nowhere is the criterion of “important to remember” discussed. To understand a text, and therefore to judge what is “important information,” you have to know the author’s purpose and the main ideas of the text. You simply cannot identify what is “important” vs. “merely interesting” by reading a brief excerpt. In fact, the prompt leads students away from the text: “things we want to remember.” But what if that was not at all what the author was trying to say?

Worse, this is a deficient categorization: a detail could be both interesting and important. In fact, students in the provided transcript get hung up on this point in a few cases! Finally, how would any reader judge what is “important to remember” without asking the question: “Important to remember for what purpose?” (Another serious deficiency in the advice to teachers is that the overview of the lesson talks about learning to find important ideas, but that gets turned into important information in the text of the lesson, as the excerpt above shows. This confusion about facts vs. ideas is rampant in many lessons I have witnessed.)

In other words, when you read only a brief excerpt from a text, there is no practical difference between important vs. supporting information, between summary and message. Thus, the vital distinctions between topic, main idea, and summary get blurred. I have watched students get completely confused about these concepts because the brief resources and lessons easily led to muddled thinking. Indeed, I have heard more than a few teachers equate main idea and summary at different points in their teaching.

A Counter-Intuitive Choice Of Texts

Thus, we need to do something unobvious in our reading choices: we must choose complete fiction and non-fiction texts that can be easily read and grasped literally by all students, so that summarizing is easy – yet in which the main ideas are not obvious. Otherwise, there is little use for true comprehension, specific strategies, or distinctions between ideas and information. (Most blog readers who took the Kant test in Post #1 experienced this tension at their own reading level.)

If I were teaching 7th grade ELA, therefore, I would begin my year with Aesop’s Fables. The whole point of each Fable is an explicit “moral of the story” – a general life lesson stated at the end of the tale. We would start by reading one or two in which the moral is provided, with modeling of analysis; then students would be asked to generate the moral of a few stories on their own, in gradual-release fashion. The text is easy; the inferring is challenging. You could also use very easy readings from much earlier grades, including fairy tales and short non-fiction books – with the added virtue that struggling readers would start off on the right foot, since the texts and discussions would be accessible.

Another recurring “text” would be New Yorker and editorial cartoons, along with familiar but rich song lyrics, to help students understand that a text’s message may nowhere be stated explicitly – that even in very short texts inference is essential. (As I have written before, calling “inference” a strategy is categorically wrong: reading for meaning is all about inference.) I would also have them read a few satires, such as The True Story of the 3 Little Pigs! by A. Wolf. Satire has the virtue of painting a sharp contrast between topic, summary, and the author’s point. All of these early moves would build clarity of goal – understand by making meaning of the whole and see how the parts support the whole – and confidence in all readers.

Further along in the year, there would be paired non-fiction and fiction readings in which the topic was the same but each author’s point was different. Students would be asked to compare and contrast regularly. Essential questions would frame cross-text debate and regular Socratic Seminars. (In the ASCD DVD on Essential Questions, you can see me leading a seminar with high-schoolers using readings and activities linked by the EQ: Who Sees? Who Is Blind?)

One of my favorite moves in terms of matched non-fiction readings was to have students read selections from the history textbooks of other countries. Here is a selection from one on the Revolutionary era:

What then were the causes of the American Revolution? It used to be argued that the Revolution was caused by the tyranny of the British government in the years following the Seven Years War. This view is no longer acceptable. Historians now recognize that the British colonies were the freest in the world…

The French menace was removed after 1763 and the colonies no longer felt dependent on England’s aid. This did not mean that they wished for independence. The great majority of the colonists were loyal, even after the Stamp Act. They were proud of the Empire and its liberties…In the years following the Stamp Act a small minority of radicals began to work for independence. They watched for every opportunity of stirring up trouble….The radicals immediately seized the opportunity of making a crisis and in Boston it was this group who staged the Boston Tea Party…. In the Thirteen Colonies the Revolution had really been a civil war in which the whole population was torn with conflicting loyalties. John Adams later said that in 1776 probably not more than one-third of the people favored war.

Where is this from? A Canadian textbook! Pair it with the relevant section from the students’ History textbook in 8th grade, and you have a recipe for engaged reading for meaning – indeed, further research. (I have done similar things in science by having students read Ptolemy’s proof that the earth is stationary and at the center of the universe.)

It is vital, therefore, to assess progress in understanding the whole of a text. I would ask students each week to title an article they had read (with the actual title removed or covered) and justify their choice of title. I would supplement this activity with similar titling questions from released state and national tests (since such items are often used to test for understanding of main idea/author purpose). And there would be a regular cold read and short-answer test on the main idea of a non-fiction article. (None of these need be deemed formal grades until second semester.)

Students would thus need to be taught and constantly practice a rudimentary logic: What’s the conclusion, the point? How do you infer this? How did the author take us there, i.e. what are the key pieces in the argument that supposedly support the conclusion?

Without understanding rudimentary logic it is almost impossible to understand the difference between “important” and “unimportant” parts of a text; and it is almost impossible to read beyond a word-by-word approach, which research shows undercuts understanding. Nor is it possible to meet the argument-related standards at the heart of the ELA Common Core standards.

Less Is More? Comprehension Strategies

Once students fully understood that their job is to think about what they read so that they grasp the “logic” of a whole text, I would present them with texts in which the message is obscured and that therefore demand care in thinking as they read.

Some obvious secondary-level text candidates include Motel of the Mysteries, the mock anthropological study of the mythical “Nacirema” tribe, and editorials and op-ed essays with unusual views on controversial topics. Poems are obvious candidates; so are puzzling allegories like The Lottery, Plato’s Allegory of the Cave, and the math story Flatland – all highly thought-provoking readings, though relatively easy to grasp at a surface level. But we need many more good examples of nonfiction than we have at present, in which the goal is not to learn information but to ponder important ideas and arguments. Otherwise, there is far too little need to invoke any strategies.

As for reading strategies, I would use a very small set, as noted in the key research mentioned in previous posts. In addition to heavy attention to metacognitive self-monitoring (to be discussed in a later post), I would highlight questioning, summarizing, and outlining the logic.

In short, we worry too much about Lexile scores and “grade-level texts” and not enough about designing backward from our goal of text comprehension via intellectually-challenging whole readings that elicit thought and thus a need for strategies. Yes, I know what the Standards say about text difficulty; that’s a goal. But I am quite confident that – paradoxically – we would be more likely to meet grade-level standards in the end, by starting off with easier below-grade-level complete texts worthy of reading and thinking about. Otherwise, we quickly overwhelm and lose struggling readers with too-difficult text and a grab-bag of too many strategies.

I welcome suggestions from readers about non-fiction complete texts that have worked for them, in helping students to become better comprehenders via close-reading strategies.

Further Resources

Four books written for teachers stand out for me as helpful resources in this challenge: Notice and Note by Beers and Probst; Teaching Argument Writing by Hillocks; Deeper Reading by Kelly Gallagher; and the previously mentioned Questioning the Author by Beck et al. These books, written for secondary-level teachers, are chock-full of sensible advice and helpful tools for readers to use.

But by far the best book for learning to read intellectually challenging books is a classic: How to Read a Book by Mortimer Adler and Charles van Doren. This book transformed me as a college student from a lazy to an active and more careful reader, and many of my students have told me that this book was a life-saver for them as well when they went to college.

Adapted image attribution flickr user externus; This article was excerpted from a post that first appeared on Grant’s personal blog; Grant can be found on twitter here

Categories
Learning

Experiential Learning: Just Because It’s Hands-On Doesn’t Mean It’s Minds-On

Experiential Learning: Just Because It’s Hands-On Doesn’t Mean It’s Minds-On

Ed note: On May 26, 2015, Grant Wiggins passed away. Grant was tremendously influential on TeachThought’s approach to education, and we were lucky enough for him to contribute his content to our site. Occasionally, we are going to go back and re-share his most memorable posts. This is one of those posts. Thankfully his company, Authentic Education, is carrying on and extending the work that Grant developed.

by Grant Wiggins, Authentic Education

I recently visited Thetford Academy in Vermont (one of the few public-private academies in New England, and an interesting one), where they have a formal and explicit commitment to “experiential learning.” So, the leaders of the school asked me to visit classes that were doing experiential learning and to talk with staff at day’s end about it.

I saw some great examples of such instruction. I visited the design tech course (see photos) and the class on the Connecticut River where students were learning about soil types prior to a wetlands field trip.

[Photos: the design tech course at Thetford Academy]

I also spent the previous day at the Riverdale School where all 9th graders were learning the skills and habits of innovation and entrepreneurship as part of a cool new project headed by John Kao, former Harvard Business School innovation guru. (I am a consultant to the Edgemakers project).

Below are some pictures from the “Design a better backpack” exercise that started the work of the day.

[Photos: the “Design a better backpack” exercise]

But the gist of my remarks at Thetford was to propose caution: just because work is hands-on does not mean it is minds-on. Many projects, problems, situations, and field trips do not yield lasting and transferable learning because too little attention is given to the meta-cognitive and idea-building work that turns a single experience into insight and later application.

Years ago when I worked as a consultant at School Without Walls in Rochester NY (one of the first really interesting alternative High Schools to emerge from the 60s and a member of the Coalition of Essential Schools), they put it very succinctly in their caution about all the independent projects students routinely did. If you were going to learn carpentry to build a chair, then “The learning is not the chair; it is the learning about learning about chairs, chair-making and oneself.”

I have also often used the following soccer example, because it makes the same point beautifully and practically. Merely playing the game over and over need not cause understanding and transfer. It takes a deliberate processing of the game experience, as summarized in the powerful approach used by my daughter’s high school coach a few years back. Instead of talking on and on at players at half-time, Griff asked 4 key questions of players:

      • What’s working for us?
      • What’s not working for us?
      • What’s working for the other team?
      • So, what do we have to do in the 2nd half?

My daughter (now a starter at Stony Brook University) has often remarked that Griff was really the only coach through HS who taught her to “think soccer,” and it paid off in her growth and the team’s success.

As a coach of soccer, baseball, and Socratic Seminar, I learned this lesson the hard way many times myself. I often over-estimated student understanding as to the purpose of activities and assignments, and the important learnings from the experiences. My teaching became far more focused and effective when I forced kids to be metacognitive and reflective about what had been achieved against goals. So, for example, 30 years ago I used a variant of Griff’s questions towards the end of each Socratic Seminar:

      • What have been the highlights?
      • What have been the rough spots?
      • What do we now understand?
      • What do we still not understand?
      • Whose voices didn’t we hear? Why?

With the Thetford staff I prompted a focused discussion in a 2-part exercise: What is the difference between effective and ineffective experiential learning? What are the key indicators to look for in judging whether your attempt at experiential learning is working? (Hint: mere engagement is NOT sufficient.) You might try this exercise locally.

The answers are not surprising but worth committing to. One of the most frequent answers is a clear and specific sense of purpose, linking the activity to the WHY? question – We’re doing this because… We’re learning this because… etc. The other common answer is that the activity needs to be processed in terms of what was and wasn’t learned. (It is key that students can explain this independently. Many teachers assume that because they said something about purpose at the start, students can answer these questions later on. That is often not the case.)

A third optional part of the exercise is to share examples of the most powerful experiential learning in one’s own experience as a learner to provide a check and to go beyond the earlier answers.

Whenever I visit a class, I always ask kids the three questions at the heart of this caution:

  • What are you doing?
  • Why are you doing it?
  • What does this help you do that’s important?

Alas, many kids do not provide adequate answers. And that’s why we need to worry about merely hands-on learning – even as hands-on learning is vital for making abstractions come to life.

This article was excerpted from a post that first appeared on Grant’s personal blog; Grant can be found on twitter here; Experiential Learning: Just Because It’s Hands-On Doesn’t Mean It’s Minds-On; image attribution flickr user nasagoddardacademy

Categories
Teaching

Assessment: Why Item Analyses Are So Important

Assessment: Why Item Analyses Are So Important

Ed note: On May 26, 2015, Grant Wiggins passed away. Grant was tremendously influential on TeachThought’s approach to education, and we were lucky enough for him to contribute his content to our site. Occasionally, we are going to go back and re-share his most memorable posts. This is one of those posts. Thankfully his company, Authentic Education, is carrying on and extending the work that Grant developed.

by Grant Wiggins, Authentic Education

As I have often written, the Common Core Standards are just common sense – but the devil is in the details of implementation. And in light of the excessive secrecy surrounding the test items and their months-later analysis, educators are in the unfortunate and absurd position of having to guess what the opaque results mean for instruction. It might be amusing if there weren’t personal high stakes of teacher accountability attached to the results.

So, using the sample of released items in the NY tests, I spent some time this weekend looking over the 8th grade math results and items to see what was to be learned – and I came away appalled at what I found.

Readers will recall that the whole point of the Standards is that they be embedded in complex problems that require both content and practice standards. But what were the hardest questions on the 8th grade test? Picayune, isolated, and needlessly complex calculations of numbers using scientific notation. And in one case, an item is patently invalid in its convoluted use of the English language to set up the prompt, as we shall see.

As I have long written, there is a sorry record in mass testing of sacrificing validity for reliability. This test seems like a prime example. Score what is easy to score, regardless of the intent of the Standards. There are 28 8th grade math standards. Why do such arguably less important standards have at least 5 items related to them? (Who decided which standards were most important? Who decided to test the standards in complete isolation from one another simply because that is psychometrically cleaner?)

Here are the released items related to scientific notation:

[Images: the five released items on scientific notation, including the convoluted “Saturn” item discussed below]

It is this last item that put me over the edge.

The item analysis. Here are the results from the BOCES report to one school on the item analysis for questions related to scientific notation. The first number, cast as a decimal, reflects the % of correct answers statewide in NY. So, for the first item, question #8, only 26% of students in NY got this one right. The following decimals reflect regional and local percentages for a specific district: in this district 37% got the right answer, and in this school, 36% got it right. The two remaining numbers reflect the differences between the state score and the district and school scores (.11 and .10, respectively).
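
To make the arithmetic concrete, here is how those two difference figures are derived for question #8, using the percentages just cited:

  • district vs. state: 0.37 - 0.26 = 0.11
  • school vs. state: 0.36 - 0.26 = 0.10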

[Images: item-analysis tables for questions #22, #14, #13, #11, and #8]

Notice that, on average, only 36% of New York State 8th graders got these 5 questions right, pulling down their overall scores considerably.

Now ask yourself: given the poor results on all 5 questions – questions that involve isolated and annoying computations, hardly central to the import of the Standards – would you be willing to consider this as a valid measure of the Content and Process Standards in action? And would you be happy if your accountability scores went down as a teacher of 8th grade math, based on these results? Neither would I.

There are 28 Standards in 8th grade math, and scientific notation figures in 4 of them. Surely, from an intellectual point of view, the many standards on linear relationships and the Pythagorean theorem are of greater importance than scientific notation. But the released items and the math suggest each standard was assessed 3-4 times in isolation prior to the few constructed-response items. Why 5 items for this Standard?

It gets worse. In the introduction to the released tests, the following reassuring comments are made about how items will be analyzed and discussed:

[Image: the introduction to the released items, promising commentary and rationales for each item]

Fair enough: you cannot read the student’s mind. At least you DO promise me helpful commentary on each item. But note the third sentence: “The rationales describe why the wrong answer choices are plausible but incorrect and are based on common errors in computation.” (Why only computation? Is this an editorial oversight?) Let’s look at an example for arguably the least valid question of the five:

[Image: the “Saturn” item, with a rationale that merely recites the standard it assesses]

Oh. It is a valid test of understanding because you say it is valid. Your proof of validity comes from simply reciting the standard and saying this item assesses that.

Wait, it gets even worse. Here is the “rationale” for the scoring, with commentary:

[Image: the scoring rationale for the “Saturn” item]

Note the difference in the rationales provided for wrong answers B and C: “may have limited understanding” vs. “may have some understanding… but may have made an error when obtaining the final result.”

This raises a key question unanswered in the item analysis and in the test specs. Does computational error = lack of understanding? Should Answers B and C be scored as equal? (I think not, given the intent of the Standards.) The student “may have some understanding” of the Standard – or may not. Were Answers B and C treated equally? We do not know; we can’t know, given the test security.

So, all you are really saying is: wrong answer.

“Answers A, B, C are plausible but incorrect. They represent common student errors made when subtracting numbers expressed in scientific notation.” Huh? Are we measuring subtraction here or understanding of scientific notation? (Look back at the Standard.)

Not once does the report suggest an equally plausible analysis: students were unable to figure out what this question was asking!!! The English is so convoluted, it took me a few minutes to check and double-check whether I parsed the language properly:

[Image: the “Saturn” item again]

Plausible but incorrect… The wrong answers are “plausible but incorrect.” Hey, wait a minute: that language sounds familiar. That’s what it says under every other item! For example:

[Images: the same “plausible but incorrect” boilerplate repeated under other items]

All they are doing is copying and pasting the same sentence, item after item, and then substituting in the standard being assessed!!  Aren’t you then merely saying: we like all our distractors equally because they are all “plausible” but wrong?

Understanding vs. computation. Let’s look more closely at another set of rationales for a similar problem, to see if we see the same jumbling together of conceptual misunderstanding and minor computational error. Indeed, we do:

[Image: the rationales for a similar item on integer exponents]

Look at the rationale for B, the correct answer: it makes no sense. Yes, the answer is 4 squared, which is an expression equivalent to the prompt. But then they say: “The student may have correctly added the exponents.” That very insecure conclusion is then followed, inexplicably, by great confidence: a student who selects this response “understands the properties of integer exponents” – which is, of course, just the Standard, re-stated. Was this blind recall of a rule or is it evidence of real understanding? We’ll never know from this item and this analysis.

In other words, all the rationales are doing, really, is claiming that the item design is valid – without evidence. We are in fact learning nothing about student understanding, the focus of the Standard.

Hardly the item analysis trumpeted at the outset.

Not what we were promised. More fundamentally, these are not the kinds of questions the Common Core promised us. Merely making the computations trickier is cheap psychometrics, not an insight into student understanding. They are testing what is easy to test, not necessarily what is most important.

By contrast, here is an item from the test that assesses for genuine understanding:

[Image: a released item on linear vs. nonlinear relationships]

This is a challenging item – perfectly suited to the Standard and the spirit of the Standards. It requires understanding the hallmarks of linear and nonlinear relations and doing the needed calculations based on that understanding to determine the answer. But this is a rare question on the test.

Why should the point value of this question be the same as the scientific notation ones?

In sum: questionable. This patchwork of released items, bogus “analysis,” and copy-and-paste “commentary” gives us little insight into the key questions: where are my kids in terms of the Standards? What must we do to improve performance against these Standards?

My weekend analysis, albeit informal, gives me little faith in the operational understanding of the Standards in this design – without further data on how item validity was established, whether any attempt was made to carefully distinguish computational from conceptual errors in the design and scoring, and whether the testmakers even understand the difference between computation and understanding.

It is thus inexcusable for such tests to remain secure, with item analysis and released items dribbled out at the whim of the DOE and the vendor. We need a robust discussion as to whether this kind of test measures what the Standards call for, a discussion that can only occur if the first few years of testing lead to a release of the whole test after it is taken.

New York State teachers deserve better.

This article first appeared on Grant’s personal blog; Grant can be found on twitter here; Assessment: Why Item Analyses Are So Important

Categories
Education

12 Mistakes Schools Make When Introducing The Next Big Thing

12 Mistakes Schools Make When Introducing The Next Big Thing

by Grant Wiggins

Ed note: This post by Grant focuses on mistakes schools make when introducing Understanding by Design in schools. Certainly for that focus it makes sense, as Grant and Jay McTighe designed the framework and would be considered a credible source on how to mess it up. But it also makes sense as an example of the kinds of mistakes schools make when introducing any new “big thing” – classroom management, curriculum, PD, etc. – so we’ve revised it a little to explore not only how to implement UbD poorly, but how to implement any new idea poorly.

Sigh. Despite our cautions, well-meaning local change agents continue to make mistakes in how Understanding by Design (UbD) is implemented. Below, find 12 sure ways of killing the effort, and some suggestions for how to avoid these all-too-common mistakes. While I wrote this for UbD, it applies to any initiative.

1. Fixate on terminology and boxes in the template and provide little or no insight into the issues and purposes that underlie UbD.

INSTEAD:

  • Start with common sense through an exercise: “You really understand if you can…” and use staff answers as the basis for initial experiments in understanding-focused learning.
  • Delay showing all the Template boxes with all their names.
  • Concentrate on making clear that the aim is a better focus on understanding as opposed to superficial coverage
  • Use whatever language makes sense locally to make the process and design tools transparent

2. Mandate that every teacher must use it (UbD) for ALL of their planning immediately (without sufficient training, on-going support, or structured planning time).

INSTEAD: Think big, but start small and smart –

  • Work with volunteers at first
  • Ask all teachers to plan ONE unit in Year One.
  • Encourage teachers to work w/ a colleague or team, and begin w/ a familiar unit topic.
  • Provide additional designated planning and peer review time.
  • Provide online help

3. Introduce it (UbD) immediately as this year’s focus, suggesting that UbD can be fully implemented in a year and that last year’s initiative bears no relation to it. Thus: this, too, shall pass.

INSTEAD: Develop and publish a multi-year plan that links your long-term goals to UbD strengths, and shows how UbD will be slowly implemented as part of a complete strategic plan.

4. Attempt to implement too many initiatives simultaneously (e.g., UbD, Differentiated Instruction, Curriculum Mapping, Marzano’s “Strategies” etc.)

INSTEAD: Develop a multi-stage multi-year plan to improve current initiatives via UbD –

  • improve mapping categories
  • differentiate via Essential Questions
  • unpack Standards to identify transfer goals
  • develop a 1-page graphic showing how all local initiatives are really a part of the same one effort (e.g. limbs of a tree, pieces of a puzzle, supports of a building, etc.)

5. Assume that staff members understand the need for it (UbD) and/or will naturally welcome it – i.e., hurriedly prescribe UbD before helping staff to understand and appreciate the need for change, ensuring that they do not own the change.

INSTEAD: Establish the need for a change – the diagnosis – before proposing UbD as a prescription. Make sure that staff see UbD as a logical response to a deficit or opportunity that they recognize and own.

6. Provide one introductory presentation on it (UbD) and assume that teachers now have the ability to implement UbD well.

INSTEAD: Design professional development “backward” from your understanding goals, i.e. practice what UbD preaches –

  • Devote staff meetings and walk-throughs to learning about UbD and trying it out
  • Help PLCs develop action plans for trying out unit ideas while also reading further on unit design and how people learn.
  • Use annual personal goals (SLOs, SGOs, etc.) as the action-research ground for the year, based on understanding goals.

7. Provide UbD training for teachers but not for administrators – or give leaders and supervisors the same training as teachers.

INSTEAD:

  • Establish parallel tracks of training for Principals and Asst. Principals in which they work on how to look for elements of UbD in action. (They do not need training in how to design units, only how to offer feedback)
  • Develop peer review systems so that teachers and administrators work together in informally and formally giving feedback on units
  • Develop supervisory teams to develop a UbD approach to curriculum writing

8. Provide minimal UbD training for some willing teachers in a Train-the-Trainers program, then expect immediate and effective turn-key training of all other staff by those few pioneers.

INSTEAD:

  • Establish a process for carefully soliciting, interviewing, testing, and hiring would-be trainers.
  • Develop a year-long training program
  • Support trainers with on-line and in-person troubleshooting

9. Train people in Stage 1 in Year 1, Stage 2 in Year 2, Stage 3 in Year 3 – ensuring that no useful results will occur for years and that the big picture is rarely seen.

INSTEAD: Train so that designers have tried out a few unit strands through all 3 Stages (e.g. just a design based on 1 Essential Question) at least twice in Year One, then a full-blown unit by year’s end.

10. Announce that it is the official way to (insert functions it’s not good for here) – for example, requiring teachers to use UbD to plan all lessons from here on, even though UbD is not a lesson-planning system.

INSTEAD:

  • Make clear that UbD focuses on unit planning.
  • Provide differentiated freedom in how people write lessons
  • Perhaps make elements of Stages 1 & 2 mandatory, but leave Stage 3 open to personal bent and creativity

11. Standardize all implementation and experimentation. Don’t permit options, alternatives, or different approaches to learning, trying, and using UbD. Don’t play to the particular interests, talents, and readiness of staff.

INSTEAD: Differentiate the UbD work –

  • Build in choices of role (trainers/designers/piloters/observers),
  • Try out simpler as well as full versions of the Template, based on readiness
  • Build a schedule that permits others to join in with R & D later, on a rolling timeline

12. Be thoughtless with the starting point 

INSTEAD: Start with units that are currently not engaging or effective. What do you have to lose?

This is an updated version of material that can be found in Schooling by Design and The UbD Advanced Guide to Unit Design. Both books have many other ideas for how to plan reform to avoid these errors.

Categories
Teaching

Why The Common Core Will Fail

Why The Common Core Will Fail

by Grant Wiggins, Authentic Education

The current strong backlash to the Standards is completely predictable. Any time there is a major push to reform an institution there will be a backlash. As we saw with Obamacare, the flaws become magnified, the supposed benefits seem to many not worth the hassle – and the whole thing gets heavily politicized (i.e., people lie about its harms).

The Common Core suffers from two additional problems: 1) there is no major group of august leaders in varied fields serving as a lobbying group attached to the effort. It all feels like a faceless bureaucratic thing so hated by Americans – akin to the IRS; and 2) the Standards are easy to conflate with standardized testing (and accountability systems based on them) which naturally seems like a step backward, not forward, in improving education nationally. (I eagerly look forward to the teacher lawsuit concerning her weird VAM score from the state of NY.)

However, for me there are two far bigger problems with the Standards themselves – errors that arguably caused the bulk of the current backlash. The writers of the Standards (especially in Math) did a terrible job of 1) justifying the Standards as appropriate to college and workplace readiness, and 2) explaining in detail what the Standards imply for educational practice. The documents simply fail at communicating the kinds of changes the Standards demand locally.

And most state education departments should be viewed as co-defendants in this mess. (One notable exception is math in Georgia – the most forward-looking state curriculum in the country – built, ironically, with RTTT money). This has naturally led to total confusion at the local level as to what counts as valid Common-Core-Based schooling. (I’ll have more to say about this local confusion and how to address it in a follow-up post).

1. No important heavyweight groups are lobbying for the cause of the Standards; no data are presented to justify them. Where, oh where, in the Standards’ introductory materials are quotes and data from University Presidents talking about woeful remediation rates and the dumbing down of college courses necessitated by unprepared admittees? Where are quotes and data from the key people from tech, bio-medical, media, and manufacturing companies standing together to plead for better-prepared workers? Without hard data (or a human face) attached to the document, the Standards will die. Worse, the Standards can then be made to mean anything – including dumb local homework assignments that go viral on Twitter – in the absence of a strong initial and ongoing PR effort to explain what the Standards are and what they are not.

2. We don’t know what the Standards imply. This latter oversight is particularly important since it makes it far more difficult to understand how the Standards do not dictate specific curricular or instructional practices. In short, the Standards mean little without models to support them. And without models of instruction, curriculum and assessment that propose a great vision, the Standards are just ambiguous words, able to mean whatever friend or foe wants them to mean.

There is no reason why a highly innovative curriculum could not meet the Standards: the all-elective system I taught in 40 years ago would have met the Standards, given its rigor; and modern-day Problem-Based Learning systems such as at High Tech High and the completely problem-based math curriculum at Exeter [click on Teaching materials] easily meet the Standards.

In the absence of seeing such innovative approaches to addressing the Standards, many if not most schools are doing the opposite: making education more test-prep, less imaginative; more depressing, timid, retrograde – dreadful. And so, of course, people naturally blame the Standards for this ugly turn.

But standards need not have this effect, and in other fields they do not; standards do not mean standardization, as I wrote 25 years ago. The Building Code does not prevent architects from producing imaginative and client-pleasing house designs; the FDA rules on freshness and nutrition do not undercut the ability of farmers and food producers to offer consumers wonderful new food choices. And the same can be true with educational standards. There are as many ways to meet them as there are educational visions. Alas, visions are in woefully short supply.

The Standards should have provided them. Concretely what this means is two things:

  1. The Standards Documents should have contained dozens of scenarios and case studies of classrooms, schools, and districts where the curriculum, instruction, and local assessment are addressing the Standards validly in interesting and exciting ways.
  2. The documents should have clearly and thoroughly stated which time-honored practices would address the Standards validly and which time-honored practices would not – with explanations as to why. This is truly where the state departments of education are derelict: there are no real guidelines for judging the work underway locally to determine if local educators are on track to meet the Standards – especially in the key area of local assessment and grading.

Yes, I have read the Appendices in ELA; Yes, I have noted the ways in which the Math Standards discuss (very briefly) how Practice and Content Standards might be woven together. All far too little and vague. Consider this meaningless paragraph in the introduction to the Math Standards:

These Standards are not intended to be new names for old ways of doing business. They are a call to take the next step. It is time for states to work together to build on lessons learned from two decades of standards based reforms. It is time to recognize that standards are not just promises to our children, but promises we intend to keep.

In short, the Standards ironically fail to help their own cause by the absence of a good argument, based on good evidence, to support them and their implications – despite the constant highlighting of the value of arguments in the Standards!

When the whole thing collapses and devolves back to the states and consortia of states, as it likely will, it may satisfy the critics, but it will leave us right back where we were before: with no clear vision of a modern education; an absence of clarity about how to address our longstanding problems vis a vis student engagement, instructional depth, assessment rigor, and local tactics that violate best practice; and too many schools that deceive their students into thinking that they are college and workplace ready.

This article was excerpted from a post that first appeared on Grant’s personal blog; Grant can be found on twitter here; image attribution flickr user vancouverfilmschool

Categories
Critical Thinking

Chasing The Definition Of An Academic Argument

Ed note: On May 26, 2015, Grant Wiggins passed away. Grant was tremendously influential on TeachThought’s approach to education, and we were lucky enough for him to contribute his content to our site. Occasionally, we are going to go back and re-share his most memorable posts. This is one of those posts. Thankfully his company, Authentic Education, is carrying on and extending the work that Grant developed.

Chasing The Definition Of An Academic Argument

by Grant Wiggins

The Common Core Standards make crystal clear that college and professional workplace readiness demand student ability to read and write arguments. Indeed, while identifying the three genres of writing in the Anchor Standards, Appendix A stresses the priority of argument:

While all three text types are important, the Standards put particular emphasis on students’ ability to write sound arguments on substantive topics and issues, as this ability is critical to college and career readiness.

What many educators do not fully understand, however, is that the Standards define argument in the narrower sense found in logic rather than in the colloquial sense. Although many people think that an argument is a one-sided attempt at persuading somebody of something, using whatever rhetorical tricks they can muster, an academic argument is more like a scientific paper that aims at understanding, not one-upmanship:

English and education professor Gerald Graff (2003) writes that “argument literacy” is fundamental to being educated. The university is largely an “argument culture,” Graff contends; therefore, K–12 schools should “teach the conflicts” so that students are adept at understanding and engaging in argument (both oral and written) when they enter college. He claims that because argument is not standard in most school curricula, only 20 percent of those who enter college are prepared in this respect. Theorist and critic Neil Postman (1997) calls argument the soul of an education because argument forces a writer to evaluate the strengths and weaknesses of multiple perspectives. When teachers ask students to consider two or more perspectives on a topic or issue, something far beyond surface knowledge is required: students must think critically and deeply, assess the validity of their own thinking, and anticipate counterclaims in opposition to their own assertions.

The unique importance of argument in college and careers is asserted eloquently by Joseph M. Williams and Lawrence McEnerney (n.d.) of the University of Chicago Writing Program. As part of their attempt to explain to new college students the major differences between good high school and college writing, Williams and McEnerney define argument not as “wrangling” but as “a serious and focused conversation among people who are intensely interested in getting to the bottom of things cooperatively.”

To reinforce this point, there is a sidebar in which “persuasion” and “argument” are contrasted:

“Argument” and “Persuasion”

When writing to persuade, writers employ a variety of persuasive strategies. One common strategy is an appeal to the credibility, character, or authority of the writer (or speaker). When writers establish that they are knowledgeable and trustworthy, audiences are more likely to believe what they say. Another is an appeal to the audience’s self-interest, sense of identity, or emotions, any of which can sway an audience. A logical argument, on the other hand, convinces the audience because of the perceived merit and reasonableness of the claims and proofs offered rather than either the emotions the writing evokes in the audience or the character or credentials of the writer. The Standards place special emphasis on writing logical arguments as a particularly important form of college- and career-ready writing.

This has major ramifications for how teachers teach writing in which claims and evidence are advanced. It means that such time-honored rhetorical moves as cherry-picking the data to support one’s views in a History or Science paper can no longer be acceptable. All key counter-evidence and counter-arguments must be addressed. (See one process for argument analysis.)

Why This Is A Problem

Thus, a big error in the grade-level writing standards, IMHO: the specific grade-level standards below 6th grade in writing standard #1 are arbitrary and unwise.

Here is the anchor standard:

1. Write arguments to support claims in an analysis of substantive topics or texts, using valid reasoning and relevant and sufficient evidence.

However, for grades 4 and 5, the following is given as the first writing standard:

1. Write opinion pieces on topics or texts, supporting a point of view with reasons and information.

Why not demand an argument? Why not expect a consideration, no matter how unsophisticated, of counter-evidence and counter-argument? There is no developmental reason not to do so. More to the point, why make students and teachers think that merely offering opinions with a bit of (cherry-picked) evidence is satisfactory for meeting the writing standard – only to be undone in Grade 6? Why wouldn’t the Anchor Standard be used across all grades? At the very least, this is a clear warning: do not just read grade-level standards!

Anyone who has ever worked on Standards Committees as I have knows that the grade-level descriptors are rife with arbitrariness as sub-committees try to pick verbs and adverbs that somehow progress each Standard from grade to grade. But many of these distinctions are without justification (ironically). And in the case of ELA it is superfluous: the Anchor Standards are precisely as they are named – they provide an Anchor for all grade levels. Each and every grade, where it can, should have the same standard, therefore. Then, all that would differ would be our expectations in terms of argument precision, thoroughness, and excellence over time. Put concretely, all that would change from year to year would be the anchor papers.

I’m just saying in a nice way what David Coleman famously said on video: in the real world, no one gives a sh*t about your opinion. In the narrow sense of “mere” opinion without thorough support and argument, he is assuredly correct (even though, out of context, it sounds snarky and harsh).

Why Does This Matter?

I thought readers would find it interesting and helpful to see a Freshman college writing assignment that reflects the key reading and writing standards on argument. As luck would have it, my son has gone back to college after a 5-year hiatus, and he is in a Freshman writing course at Columbia. The attachments below come from his current writing assignment:

  • the writing prompt
  • the two readings the writing must discuss
  • one of the two exemplary papers provided as models

This set provides a crystal-clear example of what college professors expect vis a vis these standards.

Conversation Essay Prompt

Sontag – Looking at War (reading)

Nussbaum – Compassion and Terror (reading)

Conversation – Bressman – Fighting Indifference (model)

The broader lesson here is that the Standards mean very little without knowing the level of rigor expected. Rigor is established not by the teaching but by the assessment: the rigor of the task, the rigor of the models and rubric, and the standard set by the model papers. Faculties that spend all their time thinking about instruction vis a vis the Standards will be missing the whole point of what a Standard is: it specifies outcomes, not inputs. (For a great resource that contains many sample assessments and papers, see David Conley’s book College Knowledge.)

A shout-out to my wife, Denise, who two years ago did a close reading of the Standards and caught this distinction between argument and persuasion that had heretofore gone unnoticed since the key text (quoted) above is in the Appendix, not the Standards themselves.

Image attribution flickr user noneedtoargue; The Definition Of An Academic Argument; This article first appeared on Grant’s personal blog; Grant can be found on twitter here;