Thursday, October 20, 2011

Helping Students Give Feedback

Recently, Frank posted a picture of how his students correct their own tests, and John posted a sample of the feedback his own students give themselves. This provoked a lot of discussion among my online colleagues, and one thing that came up was how difficult it is to get students to leave good feedback.

I noted in the comments of John's post that I found it really interesting how his students were mainly leaving reminder notes and questions for him to answer. In this case, I'm talking about feedback in a traditional teacher-directed sense.

This is something I've really been focusing on this year. It's definitely been rough. I attribute this to the lack of quality feedback students normally get from us, but that's another story. I wish I were friends with more English teachers because this is their bread and butter.

The whole thing is pretty standard but, as usual, I buried my big insight. It's Key Point #2 if you want to skip ahead.

First, I narrowed the scope. I tried to teach them to leave feedback for only one type of question. In the topic on Atoms, we focused on the explanation questions—how you can explain different phenomena through the motion of atoms.

Second, I went through the process of how I'd evaluate those question types and wrote it out as a flow chart. It looks like this, but you could certainly just do it as a series of questions or a checklist.



[Image: feedback flow chart]



I drew it up on the board, but you get it nice and typed. For the science folks: at the time, we used the words atoms, molecules, and particles interchangeably.

Students are supposed to work through the flow chart and then write their feedback based on it. I gave them a sentence frame: each step they pass becomes a positive comment, and when they hit a "no" they step back and write the improvement step.

For example, if you got to the second "no" you'd write something like, "You wrote an explanation for why water boils when heated. Next time your answer needs to mention atoms, molecules, or particles." or "You wrote an explanation for why water boils when heated and included atoms. Next time your answer should include how atoms move."

After that it was pretty standard. We used some generic sample answers to try as a class. Then they wrote peer and self feedback on some of their previous answers. I also had them predict some things they hadn't seen yet, like sticking a balloon in a freezer, and they traded and wrote peer feedback on those as well. Each time we wrote a set of explanation questions, we'd do this for at least one of them.

After a few tries at this, I added another step at the end of the flow chart: "Does the explanation connect the motion of the molecules back to the observation or prediction?" This was much harder for kids to get than the first three steps, but it's also a much harder skill.
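
If it helps to see the logic laid out, here's a rough sketch of the full chart as code. This is purely illustrative; the wording and the function are mine, not anything we used in class. Each "yes" becomes a positive comment, and the first "no" becomes the improvement step:

    # Sketch of the feedback flow chart as a list of ordered checks.
    # Each check pairs a positive comment with an improvement step.
    CHECKS = [
        ("Is it an explanation?",
         "You wrote an explanation for the observation.",
         "Next time your answer needs to explain why it happens."),
        ("Does it mention atoms, molecules, or particles?",
         "You included atoms, molecules, or particles.",
         "Next time your answer needs to mention atoms, molecules, or particles."),
        ("Does it say how the atoms move?",
         "You included how the atoms move.",
         "Next time your answer should include how atoms move."),
        # The step added after a few rounds -- the hardest one:
        ("Does it connect the motion back to the observation or prediction?",
         "You connected the motion back to what you observed.",
         "Next time connect the motion of the molecules back to the observation."),
    ]

    def write_feedback(results):
        """results: one True/False per check, in flow chart order."""
        comments = []
        for passed, (question, positive, improvement) in zip(results, CHECKS):
            if passed:
                comments.append(positive)
            else:
                comments.append(improvement)  # step back and write the fix
                break                         # stop at the first "no"
        return " ".join(comments)

    # A student who mentioned atoms but not their motion:
    print(write_feedback([True, True, False]))

That last line prints exactly the shape of the sentence frames above: a positive comment for every step passed, then one concrete next step.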


Two key points:

  1. My real goal here is that they do this enough times that, when they're writing their own explanations, they have the flow chart running through their heads. Is this an explanation? Did I talk about atoms? Did I talk about how atoms move? Did I explain how that's related to what I observed/predicted?
  2. What's really important is what's missing. I didn't have them leave feedback for correctness. This is where kids usually get hung up on feedback: if I don't know the answer myself, how can I give that kind of feedback? I can't tell where you went wrong if I don't know what the right move is. What I want them to look at is the quality of the explanation itself. Every kid can look at a written explanation and decide whether it has certain qualities. It's important for students to understand that an answer can be factually incorrect and still be a quality explanation, and vice versa. These are two different skills, and we're going to improve both. This is the part I feel like I got right. Student feedback can't depend on the level of content knowledge.



I'm giving this a tentative endorsement. The higher-level feedback is still up to me, but this has definitely helped move the lower- and middle-level responses up a notch.


One teacher bonus I noticed from doing this is how dumb it is when I ask a kid to "check your answers." Yes, sometimes there are careless errors they can catch. Mostly, though, if they didn't know it when they answered the first time, they still don't know it. (Actual quote: "I checked it. I still don't get it.") I really need to do a better job of teaching my students different methods for verifying an answer.

Tuesday, October 18, 2011

The Best $5 You'll Spend This Week

John Spencer is having Reader Appreciation Week. You can get each of his books for $1 on Kindle. He's changed positions recently, but I still think of him as a sixth-grade teacher.

There's something for everyone. He's got a book with help for new teachers (Sustainable Start), YA lit (Drawn into Danger), edtech satire (Pencil Me In), and general thoughts on education (Sages and Lunatics & Teaching Unmasked).

For full disclosure, I consider him a friend in the "I've never met him face to face but interact with him online" category that most of my teacher friends now fall into. However, he didn't ask me to do this and, in fact, wouldn't even consider asking.

Note that you don't actually have to own a Kindle to read them. You can get the apps for your mobile or download the reader software for your desktop or laptop.


(PS - I didn't forget about the assessment overview I promised in The Cycle but I have no idea how I'm going to translate this crazy bubble-flow-mind-timeline chart thing I drew into a blog post)

Friday, October 14, 2011

Test Deconstruction

I used to pre-test my students. There were two main purposes. I wanted to use it as a diagnostic to see what they already knew, and I wanted to focus them on what was coming.

As a diagnostic, the test was pretty useless. All my students come in with essentially zero content knowledge of what we're going to learn. I mean, a few might be able to shout out a half-remembered vocabulary word, but I haven't had any students who can go beyond that. I get the information I really need from whatever I use to launch a topic: the ball-and-hoop demo when we learn about atoms, or just having them predict what will sink or float when we start on density and buoyancy.

In terms of focus, well, that didn't work so great either. They'd fail their way through the pre-test, and since they had zero pre-exposure, none of what they saw on the test would stick. They didn't have anything to anchor it to.

Now I go with test deconstruction. In the cycle, this happens after they've run their own experiment and teased out the big ideas.

I give them a copy of a test like the one I'm planning to give them: same format, testing the same standards, but the questions aren't identical. In their notebooks, they draw four columns. The first column is just the letter of the standard, which is listed next to each test question.

The second column is "What do I need to do?" They write what the question actually requires. Do they need to label a diagram? Explain something? Draw a picture? Fill in the blank? They have a copy of Costa's questions in their notebooks to help them along. I picked up this page either at an AVID conference or somewhere online. Box.net doesn't like the font I used, but you get the idea.



The third column is "What do I need to know?" and the fourth is "Key Vocabulary."

Obviously identifying the content knowledge and vocabulary required is an important reason for doing this, but there are two other things I'm trying to accomplish.

Number one, I want them to understand how fundamentally different this question is:

[Image: sample test question (atomslg4.pages)]

from:

A. Draw and label an atom.

or:

B. Select from the word bank and label the diagram.

or:

C. List the subatomic particles of an atom.

In the first example, they'll need to know the parts of an atom, where they are located, and what those little symbols might mean. They're not asked to do the actual drawing from memory as in example A (that comes later, when we start on the periodic table). In B, they're only asked to recognize the names, not memorize them, and know the locations. In example C, they're only asked to remember the names, not the locations or what those symbols mean. However, they are required to know what "subatomic" means. These are different questions that require different levels of knowledge and different skills. More importantly, students need to prepare for them differently. I can't just tell a failing kid to "study." They don't know how to study. It's something that needs to be taught.
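
To make the contrast concrete, here's a sketch of how the columns might come out for A, B, and C. These are my own illustrative entries, not a transcript of anyone's notebook:

    # Illustrative four-column deconstruction for the three sample questions.
    # Columns: question letter, "What do I need to do?",
    # "What do I need to know?", and key vocabulary.
    deconstruction = [
        {"question": "A",
         "do": "Draw and label an atom from memory",
         "know": "The parts of an atom and where each one goes",
         "vocabulary": ["atom", "label"]},
        {"question": "B",
         "do": "Pick names from the word bank and label the diagram",
         "know": "Recognize the names and know the locations",
         "vocabulary": ["word bank", "diagram", "label"]},
        {"question": "C",
         "do": "List the names of the subatomic particles",
         "know": "The names only, not the locations",
         "vocabulary": ["subatomic", "particle"]},
    ]

    for row in deconstruction:
        print(row["question"], "|", row["do"], "|", row["know"])

Same topic, three very different rows. That's the point I want them to see.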

Number two, student-friendly learning goals! One of my rules for teaching is: don't do the thinking for the students. If your learning goals are in teacher language ("Identify and label the subatomic particles of an atom") and you translate that for them, you're doing the thinking for them. I want my kids to know what "identify," "label," and "subatomic" mean. Why in the world would I translate that for them? The columns translate easily into a learning goal for each standard, and now when I say, "Today we're going to work on identifying and labeling the subatomic particles of an atom," they've already got something written up to decode what it all means.[1]


They write the goals up in their science notebooks next to that topic's table of contents. Whenever we add something to the TOC, we can refer to the learning goals at the same time.


[1]: Now is not the time, but at some point you'll need to remind me to preach for judicious use of the learning goal.