Tuesday, December 22, 2009

Accepting defeat

Wired.com has a good series right now on failure. I've been stuck contemplating this article by Jonah Lehrer. He's written a couple of really fascinating books on the brain. His piece is on how scientists deal, or don't deal, with failure. Here's a quote I've been kicking around in my head:

The lesson is that not all data is created equal in our mind’s eye: When it comes to interpreting our experiments, we see what we want to see and disregard the rest. The physics students, for instance, didn’t watch the video and wonder whether Galileo might be wrong. Instead, they put their trust in theory, tuning out whatever it couldn’t explain. Belief, in other words, is a kind of blindness.
What beliefs are blinding me?

I've been working on reforming my school's assessment practices for a while now. I keep getting stuck on the belief that teachers aren't buying in because they're the kind of teachers who resist any change, or maybe because they're just bad teachers. I interpret every response in that way. I need to take a step back and have those difficult conversations. To paraphrase Susan Brookhart, when I listen I need to stop evaluating and start interpreting. What is it that the teachers at my school really need? What is really holding them back?

Thursday, December 17, 2009

All Else is Never Equal

In a previous post I mentioned one of the problems with educational research. A post by Andrew Gelman brings up another problem. I'm actually quoting Andrew Gelman quoting himself here:
The "All Else Equal" Fallacy: Assuming that everything else is held constant, even when it's not gonna be.
There are many books that try to tell us what works in schools. I certainly try to use them as a guide. However, they commit the all-else-equal fallacy by removing the results from their context. Nearly all educational research looks like this:
  1. Multiple choice pre-test
  2. Treatment Group vs. Control group
  3. Multiple choice post-test
  4. Look at mean, calculate effect size and decide what works
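Step 4 usually boils down to a calculation like Cohen's d: the difference between the group means divided by their pooled standard deviation. Here's a sketch with made-up post-test scores (the numbers are purely illustrative):

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Effect size: difference in means over the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    # Pooled standard deviation across both groups
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled

# Invented post-test scores for illustration only
treatment = [78, 85, 82, 90, 74, 88]
control = [72, 80, 75, 84, 70, 79]
print(round(cohens_d(treatment, control), 2))  # prints 1.08
```

Notice how much the single number hides: two classrooms could produce the same d for completely different reasons, which is exactly the context that gets stripped out of the summaries.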
We then get results in summaries or meta-analyses and are told to do these things because they work. We don't have the context. We're told that all else being equal this will work. But as Gelman says, they're assuming all else will be held constant even when it's not gonna be. What does this mean for us? I guess a few things.

We need to read the original sources as often as possible. While this is not always possible we should try to understand the context of the original experiment as much as we can.

Use research as a map but don't let it take control of the steering wheel. While certain strategies and systems might have a higher probability of working, you can't let it get in the way of what you see in your own classroom.

Conduct your own research. With a standards-based assessment system you are better able to use data to look at differences between your classes and between other classes at your school. In a traditional system all As, Bs, and Cs are not created equal. Even within your own class it's hard to tell why each student has the grade they do, and comparing students between classes is hopeless.
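That comparison can be as simple as averaging each class's scores per standard. A quick sketch, with hypothetical class periods, hypothetical standard names, and a hypothetical 1-4 proficiency scale:

```python
from statistics import mean

# Invented scores on a 1-4 proficiency scale: one list per class per standard
scores = {
    "Period 1": {"Graphing": [3, 4, 2, 3], "Lab Safety": [4, 4, 3, 4]},
    "Period 2": {"Graphing": [2, 2, 3, 2], "Lab Safety": [4, 3, 4, 4]},
}

for period, standards in scores.items():
    for standard, marks in standards.items():
        print(f"{period} - {standard}: {mean(marks):.2f}")
```

In this made-up data, Period 2 lags on Graphing specifically while matching Period 1 on Lab Safety. A single letter grade averages that difference away, which is exactly the kind of thing you'd want your own data to surface.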

Thursday, December 10, 2009

It creeps into higher-ed too

Rhett Allain blogs about physics over at Dot Physics. He's noticed some of the same problems with what grades are for and what they communicate.

If you haven't checked out his blog you should.

I wouldn't be completely against a mish-mash final grade that included all sorts of weird things the instructor thought were important. But without exception, the student's level of learning attainment needs to be clearly communicated. Most schools that move to a completely standards-based report card have separated out academic and non-academic scores. A student would have a grade for their level of mastery in writing as well as behavior, participation, work completion, and other factors that the school deems important.

In my school, we could have a writing mechanics score as well as an ability-to-bring-in-Kleenex score.

Thursday, December 3, 2009

The battles within

We're on the trimester system and our first one ended a couple weeks ago. I was looking over the grades for all my students. I'd have preferred if certain teachers just came up to me and slapped me in the face instead of unleashing these travesties:


These are two different teachers, although they both teach history. If you can't see the examples: in the first one, the student received extra credit for bringing in Kleenex boxes. In the second, the student received extra credit for keeping her bathroom passes. You know already how I feel about this. Not coincidentally, teacher B is probably one of the biggest resisters to any assessment reform message my group tries to bring to the school.

So at my own school, in my own grade level, I have a teacher who allows students to purchase points and I have a teacher who rewards students for holding their bladder. Change starts at home.

Wednesday, December 2, 2009

I'm your father and you'll do it because I say so!

Remember when you were young and you'd get into an argument with your parents? I was a, let's say, special child in that regard. Eventually one or both of my parents would run out of reasons for doing something and resort to the "because I say so" argument. I have a vague recollection from Philosophy 10 that this is called argument from authority.

Points are our ultimate "because I say so." How many times have you as a student or teacher had this conversation:

Student: Why should I do this?
Teacher: Because it's worth X points.

When we don't do the work required to link our assignments with clear learning goals this is what we fall back on.

The key then to getting your students to complete their assignments is to create a clear link between "doing the work" and learning what you need to learn. Your new conversation should be:

Student: Why should I do this?
Teacher: Because it will help you learn X.

This is obviously a constant and ongoing conversation. I've mentioned it before in an older post but it's one of the crucial mindset shifts that needs to occur for your students to really buy in. You need to make their learning progress clear and relate it to the effort they've put in. Your classroom language is crucial here, which I'll post about in the future.

Here's one example I show my kids to help them link the work with the learning. I take a survey every trimester (I show last year's results in the beginning of the year). There are two questions:

  1. Think of everything you were asked to do in science this trimester, what percentage do you think you completed?
  2. What was your final trimester grade?
When I say everything I mean everything. I tell them to count exit slips, notes, turn-to-your-partner activities, warm-ups, etc., in addition to the traditional worksheets/lab stuff. I throw the results up on the screen the next day:

I remind them that I don't give them points for turning in work. What's going on in the graph is that the students who do more work are learning more. I should probably print this graph out so I can just point to it every time the above conversation happens, but paper has become quite the precious commodity at my school.

As a side note this is a good opportunity to do a mini-lesson on problems with self-report surveys. Generally, the A students tend to under-report how much work they've actually done. Some of them do it because they like to pretend they understood everything easily. Others don't count things like coming after school or getting help from friends during lunch. So while they may have only done 50% of everything in class, they more than made up for it with the extra time they spent outside of class that they don't think counts as work.

The F student results are also skewed because the super-duper F students don't even turn in the survey. I've done this survey 7 times now and every time it ends up similar to the pattern above, with A students doing the most, a nice even tapering by grade, and a huge drop off to F.
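For what it's worth, the tally behind that graph is trivial to compute: group the self-reported completion percentages by final grade and average them. A sketch with invented responses (the 50% A student stands in for the under-reporters mentioned above):

```python
from collections import defaultdict
from statistics import mean

# Invented (grade, percent-completed) survey responses for illustration
responses = [
    ("A", 95), ("A", 50), ("B", 85), ("B", 75),
    ("C", 65), ("C", 60), ("D", 45), ("F", 15),
]

# Collect responses by final trimester grade
by_grade = defaultdict(list)
for grade, pct in responses:
    by_grade[grade].append(pct)

# Average self-reported completion for each grade, A through F
for grade in sorted(by_grade):
    print(f"{grade}: {mean(by_grade[grade]):.0f}% completed")
```

Of course, the real data has the self-report problems described above baked in, so the averages are a conversation starter with students, not a measurement.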