Monday, January 2, 2012

Survey Results

Sorry I've been incommunicado lately. I've been working hard establishing that a single person can in fact get the flu three times in a single month. Now that I've figured that out, you don't have to! I'm a full-service blogger. Also, my seven-day beard doesn't give me a "rugged outdoorsy" look. More like a "slice of wheat bread left in the refrigerator for two months" look. So it was a learningful December.

An artist's rendition of my chin.
http://upload.wikimedia.org/wikipedia/commons/e/eb/Mouldy_bread_alt.jpg


Here are the results of the survey I posted umm...like two months ago at this point. I assumed 90+ = A, 80-89 = B, 70-79 = C, 60-69 = D, and 59 and below = F. (I should have clarified that ahead of time.) If you entered a letter grade, I converted it using that scale, and if you gave a range, I used the mean.
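If you're curious how I'd automate that coding step, here's a rough sketch in Python. The function names and the edge-case handling (what to do with a range, what to do with "butt") are my own assumptions, not anything official:

```python
def to_letter(score):
    """Bucket a numeric score using 90+ = A, 80-89 = B,
    70-79 = C, 60-69 = D, and 59 and below = F."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"

def code_response(raw):
    """Turn one raw survey answer into a letter grade.

    Handles plain numbers ("85"), ranges ("80-85" -> mean of 82.5),
    and letter grades ("B"). Anything uncodeable (e.g. "butt")
    comes back as None.
    """
    raw = raw.strip().upper()
    if len(raw) == 1 and raw in "ABCDF":
        return raw
    try:
        if "-" in raw:
            lo, hi = raw.split("-", 1)
            return to_letter((float(lo) + float(hi)) / 2)
        return to_letter(float(raw))
    except ValueError:
        return None
```

So `code_response("80-85")` lands on a B (mean of 82.5), and `code_response("butt")` just gets dropped.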


I don't know why I put F first on the graph. It's bothering me now to look at it. Plus I had to keep the next graph consistent. It's depressing me. Let's just proceed. The next graph is broken down a little more.



Scores ranged from 0 to 90. Both As were 90. There was a score of "butt" which I wasn't sure how to code and a certain blogger in the Rocky Mountain State left an answer down to the third decimal place.

What am I supposed to take from it? Well I'm interested in what you think. Comments are definitely wanted.

Take a second to think it over before I tell you a couple of my thoughts.
.
.
.
.
.
.
.
.

I've seen this done a few times and what I think it shows and what the presenters thought it showed were different.

You can't take this as an example of how a (10, 5, 4) point scale or Advanced/Proficient/etc. is superior to a 100 point scale. That was the stated purpose the first time I saw this. Obviously we're going to agree on the results more if we're only given 4 choices. Hey, you've got one choice now. We all agree! Moving on.

I also don't think this shows the "arbitrary nature of grading" quite in the way that this has been presented to me either. I'm supposed to look at these results and say, "Same test! Same knowledge shown! Different grades! If only we had a (rubric/checklist/Pearson sales representative)."  The whole point of the exercise was for you to make up your own scoring system. If you give us one, we'll agree more. If I had an actual test with actual student answers, then we could start that convo.


This, by the way, is something that's definitely worth doing with your department. Copy an actual test a kid has turned in. Don't mark it first, because it's also important to see what different people actually count as correct. Then talk about it.




What do I like about this? It's not so much the arbitrary nature of grades I think this gets at, but the very personal nature. I like that it helps you confront your own values. The very first time I did something similar (maybe 4 years ago, so I'm going to fudge the numbers, but the spirit is the same), my thought process went like this:
Well it looks like this student mostly got it. I mean he got all the easy stuff. The super hard stuff he missed everything but I don't really expect him to always get stuff that I didn't directly teach. He maybe deserves about a B. So...hmm..I'll assign 10 points to the easy MCs, 5 points each to the short answers and like 5 points each to the hardest questions. That way they can get everything right except the hardest and still finish with a B. Plus I don't want the hard answers worth too much because those are kind of double jeopardy points. You miss a little of the easy and you're probably going to miss that part again on the hard part. So add it up and he gets an 85. Yeah that seems about right. I'll keep that.
I turned to the teacher at my table and shared my unassailable logic with him. His response:
I don't give a rat's ass if a kid can bubble in some memorized answers. If he can't think, he fails. 30 points each for the two hard answers. 1 point for the MCs and 5 points for the short answers. He got 35.
And you know what? We were both right. I was amazed at 1) how different our reasoning was, and 2) how little thought I had previously put into what a grade actually means to me and how it communicates my values.

Did you decide on a grade and go back and tweak points? Did you ignore the point totals entirely and just give a score? Did you assign totals, add them up, and just go with it? Did you add them up, decide you didn't like the final score, and go back and change things around until you did like it?

I am 100% guilty of having printed out a chapter test from whatever CD my textbook came with, given it to my kids, started scoring them, and thought, "Huh. I don't think that paper really deserved that score." ......and then done absolutely nothing about it. I enter the grade and move on.

Later, I'd think I was being "better" by reading into the answers and throwing them a few extra points when they needed it.

I think I know what she meant. I mean, she contradicted herself three times but I guess I know where she was going. I can see how she'd think that. Plus, I know she probably could have got it, she just maybe ran out of time or was distracted by Jacob sitting next to her tapping his pencil the whole time. Jacob was driving me crazy with that. Plus she always works hard so I know she gets it. Maybe she just had an off day. hmm......  B+
Both of these are wrong. My problem with the first is that how I assessed and what I wanted grades to mean didn't align. The second was brutal for me to face up to. I've let her not learn something and told her she's learned it. Why was it so unthinkable for me to just ask for a clarification? Or to tell her she didn't get it and have her try again? Argh. This is giving me an eye twitch just thinking about it.

There's no right answer to the survey. The important thing is that there's a right answer for you. When you know why it's the right answer, you can go back into your classroom and make sure that everything aligns with that answer.

1 comment:

  1. One of my takeaways is the need to make sure students know what you're grading on. Because it may be that both you and your fellow teacher are right, but pity the poor kid who hands in the test not knowing who's going to grade it. Or who gets the A in your class and the F in their class and doesn't know what they're doing differently.

    Couple of tangential anecdotes.

    My college had a writing portfolio requirement. At the end of sophomore year everyone had to turn in several papers they'd written to make sure they were writing at the college level. I forget whether everyone gets two readers or if you only get two if your first reader wants to assign a grade other than pass. (Honors or fail.) I do know that I had two readers. One suggested I get honors. One suggested I fail.

    A student I'm TAing this year was CERTAIN he would fail the first class in our sequence. He got a B. I saw him today for the first time after break and he said again how he didn't deserve it. And he probably wouldn't have given himself that grade based on a traditional mastery rubric. But the professor had said after a surprisingly difficult midterm, "No one is going to Fail." And meant it. We knew he had learned a lot. (And we also know that his purposes for taking this course are different from the average student's.) In this case it's especially tricky because there wasn't a strict rubric. I assigned numerical grades to the midterm and the final exam. But we didn't say what those numbers meant in terms of letter grades. Why shouldn't an A be 90+ and a B be 50+? (Grade inflation? I don't understand... It really is a different world here.)

    ReplyDelete