Showing posts with label mindfulSBG.

Thursday, June 7, 2012

Burden of Proof

Brian Frank has a post where he talks about standards-based grading and evidence.
In a grading system where you take away points, evidence of misunderstanding and lack of evidence for understanding are both punishable offenses.  Standards-based grading, however, focuses our attention to confirming evidence of understanding.
I've had some recent posts stating that Argument is one of the pillars of science education. Here's where we get back to alignment. As Brian points out, one of the tenets of SBG is that the burden of proof rests with the students. Again to quote Brian:
The student isn't punished for not labeling things the way you want them to; they simply can't be given credit for understanding things for which they have provided no evidence. Maybe they will show that evidence later by labeling forces the way you want; or maybe they will show you evidence of understanding in a different way.
If we believe that one of the fundamental goals of science education, and indeed all education, is to teach students how to argue, then our grading systems should align with that value.

I left a comment on Brian's blog with a link to this paper called Faculty Grading of Quantitative Problems: A Mismatch Between Values and Practice. This is by no means a rigorous academic paper but it has some points that are worth sharing.
If students are graded in a way that places the burden of proof on the instructor (as 47% of the earth science and chemistry faculty did), they will likely receive more points if they do not expose much of their reasoning and allow the instructor to instead project his/her understanding onto the solution. On the other hand, if they are graded in a way that places the burden of proof on the student to either demonstrate his/her understanding or produce a scientific argument, they will receive very few points unless they show their reasoning. Most instructors tell students that they want to see reasoning in problem solutions, however students quickly learn about an instructor’s real orientation towards grading by comparing their graded assignments with those of their classmates, or by comparing their own grades from one assignment to the next.
I love this idea of burden of proof. If we place the burden on the teacher, we have to interpret what the student means, and students are encouraged to leave out reasoning because showing it might cost them points. I'm reminded of Scott McCloud's concept of closure in comics: we end up filling in the blanks between panels.1

If we place the burden on the student, the answer is simple. Why do you need to label forces this way? Because I don't know whether you know it until you show me.









1: I'm talking reasoning and argument here. Don't even get me started on the massive equity issues students confront when we fill in the blanks.




______

I've also got another post up at ASCD Inservice. This one is based on a Robyn Jackson seminar and a small modification I'm making to how I teach compare and contrast. It's on increasing rigor, and to stay on topic here: students need to understand and negotiate what constitutes acceptable proof of understanding. That is, if all I do is give students pre-written tests, I've placed a ceiling on what my students understand about proof. (Full disclosure: I get paid for these posts.)

Friday, June 17, 2011

Flow Control

I know. It's supposed to be all about my students. I'm supposed to say something about helping them focus on learning versus work completion, or helping them learn to self-assess, or whatever. Yeah. That's all true. But even if it weren't true, even if standards-based grading were no better at that stuff than traditional grading, I'd still do it.

Why? Because I know of no better way to inform me about what I need to do next. I need to know what I can do to help get a kid from point A to point B. Getting 90% on a Chapter 14 quiz or a B+ on Worksheet 1.6 won't tell me that. I need to be able to tell a student, not that he's failing, but that while he gets how to calculate the average speed of an object, he's still struggling with graphing that motion and here's something that will help.

And I need to be able to do it quickly.

Pre-SBG, I'd have needed to open up a packet of work (assuming I still had it, or that he'd kept it), flip through each page, and then pray that some sort of recurring pattern jumped out at me. Even if by some miracle that worked, I'd have NEVER done it for every kid on my own. It's just too much work. I would wait until a student took the initiative to actually ask me what he or she was struggling with. I'm sure I justified it as "helping students take responsibility for their own learning." Because, you know, after a student has spent her whole life getting Fs in everything, my F is the magic one that bestows upon her the gift of knowing how to respond to failure. It's like the Triforce: now that she's collected all of those Fs, she can wish herself into being an A student.

So here's my advice for those of you working on standards-based grading over the summer: as you're setting things up, look at each piece and ask yourself, "Do I know how to respond? If I look at this, can I determine what to do next?"

When you're setting things up (or revising them) think of everything you're doing as a bunch of If-Then statements. On an assessment, if this happens, then this should happen. In my gradebook, if I see this, then I should do this. The strength of standards-based grading isn't that it gives you better information, it's that it gives you better direction.


Bonus Power User tip: Take a single question, a full quiz, your Do Now, whatever. Write out a bunch of If-Then statements. Depending on the type of assessment, they might look like "If you miss #3, then..." or "If you get a 2, then..." or "If you answered 8 m/s, then..." Give the assessment, correct it in class right away in whatever manner you prefer, and then put the If-Then statements up on the board.
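If you want to see the flow-control metaphor taken literally, here's a minimal sketch in Python. The quiz items, responses, and next steps are invented for illustration; the point is just that every result maps to a planned response.

```python
# A minimal sketch of the If-Then idea. The quiz items, responses, and next
# steps below are hypothetical examples, not an actual assessment.

# If a student produces this result, then this is what happens next.
next_steps = {
    "missed #3": "Redo the unit-conversion practice, then reassess on Friday.",
    "scored a 2": "Work through the guided example with me during lab time.",
    "answered 8 m/s": "You divided by the wrong time interval; try the follow-up problem.",
}

def what_next(result: str) -> str:
    """Look up the planned response for a given assessment result."""
    return next_steps.get(result, "Show me your reasoning and we'll decide together.")

print(what_next("answered 8 m/s"))
```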

Friday, March 11, 2011

One Test to Rule Them All

Nearly 60 years ago, an anesthesiologist named Dr. Virginia Apgar devised a simple test that all parents are familiar with. The Apgar test rates a newborn on five criteria, each scored 0, 1, or 2, to quickly determine the baby's health. A healthy newborn will generally score between 7 and 10.

The story of Dr. Apgar's test holds an important lesson.

On the face of it, the Apgar test is ludicrously simplistic. In theory, a baby could have no pulse but still score a perfectly healthy 8.
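For the arithmetic behind that claim: the five criteria (appearance, pulse, grimace, activity, respiration) each get a 0, 1, or 2, and the ratings are summed. A quick sketch, using a hypothetical newborn rather than any case from the book:

```python
# The Apgar arithmetic: five criteria, each rated 0-2, summed to a 0-10 total.
# The ratings below describe a hypothetical newborn.

ratings = {
    "appearance": 2,   # skin color
    "pulse": 0,        # heart rate -- absent in this hypothetical case
    "grimace": 2,      # reflex irritability
    "activity": 2,     # muscle tone
    "respiration": 2,  # breathing effort
}

score = sum(ratings.values())
print(score)                                           # 8
print("healthy" if score >= 7 else "needs attention")
```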

But the Apgar test is good enough. And more importantly, it's better than nothing, because nothing is what doctors used to have.

In the book Better, my man crush Atul Gawande states, "the score turned an intangible and impressionistic concept—the condition of new babies—into numbers that people could collect and compare (p. 187)."


Basically, doctors could now tinker and gather the results to see what worked. There's a lovely passage about how doctors are supposed to be "evidence-based," yet in obstetrics they just tried things out and looked to see if the results improved.

They had a number and if it went up, they knew they probably did something good. If it went down, then back to the drawing board. Before the Apgar test, 1 in 30 newborns died at birth. Now, it's 1 in 500 (p. 187).

but.....

Gawande also points out that the number of Cesareans has increased dramatically, in part because of that all-important Apgar score. He uses the phrase "the tyranny of the score" here, which I love. Doctors, being the Type A overachievers that they are, seek to maximize their score as much as possible, sometimes at the expense of other considerations. Again, quoting: "While we rate the newborn child's health, the mother's pain and blood loss and length of recovery seem to count for little. We have no score for how the mother does....(p. 198)" Being the good teacher (er, doctor) that he is, he goes on to suggest multiple measures.

So what do I take from all this? I used to spend hours creating and revising the perfect assessment. I'd stress about word choice. I'd wonder if I was giving too much away or not enough information. I'd try for that perfect balance of academic vocabulary and accessible language. I'd try to cram 60 questions into 50 minutes, or have them write a full lab report in complete silence.

Except that there aren't any perfect assessments. What's perfect for Student A is highly flawed for Student B. The perfect assessment does not exist. But there are certainly better and worse assessments. So like the Apgar test, I try to make my assessments good enough. Multiple "pretty good" assessments give a more complete picture than any single "great" one possibly can.1 And NEVER EVER let any one score dictate everything.






1: Falling in love with a single assessment, lesson, lab, demo, whatever... is one of the cardinal sins that very good teachers commit. I know I can become enamored with a lab, stop being critical of it, and stop working to improve it. I start to think it does more than it actually does. And yes, I try to keep my SBG love in check.


PS - I've gotten more out of Atul Gawande's books than out of any of the other "not in education" gurus we end up reading, like Dan Pink, Malcolm Gladwell, or Jim Collins. I keep meaning to blog about the chapter he wrote on cystic fibrosis care. Best chapter ever. Go to the library this weekend and read it. The chapter is titled The Bell Curve and it will burrow deep into your brain.

 

Monday, July 12, 2010

The Foundation of Standards-Based Grading

Two separate digital events collide:
  1. On twitter, Russ Goerend asked Shawn, Matt, and me (and any other takers) to try to define standards-based grading in one tweet.
  2. Kate Nowak drops this on us for Riley's Virtual Conference on Soft Skills.
and produce this:

Standards-based grading is built on trust.

Your students must trust you. The number one question I (and others) get is whether students will still do homework or other classwork if it's not worth points. I can answer with 100% certainty: yes. They'll do whatever you ask them to do, but only if they trust you. They're trusting that what you're giving them will help them reach their goal. It's not busy work. It's not assigned out of habit. It's meaningful and will help them get from A to B. They will do it because they believe it will help them learn. They must trust that you are helping them get there.

You must trust your students. Allow them to surprise you. Give them freedom. Allow them to fail but allow them to learn from those failures. If you don't trust your students, they will fail. If you believe they won't do it if you don't make it worth points, then they won't do it. Trust your students.

You must trust yourself. Deep in your heart, you've got to trust that what you're giving them will help them learn. Everything you do is to help them learn. If you don't believe that, they're not going to believe it either. You need to trust yourself because on the first day of school you're going to give a speech like this:
Hi. My name is Mr. Buell. You're used to being told what to do. You're used to getting something for doing, rather than for learning. You're used to being rewarded for compliance, rather than creativity. Get used to something different. I will make suggestions to help you learn. You may choose to take them, and in fact, I recommend that you do. But only you know who you truly are and how you learn best. And hopefully, by the end of the year, you will know yourself a little better.

It's scary. Points are a shield.  When you take away that shield all you're left with is the trust you have in yourself that you're doing what's right.

Go ahead and build your topics and design your assessments. Do all the manual work that needs to be done, but always remember that it's all built on trust. That work comes first and foremost. Start with a strong foundation and build something that lasts.

Trust each other. Trust yourself.

The last word comes from a series of tweets by @PersidaB that I've stitched together:
Before you can do SBG, I believe you need a transformation in the classroom. Where what you ask them to do becomes an opportunity to learn rather than another piece of paper to "complete". It's a shift in purpose and philosophy. And requires teacher and student training to shift thinking in purpose of why they're in the classroom.

Edit: Ok, now Frank gets the last word. Fantastic post by one of the SBG Borg: http://fnoschese.wordpress.com/2010/07/13/sbg_and_trust 

Tuesday, July 6, 2010

Picard, not Data

Riley and Matt have told you before that it's a bad idea to average. Ken O'Connor will tell you that mindless number crunching is one of the cardinal sins of good assessment practice.1

But why?

Here's a story: I'm driving in my car. I check my odometer and I've just gone 25 miles. I check it again and I've now gone 50 miles total. I check again and I'm 75 miles away. I stop when I'm 100 miles away. If I take an average of each time I checked my mileage, I get 62.5 miles.

Totally unrelated story: I'm taking a test. The first time I get a 25%. I take it again and I get 50%. The next time I get 75%. Finally, I get 100%. If I take an average of each time I took that test, I get 62.5%.

Learning is a journey. You cannot average different stages of the trip in any meaningful way.  Not only is it an inappropriate use of averaging but it sends the wrong message. It tells students that the 100% they got the last time was nothing more than experimental error. It dismisses the growth that has happened.
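If it helps to see the arithmetic, here's the test-score story as a few lines of Python. The four scores are the ones from the story above; the last line is the alternative I'm arguing for.

```python
# Four attempts at the same standard, in the order they happened.
from statistics import mean, median

scores = [25, 50, 75, 100]

print(mean(scores))    # 62.5 -- averages away the growth
print(median(scores))  # 62.5 -- one of O'Connor's suggested sanity checks (footnote 1)
print(scores[-1])      # 100  -- the most recent evidence of what the student knows now
```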

I disapprove of number crunching for grades in general, but I understand that some people are required to do it.

So when can you average?


Going back to my car trip: I stop my car and get out. I look at the odometer. I take a GPS reading. I check the road signs. I check my map. I've now got four different measurements for how far I am at this exact moment.

Multiple measures for standards-based grading are good. It is in fact a requirement that you take multiple and varied measurements in any good assessment system. Ideally these would all occur at the same time, but realistically they'd be within a few days of each other.

In this case, it is acceptable to average your results as long as you don't do it mindlessly. Not all assessments are created equal. I wouldn't even think of averaging my GPS results with the ones I got by using a ruler and a map.

If you have to average multiple assessments, they should meet two criteria:

  1. The assessments all need to be quality measures of the learning goal. A lab called "Measuring Motion" isn't a valid assessment of that learning goal just because it's got "motion" in the name. Check every assessment against your learning goals. Make sure you're assessing what you think you're assessing.
  2. The assessments all need to measure the same point in the learning progression. Usually this means temporal proximity. Don't average two assessments that occurred three weeks apart.
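If you do end up averaging a cluster of measures that meet those two criteria, a weighted average at least keeps you from treating them as interchangeable. A minimal sketch, with invented scores and weights:

```python
# Non-mindless averaging: several measures of the same standard, all taken
# within a few days of each other, weighted by how much I trust each one.
# The assessments, scores (0-4 scale), and weights are invented examples.

evidence = [
    (3.0, 2.0),   # quiz question that targets the standard directly
    (3.5, 1.5),   # lab conclusion from the same week
    (2.5, 0.5),   # quick exit ticket -- a rough measure, so a low weight
]

weighted = sum(score * w for score, w in evidence) / sum(w for _, w in evidence)
print(round(weighted, 2))   # 3.12 -- a starting point for judgment, not the grade itself
```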
Here's where averaging gets really tricky:

The criteria must be evaluated on a per-student basis.

An assessment that is a quality measure for one student may not be for another, and the time it takes for an assessment to become obsolete also varies by student. This relates directly to the statement by Chris Ludwig that I quoted in my last post.

Your grades come from weighing the total body of evidence you've gathered against the standards you've set and communicated. Use averaging if it will help you make a better decision but don't let it make the decision for you.

To quote @johntspencer: "A simple glimpse at Star Trek reminds me that Data is meant to inform rather than drive." [source]


Data is useful. Data is good for advice. But Picard is the captain. Be the captain. Don't mindlessly average.




1: O'Connor says that if you must use mean, also take a look at median and mode to see if the mean is giving you a true picture of mastery.

Data image from: http://upload.wikimedia.org/wikipedia/en/0/09/DataTNG.jpg
Picard image from: http://upload.wikimedia.org/wikipedia/en/6/6d/JeanLucPicard.jpg

Post publishing note: This was probably the first post all summer where I didn't link to Shawn's blog. I publish this, check my Reader.....and he also has a picture of Data! I swear, we're not the same person. He's much cooler than me. Literally. He curls in his backyard.

The Whole Darn Thing

This comes via @chrisludwig in the comment section of his own blog post.
....the more I read lately, the more I’m convinced that what we need is not standardized, objective grading systems but more subjective grading systems, those that allow the teacher to personalize assessment for each student and students to have a role in defining the assessments. This should be done, though, in the framework of high expectations and defined learning targets. I’m still new enough at this to be idealistic, but I think SBG is the way to allow this to happen.
I don't want to elaborate too much on this because I'm trying to peer pressure him into spinning it off into a separate post. All I want to say is that his comment captured everything I've tried to communicate in 50+ posts, but he did it in three sentences.

Go visit his blog. Follow him on twitter.

Friday, July 2, 2010

It's not the end, it's the beginning

Here's the gist: Your assessments are your starting point, not the finish line.

In the style of Gladwell or Pink, I'm going to spend the next 1000 words on something I just summarized in one sentence.

Pretest:
  1. There are 3 sheep, 4 goats, and 7 pigs on a boat. What color is the captain's hat?
  2. You're going on a field trip. Each bus holds 10 people. You've got 31 students going and 4 chaperones. How many buses do you need?
You've probably seen these questions before if you've read about mindless learning. A mindless learner might answer "14" to question one and "3.5 buses" to question two. These answers are obviously wrong, but notice that the problem isn't a lack of basic math skills. Bad implementations of standards-based grading stem from the same problem: you might have all the pieces in place, but if you don't really get it, you're going to fail without realizing why.
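For what it's worth, the mindful version of question two is a one-line ceiling calculation rather than plain division:

```python
# Question 2 from the pretest: 31 students + 4 chaperones, 10 people per bus.
import math

people, per_bus = 31 + 4, 10
print(people / per_bus)             # 3.5 -- the mindless answer
print(math.ceil(people / per_bus))  # 4   -- the buses you actually need
```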

The First Why: Why am I assessing so frequently?

If you think of a test as the thing you use to decide what to do next, standards-based grading makes a lot more sense. If you're still thinking of tests as purely evaluative, you're going to feel like all you do is give your kids tests. I get that a lot from my teachers: "When will I have time to teach stuff if I'm testing all the time?"

Yes, there are probably 15-20 minutes per week of "stop what you're doing and answer these questions." I get that time back, and more, by using it to set the course for the rest of the period, the next day, or the rest of the week.

The old me would introduce something new. We'd work on it for a few days. Then I'd introduce the next new thing. Then the next new thing. Then I'd have a test on the last few weeks because, well, it had been a long time since my kids had a test. Then what did I do? I entered grades. I'd be surprised by a few (good and bad) and then....move on. If some arbitrary number of students didn't pass, I'd spend a day or two in front of the entire class "reviewing." Seriously. That's how I taught. I need to start drafting my own letter of apology.

Now? Sometimes I'll start with the new thing, sometimes I'll pretest.1 We get some feedback. I set up the next few days based on the results of the test. The non-intrusive assessment still takes place. I walk around. Give some feedback. Get some feedback. Adjust instruction again. Have a learning lab day. Then re-assess to see where we're going next.

If you look at the paragraph above, you'll notice the word feedback occurs three times while grading never enters the picture. Focus on feedback. Whenever possible, leave feedback but not grades or scores. I do spend time really breaking down certain assessments, and I have been known to go all out with testing data. Most of the time, though, I'm simply looking to get and to give feedback. I get a lab report and take a look. I'll jot down a couple of specific pieces of feedback, including a next step for the student. The student can use my next step or choose their own. We get a chance to actually act on the feedback.

Side note: One of the hidden benefits of standards-based grading is how much less time you'll spend "grading" papers. You're just looking for feedback. It's not this accounting game of going through and marking and tallying. You're also going to find yourself leaning really heavily on non-intrusive or only mildly intrusive forms of assessment. You'll ask questions as they're doing labs or working problems. You'll circle the room. You'll ask a question on a slide and choose your next slide based on the response. If you're worried about the paperwork that comes with standards-based grading, it's because you haven't changed your mindset yet.2

Teachers tend to worry about all sorts of technical details when it comes to standards-based grading. How will I input it into my gradebook? What should tests look like? How do I design my scales? That's important. But I'm going to freak you out a little here. That's the easy stuff.

The scariest part for me, BY FAR, was realizing that I might not know what I'm doing on Tuesday based on what happened on Monday. Take into consideration that I'm not an organized person. I don't write out my daily lesson plans and, despite being "required" to before I was tenured,  I've never actually submitted weekly lesson plans to my principal.  

So why am I assessing so frequently? Because assessments are the tools I use to figure out how to move forward. The format is less important than what you do with them. You will like them. Your kids will like them. Ok, your kids will at the very least see the purpose of them. But if your assessments just go into this mystical gradebook and nothing ever happens with them, you've missed the point of standards-based grading. You're going through the motions, and you're the kid who thinks 3.5 buses is a valid answer.

More mindful standards-based grading to come. Leave a comment if you'd like me to address something in the future. Here's a sentence starter, "I don't get why....."

Another last minute add! Twitter saves the day again. By @misscalcul8: Scroll to the bottom of this post for the words of wisdom from @PersidaB. Well said, Persida.




1: I plan on pretesting more this year. I didn't before because everything was new to my kids and I felt it was just discouraging them. I've started a common assessment system this year, so we're going to pretest each unit, post-test, then level the classes for a week. More on that when I actually, you know, figure it out.
2: I have now broken the record for "most times any blogger has linked to the same two other bloggers in consecutive posts."