Sunday, December 16, 2012

Guided Reciprocal Peer Questioning

One of my focuses for the year has been to develop better scientific discourse among students. My teacher-student interactions have progressed beyond Initiation-Response-Evaluation, but I don't think my students have high-quality conversations with each other, so I've been working on structuring more of them.1

There are two areas I've been concentrating on - questioning and rebuttals. The first method I've been using is called Guided Reciprocal Peer Questioning from Alison King. I've tried it a few times and have tinkered a bit as I've gone. The basic routine is:

  1. Learn about something. 
  2. Provide generic question frames.
  3. Students generate questions individually.
  4. Students discuss the questions in their groups. 
  5. Share out.

I really like the question frames. This example is from a different article and lists the type of thinking each question is designed to promote.


For my last unit (about construction in earthquake zones) my students got a list of fifteen of the question frames. I gave them a chance to read the frames over first and ask about any that were unclear. They then used the frames to write two questions each, drawing on two different frames. They took turns asking and answering the questions in their groups. I had them record the best answer, which I defined as the consensus of the group rather than the best individual answer.

I didn't want to overly structure the discussion by forcing turn taking, but I also didn't want "everyone turn and wait for the 4.0 student to answer." This time I had students keep track using tally marks at the bottom of their sheet whenever a group member made a "positive contribution." It didn't have to be correct, but it had to be something that moved the conversation forward. I tried to emphasize that the goal was equity and that the questioner had a responsibility to directly include another student in the conversation if he/she noticed that student had contributed little. I had mixed results. I noticed a few of the quieter kids being brought into the conversation. I also noticed my achievers racing to "win" the discussion points. I'll continue to play with this part.

I picked a random group member to share out a Deeper Understanding and a Further Investigation. As I get better at this, I hope to turn the Further Investigation questions into individual research projects.

Overall I like the process. There were some good breakthroughs in the discussions and I like the practice of having students generate questions. In the future I might try requiring group members to all select different frames. From what I've read, the original method was intended to be used after every lecture so it became part of the routine. I haven't made it a regular part of the routine but I've been happy with their progress. The best thing I've seen so far is that in the beginning, my students would write questions that they could answer themselves. They treated it like they were testing their group members. I've seen a gradual shift to actual discussion where students are asking things they're genuinely confused or curious about.





1: I do think this is largely part of the culture of a school. Students are usually taught to look to the teacher for answers.

Sunday, September 9, 2012

Three Weeks In

It's been so long since I've posted I now qualify for the blogging initiation. I'm writing just to get in the habit again so lower your expectations appropriately. It's a new school year and there are many changes.


1. I'm teaching 6th grade for the first time. I have three classes of 8th and three of 6th. Slightly fewer kids than last year (from 194 to 186). They are tiny and fun, but earth science is still a huge struggle for me. I'm constantly banging my head figuring out how to actually run experiments in class and not just do "build a model" activities. I have that new-teacher shine again where I can't think ahead more than a few days in advance, so even if I can figure something out, I likely don't have the materials available. The previous teachers, while very capable, were more traditional "hands-on" types. All you need to know is I have an entire shelf full of Model Magic.

2. SBAR is both dead and alive. I have a new principal and AP and didn't get to talk to them before school started. Like many of you out there, I hybridized what I could. I've got four categories now: Content Knowledge (50%), Inquiry (23%), Argument (20%), and Pillars (7%). Content Knowledge is described as my "Show me what you know" category. For most kids this is quizzes. This is basically standards-based. Inquiry is labs and lab design. This is assignment-based. On the plus side, I can still base the grade on inquiry standards, but it won't be designed as a few basic standards the students improve on as the year goes on. Argument is mostly CER and ADI types of things. It's the same as Inquiry in that it will be assignment-based but I can grade on the standard.1 Pillars is the school non-academic/character building/thing that goes on a poster on all of our doors category, and it's as tiny a percentage as I felt I could reasonably make it. I don't like it myself, but I don't have a problem with people including non-academic categories in their grading (as long as it's explicit). I just spend zero time teaching those kinds of things beyond lip service and what's required, and I don't feel it's fair to grade students on anything I'm not explicitly teaching in class. The good news about SBAR is that the high schools we feed into have switched over, so I think it's only a matter of time before it trickles down to us.
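For concreteness, the four category weights combine into an overall grade as a simple weighted average. A minimal sketch, with made-up category scores (only the weights come from my gradebook):

```python
# Weights from the categories above; the scores are hypothetical, on a 0-100 scale.
weights = {"Content Knowledge": 0.50, "Inquiry": 0.23, "Argument": 0.20, "Pillars": 0.07}
scores = {"Content Knowledge": 85, "Inquiry": 70, "Argument": 90, "Pillars": 100}

# Overall grade = sum of (weight x category score).
overall = sum(weights[c] * scores[c] for c in weights)
print(round(overall, 1))  # → 83.6
```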


3. Shout out to prediction graphs. I've been pretty religious about getting students to make model-based predictions before doing an experiment, but mainly in the form of sentence frames. My early introductions are spoon-fed: "If the production model is correct, then the mass of the alka seltzer and water will increase/decrease/stay the same." I now have students include prediction graphs on their whiteboards in the graph section, and I can't even believe the difference. Visually matching the prediction graph to the graph of the actual result is so much more powerful. I've seen far fewer instances of students ignoring their evidence in favor of their own preconceptions.






1: I don't know if the distinction made sense since it's almost midnight. Think of them as exact opposites. With standards-based I have Standard 1 and apply Assignments 1, 2, 3, 4 to it. With assignment-based I have Assignment 1 and apply Standards 1, 2, 3, 4 to that assignment. Assignment-based is what Pearson seems to think standards-based is, because my PowerSchool says it's "standards-based" but it definitely is not. If anyone from Pearson is reading this, I should be creating standards and adding assignments to them, not the other way around.
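That inversion can be sketched as two tiny data models (the names are hypothetical, not PowerSchool's): standards-based hangs assignments off a standard, while assignment-based hangs standards off an assignment. Flipping one mapping gives you the other.

```python
# Standards-based: start from a standard and attach the assignments
# that serve as evidence for it.
standards_based = {
    "Standard 1": ["Assignment 1", "Assignment 2", "Assignment 3", "Assignment 4"],
}

# Assignment-based: start from an assignment and attach the standards
# it gets scored against.
assignment_based = {
    "Assignment 1": ["Standard 1", "Standard 2", "Standard 3", "Standard 4"],
}

def invert(mapping):
    """Flip a one-to-many mapping, turning values into keys."""
    out = {}
    for key, values in mapping.items():
        for v in values:
            out.setdefault(v, []).append(key)
    return out

# Inverting the standards-based view yields an assignment-based view.
print(invert(standards_based))
```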

Wednesday, July 18, 2012

Breadcrumbs

Question: When a student encounters a novel problem, whose voice does she hear in her head?




Possible sources of discussion: This post by Grace and comment by Brian. This teaching video and article sent to me by @mpershan. Google giving my district two million dollars for Explicit Direct Instruction. Chapter 4 in the book Jenny and I are reading. Khan Academy.

Tuesday, July 10, 2012

More to Share

1. I've got two posts for my paid gig at ASCD up:


The first is about defining specific criteria for what argument in the class would look like:

Defining Your Own Best Practices

I liked this session more than I thought I would. Pete Hall had a good mindset about the term "best practices" as something informed by research but defined at the local level. I also liked his thoughts about defining them in "meticulous detail." I don't know how many staff meetings I've sat through where we talk about "formative assessment" and everyone has a different idea of what that looks like.


The second is a routine for using metaphors in the classroom:

Linking Prior Knowledge with New Content with Metaphors

I like Rick Wormeli's books. They're worth reading. However I found this presentation to be too broad and I didn't like his presentation style (think genie in Aladdin). There were a lot of good ideas but I would definitely recommend his books over going to see him live.



2. Jenny Orr, who blogs at Elementary, My Dear, Or Far From It, is one of my very favorite elementary teacher bloggers. We're both reading the Peter H. Johnston books this summer. We're going to read and discuss the books on her blog. Please please please join us. The first book is Choice Words. We're doing the first two chapters this Saturday and then three chapters each week until we're done.

We're going to read Opening Minds after that. So again, join us! Both books are quick reads and well worth your Amazon gift card dollars.




3. Math Mistakes is a nice project from @mpershan. From the site:
Teachers need to be able to quickly look at mathematical work and identify the assumptions behind the work, and what actions to take in response to the work. That's hard. But practice can help us get better at this.
I love this idea. Go contribute.

(Yet another update: The very wonderful Kelly O'Shea has started Physics Mistakes.)



4. Math Munch is a weekly collection of fun math related things from around the web.



5. Finally, just to remind you that I actually do teach science, the sublime XKCD has started What If?, which answers such wonderful questions as, "What would happen if you hit a baseball pitched at 90% the speed of light?"

[updated]

6. I forgot to include the whole reason I started this post. This Is What A Scientist Looks Like is "a project developed to challenge the stereotypical perception of a scientist." This is the beauty of living in a connected world.

[updated again]

7. #Made4Math is a blog meme going around where teachers make different stuff to use in their classes, like storage containers out of Pringles cans. It's the exact opposite of anything I'd ever do, but I'm impressed and maybe a little jealous.

Monday, June 18, 2012

Life in the Gray

Bowman asked the blogosphere to write letters to new teachers. Sophie Germain wrote one already. (Edit: Here's the collection) I wrote a post last year with practical tips. Here's my advice.


Dear Noobs,

This is what you need to know:
Despite what teacher movies, books you've read, your credential program, your master teacher, your new principal, your BTSA coach, and the blogosphere would have you believe, there is no black and white in teaching. Teaching is one huge gray area.
I had to bold that. We make it seem like it's a noble profession full of clear choices and goals. You want every kid to succeed and feel safe. You have high expectations. You set big goals. You put your students first. NCLB bad. Diane Ravitch good. Worksheets and grades and rewards and punishments are tools of the lazy and the incompetent. You will never, ever give up on anyone.

That's all bullshit.

Look. I know. You're brand new and you just watched Waiting for Superman or Race to Nowhere or read The Schools Our Children Deserve or watched Sal Khan's TED talk and you are PUMPED. You are going to be an amazing teacher. Your kids will gasp with joy and be lost in wonder and LEARN LEARN LEARN. More importantly, you're not going to be that teacher. The one next door that goes home at 3 and gives out the same worksheets as he did 10 years ago.

Sorry. I know this won't mean much to you right now, but it doesn't work that way. In teaching, there is no clear path. There is no choice that is always right or always wrong.

There will be a time when your students are hurt by your high expectations and your big goals. Especially when you're new. You just won't have developed the skills yet to help your students meet all of your goals. It's true. There will be a time when you have actually damaged a kid because you set a high goal for them that you just weren't good enough to help them reach.

There will be a time when you can't keep your kids safe. You hear about a fight that's going to happen at lunch. You should report it right? What if a lunch time fight is something that can be managed? What if it happens, gets broken up, and everything is fine after? What if avoiding a lunch time fight means they'll now fight after school? Only this time they will bring others. And weapons. And nobody is around to stop the fight when one kid is on the ground and ten others are kicking him.

There will be many, many times when you shouldn't put your students first. Go home. Go to sleep. Get a massage. Take a day off. Spend time with your family. Play a video game. Ask that teacher next door for a worksheet. Often, what's good for you is good for them. You should do what's good for you. Sometimes what's good for you isn't good for them. Sometimes you should still do it.

There will be a time when that teacher next door has the perfect piece of advice for your current problem but you blow him off because he's that teacher next door.

There will be a time when embarrassing a kid in front of his peers is the best thing you could have ever done for him.

There will be a time when you just. don't. care. about a topic you're teaching.

There will be a time when you use stickers and candy and points and extra credit and it will work wonderfully.

There will be a time when you need to kick a kid out. You've got 33 others. Don't make them suffer for the one.

There will be a time when you give up on a kid. You hate it every time but you've got 193 other students that haven't given up and you can only do so much.

There will be a time when you have a student that you love like your own but you need to recommend to the school board that this student be expelled. Because no matter what he means to you, he is a danger to the rest of the students.

There will be a time when you look a student right in the eye and lie to her.

You can't worry about always making the right choice or always making the wrong choice. At best we can deal in probabilities. Often what we think are our best and worst choices were the result of nothing more than chance. 


Sometimes you make the choice that you can live with. Sometimes you make the choice that you can't. 


Teaching is a human endeavor. It's messy and complicated and the best job in the world. There is no black and white in teaching. Only gray.

Thursday, June 7, 2012

Burden of Proof

Brian Frank has a post where he talks about standards-based grading and evidence.
In a grading system where you take away points, evidence of misunderstanding and lack of evidence for understanding are both punishable offenses.  Standards-based grading, however, focuses our attention to confirming evidence of understanding.
I've had some recent posts stating that Argument is one of the pillars of science education. Here's where we get back to alignment. As Brian points out, one of the tenets of SBG is that the burden of proof rests with the students. Again to quote Brian:
The student isn't punished for not labeling things the way you want them to; they simply can't be given credit for understanding things for which they have provided no evidence. Maybe they will show that evidence later by labeling forces the way you want; or maybe they will show you evidence of understanding in a different way.
If we believe that one of the fundamental goals of science education, and indeed all education, is to teach students how to argue, then your grading system should align with that value.

I left a comment on Brian's blog with a link to this paper called Faculty Grading of Quantitative Problems: A Mismatch Between Values and Practice. This is by no means a rigorous academic paper but it has some points that are worth sharing.
If students are graded in a way that places the burden of proof on the instructor (as 47% of the earth science and chemistry faculty did), they will likely receive more points if they do not expose much of their reasoning and allow the instructor to instead project his/her understanding onto the solution. On the other hand, if they are graded in a way that places the burden of proof on the student to either demonstrate his/her understanding or produce a scientific argument, they will receive very few points unless they show their reasoning. Most instructors tell students that they want to see reasoning in problem solutions, however students quickly learn about an instructor’s real orientation towards grading by comparing their graded assignments with those of their classmates, or by comparing their own grades from one assignment to the next.
I love this idea of burden of proof. If we place the burden on the teacher, we need to interpret what the student means, and students are encouraged to leave out reasoning because showing it might cost them points. I'm reminded of Scott McCloud's concept of closure in comics. We end up filling in the blanks between panels.1

If we place the burden on the student, the answer is simple. Why do you need to label forces in this way? I don't know if you know it until you show me.









1: I'm talking reasoning and argument here. Don't even get me started on the massive equity issues students confront when we fill in the blanks.




______

I've also got another post up at ASCD Inservice. This one is based on a Robyn Jackson seminar and is a small modification I'm making to teaching compare and contrast. It's on increasing rigor, and if I'm going to stay on topic, I'd say students will need to understand and negotiate what constitutes acceptable proof of understanding. That is, if all I do is give students pre-written tests, I've placed a ceiling on what my students understand of proof. (Full disclosure: I get paid for these posts.)

Thursday, May 31, 2012

Sharing is Caring

Three things to share:

1. Twitter Math Camp is July 19-22 in St. Louis. It's exactly what it sounds like. Some math teachers on twitter decided to get together, work on some math problems, and teach each other all sorts of stuff. It's free, small, and low key.  Think edcamp rather than NCTM.

2. Math52 is a Kickstarter from Mathalicious. If you pledge $52 you get an entire year's membership to Mathalicious.

3. Frank Noschese has a TEDxNYED talk up called Learning Science by Doing Science.


One request:
1. My schedule for next year is half 8th grade physical science (what I have always taught) and half 6th grade (ack!) earth science (double ack!).


The California standards for sixth grade (page 27) are all earth and no space. I'm not too worried about the day to day (unless of course you have a killer lesson/unit you're dying to share) but more about the approach. I'm not sure how to lay this whole thing out so it flows together. My natural inclination is to build everything around the concepts of systems and cycles but really I don't know. In the pre-blogotwittersphere era I would have needed to teach the whole year to figure out how to fit everything together. I'm hoping to shortcut that process using all of you. If it helps, this is a non-tested subject so I have much more freedom to emphasize/de-emphasize certain parts of the curriculum.


  • What do I build the course around? What do I keep coming back to? 
  • I'd appreciate any recommendations: resources, curricula you like, textbooks (gasp!) that do it right, etc. Anything and everything is welcome. 
  • If you teach this subject/grade level, what are the sticking points? Big misconceptions? 
  • If you teach HS earth, what is something, content-wise, that you wish students had a better understanding of?
Thanks for the help. 



Wednesday, May 23, 2012

The Purposeful Classroom

I'm writing a series about the different sessions at the ASCD virtual conference. My notes on last year's conference are here.

Full disclosure: ASCD is paying me for the posts but I paid for the conference myself. I'm coming out ahead in this transaction but the lifetime flow of money is still in their favor.

The first post is on Doug Fisher and Nancy Frey's Purposeful Classroom. Fun times. If you're a regular reader you will appreciate how difficult it was for me to keep my word count down.

Additional notes:

Fisher says they made a shift from "objectives" to "purpose" in order to include the idea of relevance. He says that relevance can fall in three areas: learning that has use outside of the classroom walls, learning that gives a student opportunities to learn about oneself as a learner, and learning that is necessary for citizens in a democratic society. It's an interesting lens. I haven't thought about relevance enough to decide what I'd add, subtract, or modify from that list.

Fisher also commented on the Gradual Release of Responsibility model. He said the number one clarification he'd like to make is that GRR isn't meant to be a step-by-step recipe. You don't have to do it in exact order; the phases of GRR just all need to occur during a lesson. He doesn't make it clear in the talk, but a lesson in this case doesn't necessarily mean one class period. This is an incredibly important point and was lost on the people who led the GRR training the first time I heard about it.

Another important point was the difference between independent practice at home and in the classroom. He said that we give homework too early in the learning cycle, before students are ready to apply what they're learning on their own.

They also shared two rubrics I thought were interesting starting points. A modeling and purpose rubric and a rubric showing indicators of success at establishing purpose.



Last: I've heard criticism of writers like Fisher and Frey for not being "transformational" enough. I get that. They're not. I classify them as "Better Now" types. They want to help teachers to improve what we're doing right now as opposed to razing the whole school system and starting over. Both types of writers have their place but I spend much more time reading and learning from the Better Nows. It's fun to think about what my ideal school would look like but I'm far more concerned with helping the kids that are in front of me each day.

Thursday, May 17, 2012

Argumentation part 2



continued from part 1

Three suggestions:


1. I stole whiteboard round robins from Argument-Driven Inquiry (there are some good resources here. Try the first link under Papers for an overview). Instead of doing whole class discussion around group results, I have kids set up their whiteboards. One person stays behind while the rest of the group rotates from table to table to hear what each group found out. The roaming kids have a paper where they record some of the basic details from the other groups and it asks them to evaluate and respond to what they hear. Sorry. I wanted to post an example but I seriously can't find a single one. I don't know why. I know in the past I've asked them to comment on whether this confirms or contradicts their own findings, the quality of the experimental design, and suggestions for what might make their argument more convincing. We struggle with evaluating the quality of the argument. It's an ongoing thing.

After returning to their tables they are given the chance to revise their claim or reasoning and if I'm really on it that week, they can design a follow-up. The ADI folks suggest a presider to summarize the findings of the class. I've never tried that.

In hindsight, I think it would be kind of awesome to have students cite other groups in their write-ups just like in a scientific paper. "According to Lopez, Lee, and Silva..." Or maybe just have them cite an endnote. Or include a section in their CER papers for rebuttals where they specifically attempt to rebut a different group. Hmmm. Something for me to try next year.

2. ADI also has a nice double-blind peer review suggestion. I do standard peer review but haven't added the double-blind aspect. I think that could be interesting. There's a long template here and on page 5 here (both links from here). I think those are a bit unwieldy for MS or even HS kids. For their chemistry final I had my students attempt to explain the candle and flask demo. For peer review we used this:

I tailored it based on what my students specifically have trouble with. It was helpful but not amazing. Next year I'd add something more specifically aimed at persuasiveness (see? We're good with inquiry and content knowledge. Bad with argument). Also, like any peer review, if everyone was weak at something we didn't add a lot of value by peer reviewing. We have a lot of trouble with logical consistency. For example, saying that the heat made the air spread but then later arguing that the water was cold so that's why it spread into the flask. I think having students flow chart their arguments might help but this is something I struggle with. Any suggestions would be great.

3. Conditional language is critical. Once I, as the teacher and scientific authority, tell the students that matter is conserved in a chemical reaction, their brains shut down. It sounds dumb and too New Age-y for me, and I don't have any hard evidence to back this up. All I know is that the longer I stick with "might be" and "could" or even "probably," the more I see my students still working hard to convince me that they're correct. (The downside is I had a frustrated student blow up at me this year because we "never settle anything" and so he's "not sure what's right and what's wrong." There's a balance I'm still working to find.)





--------------
Brian left a comment pointing to another paper arguing that we need to make a clear distinction between argument and explanation. I agree to the point that we need to understand that giving opportunities to explain is not the same as giving opportunities to argue. I'm less sure how important it is to teach students the semantic difference. To clarify, I'm sure it's important; I'm just not sure what it's more important than and what I would then cut out.

Claim Evidence Reasoning - Argumentation

Theoretical in part 1. Practical in part 2.


In Making sense of argumentation and explanation by Berland and Reiser, the authors argue that scientific explanations have three purposes: 1. sensemaking, 2. articulating, 3. persuading.

They summarize these nicely as constructing arguments, presenting arguments, and debating arguments.

I like these. A lot. The authors don't treat these three purposes as separate domains but argue that each of these serve to strengthen the other.

Other than being really interesting, why is this relevant? From page 31:
We suggest that viewing student work in terms of these three instructional goals can clarify students' successes and challenges in constructing and defending scientific explanations and consequently inform the design of supports for this practice... we suggest that each aspect of the practice may require different types of support for students.
This has very broad implications about everything from assessment to scaffolds to how to structure the entire class. The authors studied the CER framework and decided it was good for purpose 1 and 2 but not so much for 3. My personal experience backs that up. From the abstract:
Through this analysis, we find that students consistently use evidence to make sense of phenomena and articulate those understandings, but they do not consistently attend to the third goal of persuading others of their understandings. Examining the third goal more closely reveals that persuading others of an understanding requires social interactions that are often inhibited by traditional classroom interactions.

If someone were to ask me what the three-legged stool of science education is, as of May 2012 at least, I'd say content knowledge, inquiry, and argument. You can't have a complete science education without all three.

In science education, argument is our weakest area. I know for me, it wasn't even something I thought about until last year.

This is my way of qualifying any suggestions I have. I'm still new at this. I don't have a lot to offer.

What I can tell you is that if you want students to engage in argumentation, you need to give them something to argue about. Sounds obvious, right? But if I explain a topic, then we do a confirmation lab, and then I expect students to engage in argument about that topic, I'm setting myself up for all sorts of disappointment. I haven't given them anything to argue about. I've just given them an opportunity to show me how well they've memorized what I've said.


I've got three other suggestions which I'll break into the next post.

Wednesday, April 11, 2012

Claim Evidence Reasoning - More on Reasoning

Mylene asked about helping with the reasoning part of Claim, Evidence, Reasoning. (Mylene - get on twitter already. We could just talk about it.) Here's the relevant snip:
One thing I'm really struggling with is the concept of "logic" or a conclusion following from its premises. It's hard for my students to understand what I mean by this and it's hard for me to explain in other terms. So far, it appears that their definition of "logical" is something along the lines of "familiar" or "what I was expecting." Any suggestions? How do you handle reasoning that is preposterous or that makes leaps of faith?
This is definitely the hard part. I don't have any magic bullets, but I can tell you where I am right now.


1. Faulty reasoning is most often due to a lack of content knowledge. At the beginning and middle of a unit, this is expected. Honing reasoning as we go is one of our primary goals.

If things aren't going well, though, my best advice would be to narrow the question. Broader questions are usually more complex. I started conservation of mass with, "What happens to atoms in a chemical reaction?" (Actually, the question was about why ice melts when you heat it but paper burns.) Kids were flying all over the place. They generated their own claims and designed their own experiments, and it was a disaster. There was no real way to differentiate most of their claims through experiment (atoms are being fused together, atoms are exploding, atoms turn into heat). I rebooted with whether atoms are destroyed or not in a chemical reaction, and it worked much better. We all did a similar experiment, and students were able to make a well-reasoned argument that atoms weren't destroyed.

The counterpoint is that I start other topics, like kinetic molecular theory, very broadly. I can't say for sure why some topics need to be narrower but it is some combination of how much background knowledge students come in with, how well our classroom experimentation can differentiate between competing claims, and how much time I'm willing to devote to testing competing ideas.

2. There are mechanical issues. For MS kids, sentence frames and starters go a long way. Be sure to fade them as you go. Another teacher at my school has a lot of success with sentence starters in group discourse. He posts them at his tables. Instead of CER, some teachers like C-ER-ER-ER where you give a reason after each piece of evidence.

3. Ask students to explain ahead of time what different results would mean with regard to the claim. Before they could start the experiment to test whether atoms are destroyed or not, my students needed to explain to me that if the bottle got lighter, it meant atoms were being destroyed. If it weighed the same, they weren't. If it weighed more, something else was going on that we couldn't explain (this is an important and often overlooked step). In essence, I was locking them into a line of reasoning. Is that preventing them from doing any thinking? I say no, because I'm still asking my students to reason; it's just the timing that changes. If I do it after, students more often construct weird explanations for the results and engage in all kinds of magical thinking and confirmation bias. This is assuming the purpose of a lab is to test a specific claim. Sometimes a lab is just to get ideas rolling, in which case we do our reasoning after. [Edit: Have your students sketch a "prediction graph" on their whiteboards ahead of time.]
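The pre-commitment step above amounts to agreeing on an outcome-to-interpretation table before anyone touches the scale. A sketch of the conservation-of-mass example (the wording of the interpretations is hypothetical):

```python
# Interpretations locked in BEFORE the experiment, keyed by the possible
# outcomes of weighing the sealed bottle after the reaction.
interpretations = {
    "lighter": "Atoms are being destroyed.",
    "same": "Atoms are not being destroyed.",
    "heavier": "Something else is going on that we can't yet explain.",
}

observed = "same"  # hypothetical result from the scale
print(interpretations[observed])  # → Atoms are not being destroyed.
```

The point of writing the table first is that the result picks the interpretation, not the other way around.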

4. Distinguish between reasoning and introducing a new claim. It is hard for students to see when their reasoning is backing up a claim and when they're making a brand new assertion. I don't have a good method for this other than continually asking them to go back to what their claim actually predicts. It's also important to help them with the idea that a single experiment doesn't have to explain everything. We burn paper and weigh it and it weighs the same. That only tells us that the atoms aren't being destroyed. That means we often need to use multiple pieces of evidence to converge on a single, more complex, explanation.

5. Differentiate between logical/scientific reasoning and reasoning from evidence. I think this idea came from a paper by Deanna Kuhn2 but I'm not sure. The part I really latched onto was that students come into experiments in science class with the idea that something has to covary. We're measuring this and manipulating that, so clearly there's a relationship between the two. Students aren't asked to actually look at the data they created but simply to reason about the science involved. In hindsight, this is certainly true in most of my classes. I run some kind of amazing science lab where the null hypothesis is always rejected. My takeaway is that occasionally students need to be doing experiments where no relationship will actually be found. I've gotten better at this, both in intentionally designing such experiments in and in allowing students to run an experiment that I know will result in no relationship. The first few attempts at this, what did I find? Students can ALWAYS find a relationship. It's a long, slow process of breaking this habit.

The other insight I got from that paper was that students don't really understand measurement uncertainty. I definitely find that to be true. I like Geoff's approach but I don't really do anything about this other than hate how I don't do anything about it.


This is getting long. I'm going to stop here. I've got two more posts on CER in my drafts but history has shown I'm awful at following up on promised posts.




-------

1: The flipside is the argument that more attention to persuasion will lead to better arguments. I'll write more on this next time. (EDIT: This footnote doesn't go anywhere but I'll keep it to remind me.)
2: I hadn't heard of the Education for Thinking Project until I googled her for a link. Looks right up my alley.

Friday, April 6, 2012

Claim Evidence Reasoning

By far, the biggest shift in my teaching from year 1 to year 7 has been how much emphasis I now place on evaluating evidence and making evidence-based claims.

I blame inquiry. Not inquiry in the generalized, overloaded, science teaching approach sense. Just the word. "Inquiry."

Even now, when I hear the word "inquiry" I still think mainly of asking questions and designing experiments. A bad side effect of thinking in this way was that I would spend far too long having students ask questions and design experiments and very little time evaluating evidence and generating claims. On good days, when we miraculously got cleaned up before the bell, I might have spent 10 minutes at the end of class telling students what they were supposed to have figured out and students would answer questions like, "Explain how you know mass is conserved in a chemical reaction." (Answer: Because you just told me by asking that question.)

We were very busy and very engaged and learned very little.

There are a few structures I've been using to help shift the focus of the class to analysis and argument. One of them is the claim-evidence-reasoning framework. I'm not sure who the originator was but most of what I do came from Katherine McNeill, who has published a ton on it. She wrote a book too.1


Claim-Evidence-Reasoning (pdf and pdf) is a framework for writing scientific explanations. Occasionally I use it as a probe in the style of Page Keeley. After most labs, students are asked to write about a paragraph's worth in this format, and a more extended version when we wrap up a unit.

As part of their lab handout they get a prompt that looks like this:


As the year goes on I remove most of the scaffolds until ultimately the students just get a prompt or question.

I've been happy with it. There's a fourth step, Rebuttal, which I've never gotten off the ground. 

I like frameworks a lot. I like having specific language I can refer to over and over. In a typical initiate-respond exchange, I follow up by asking students for their reasoning. When students want to whine or talk back to me, I get to ask them, "What is your evidence? How does that evidence support your claim that I'm the most annoying teacher in the world?" (Evidence #1: The fact that I ask that question.)

The key to implementation is that the structure of the class really has to be designed around C-E-R. Even if I'm just direct teaching something, I need to model how to think about the evidence that led to the claim.


I also give my students a whiteboard format now. It usually looks like this:



I used to structure it in Claim-Evidence-Reasoning order but I realized students would then write their claim before they got their evidence.

McNeill uses C-E-R for essentially everything. Any major idea can be written up in that format. I think the framework works best when students truly have something to argue about. Of course, I'd say that all learning works best when students have something to argue about.

I've had a lot of success with using it when students are constructing the kinetic molecular theory or deciding if mass is conserved in a reaction or not. Why things float or sink was fun. "Is Pluto a planet?" is a good one. A lot of Dr. McNeill's papers are written around a multi-week unit designed around the question, "Are soap and fat different substances?"

I've been happy with the results. We used to spend all of our time just generating data. Now that data is being put to use.




----------------------

For more on writing, I show up in an interview in ASCD Education Update titled Improving Student Writing Through Formative Assessments. It's only for ASCD members but most of what I have to say is in Managing Feedback.


Addendum: Kirk shared a free NSTA article called Engaging Students in the Scientific Practices of Explanation and Argumentation. Berland and Reiser in particular have written a number of good articles on the topic.

1: I can't fully recommend the book. It's pretty basic and you can find most of the information from googling around for her various research papers. If you're not interested in dealing with that you might want to give it a go.

Monday, March 12, 2012

Exit Tickets Without the Exit

A lot of us do exit tickets. For me, I just take them, sort them into "got it" and "don't got it" piles, glean whatever information I need and then throw them into the trash.

The ratio of time spent to information gained is excellent. Up until now I had only used exit tickets when students were...exiting.

Today, I realized I don't need to wait until the end of the period to do this. I had some notes we needed to take care of. Notes are not really my thing. I'm not so great at them. I give the "copy this down from the board" parts typed up and they annotate, draw pictures, do examples, etc. 

Today I was intro'ing the periodic table so I wanted my students to be able to read a chemical formula first. The notes looked like this:


(Note: Box.net is awful. If the embed isn't working, don't worry about it. The file is just a poorly written note page with a dotted line about 2/3 down and a few practice problems underneath.)

I partially cut along the dotted line. I direct taught it (GRR-style) and they took notes and did the example with me on the right hand side. We did the slow way of writing out H2SO4 three times and counting them up and then we did the quicker distributing way.

After a few practice problems the students worked on the bottom portion alone. The top part got pasted into their notebooks and they detached the bottom and turned it in.

Next we were going over to the lab tables (I have lab tables now!) to do Periodic Table card sorting. They had to be able to read a chemical formula in order to do it. Once the students were at their tables and getting started, I started sorting through the papers they turned in. 

I called over the "don't got it" pile, gave them a quick tutorial, and then sent them back to their groups once they had shown me they understood it now. 

It was less than 10 minutes from the time I started sorting to the time the last student went back to their group.


_________

Postscript 1: I see this as slightly different from something like a choice point which I think of as a mid-lesson correction.

Postscript 2: Writing this reminded me of hot reports which I have never blogged about. I can't find the original paper but I got the idea from Avi Hofstein. I will blog about them next but in case you can't wait they're referenced here as well.

(Update: This video on a teacher using Tiered Exit Cards came through my Twitter stream a couple of days ago).



Wednesday, February 8, 2012

Product Placement

This is about honesty.

I went to a conference last week at Stanford. One of the panels was on equity. Hey! I just went to a whole conference on equity. There were five people on the panel. Three came from organizations that deal with policy and research. The two people who were actually educators were from Rocketship Education and a Los Altos school.

That seemed...odd.

Some background for people not living in the Silicon Valley.

Rocketship is well known locally for its blended learning model and for its very high API scores with a high-poverty, high-English-learner population. It is also well known locally for its high dropout rate and for bypassing the local school districts and going directly to the county office for approval to expand to 30 schools. It's also well funded and, if I'm reading this financial report (pdf) correctly, it gets about 30% of its money from "contributions."




Los Altos is one of the wealthiest areas in the nation. Actually, according to this Yahoo Finance article, it is (or at least it was three years ago) the third wealthiest. The school being represented has a 978 API (out of 1000), 4% of its students are on free and reduced lunch (state average is 52%), and 4% of its students are English language learners (24% state average). This year, its educational foundation donated over $2 million to its 9 schools.

Two extraordinarily wealthy schools, one through geography and the other through donations, were picked to talk about equity. Especially when, hypothetically of course, a primary argument that may or may not have been put forth is that money doesn't matter.

So why might these conference organizers choose two educators from schools that might not be the best representatives to talk about equity?

From the conference website:


I surfed over to the NewSchools Venture Fund website and clicked on Our Ventures to see where their money was going.

Oh look......


and sitting on the Rocketship Education Board....


But what about the Los Altos school? She was actually there to talk about Khan Academy. It turns out she's from a pilot school.

Three lines above the Rocketship sponsorship on the NewSchools funding page you get......
That panel makes a lot more sense now.

I have no idea if NewSchools (or SVEF, who also partners with Khan) had an agenda. I doubt it. I think it's likely these were just the first two groups that came to mind because they work so closely with them.

The credibility issue is killer though. I had a hard time listening to anything after this panel.

The sad thing is I would have been happy to have sat in on a presentation about Khan in the classroom and what Rocketship is doing with blended learning and their data monitoring system. I felt like the organizers were hiding something by sticking them into an equity panel.

Does every conference have to be sponsor-free like Creating Balance? Heck no. I enjoyed the ASCD conference immensely and it was like a NASCAR race. I expected Pearson to come out shooting a t-shirt cannon between sessions. I don't even want to know what went on in the Smart Board party bus.

I don't have a problem with that. I know conferences don't pay for themselves. I'm happy to ignore the vendor tables if it is going to bring my registration fees down.

Just be honest. If I'm going to sit in an hour-long advertisement, let me know ahead of time.

(Final note: I've been sitting on this post for a few days. I didn't want it to seem like I was specifically targeting the parties involved. I want it clear that I'm just using them as an example and the point is really about transparency. It could have been anyone.)

Monday, February 6, 2012

Creating Balance: Complex Instruction

Sorry. Another long post. The next one will be shorter.

At the Creating Balance conference I went to the Complex Instruction strand. I went through the strand with Sue VanHattum, who has blogged about it previously. Bree led a separate session on it. If you don't know what CI is at all, start with those because I'm going to skip over the basics. I breezed through the section of Sue's post labeled "Smart in Math" the first time, but the facilitators said that expanding the definition of what it means to be smart in math was the most important part of CI. We observed three high school classes of different levels and then sat in on about three hours of teacher-only sessions.

Things I found interesting:

What I most enjoyed was talking with the teachers at Mission High School. The teachers I spoke most with were named Carlos and Betsy. They were incredibly reflective about it and their journey to CI was pretty inspiring. It started with 3 or 4 teachers going to a workshop a few years back and eventually it spread to all 13 teachers. Hearing about all the work they do together made me have conversations like this with a math teacher friend from NY:

Me:"So....I kind of have a crush on this school."
Her:"Yeah, let's work here next year together. I'm not joking. I will do it."

Sue mentioned creating rich tasks as a major implementation problem for CI. The Mission teachers work together to create them. They have put curriculum binders together that include spaces to write your reflections and how you would change the task for next year. They set up the department so every prep will always have at least one teacher who taught the course the previous year, but the same teacher won't teach the same course more than three years in a row. I asked Betsy about this second part and she said they felt that you start to go on autopilot after a few years and you should always be looking at the curriculum with fresh eyes.

They worked out a schedule to observe other teachers. They felt this wasn't quite enough so they also worked out a schedule to get together and watch 5-minute clips of each other every month.

There was more. As a teacher who has always been too much of a lone wolf (both because of my personality and my situation) I spent half the day in a sort of daze.

As for Complex Instruction, three things stood out for me.

1. The lack of scaffolding.
2. The complete commitment to "learning together."
3. The focus on mathematical conversation

Sue shared this problem and we did it as well:
Image from Math Mama Writes
The problem was, "What would number 100 look like?" and "What would number -1 look like?"

If I had done this problem it would have started with number 5, then number 6, asking for a pattern, etc. They jump straight to 100. For CI, the group is the scaffold.

For number 2 above, a lot of teachers commented on this both positively and negatively. The teachers all were very strict about enforcing that students work together. They had even installed "checkpoints" in their worksheets so that students couldn't move on unless the teacher had come over, asked a random student in the group to explain what they'd done, and checked them off. I don't think I saw a kid who was completely lost, which is rare in any class.

The flip side was seeing the same problems you see in a lot of group work. Certain kids would drag along other kids. A kid would go to the bathroom and the rest of the group would grind to a halt.

You can usually solve the "dragging a kid along" problem with good grouping and the Mission teachers were very attentive to that. I'm not sure what can be done about the bathroom problem.

Maybe it's because I'm used to middle school kids, but I was impressed with the amount of mathematical talk that came out of the students' mouths. The teachers all did occasional participation quizzes and focused far more on how students were working together than on the correctness of the answer. In the pile pattern problem above, Carlos said that the -1 question is really what he cared about because, "That's where the really good conversation happens."

Things I haven't resolved in my head yet:

I had written more but before I finished writing this post SciAm posted an article titled, "The Power of Introverts." We spend a lot of time in groups in my own class but if I were to give the students something like the pile problem I'd always ask them to try to think about it on their own first. I know I need quiet time to think first.

(Update: I feel the need to point out that whenever I see something that says classrooms have too much groupwork, I.....don't see it. I mean, I know people talk about groupwork, but I've been in a lot of classrooms and most of them are still teacher in front and students in rows the whole period. I think there's still too much of that, though I'd feel just as negatively about all groupwork all the time. "Creating balance" was the name of the conference.)

Betsy arranged her class so that Mondays looked like traditional school. She'd mini-lecture and kids would read the textbook or do worksheets. She said some kids just needed that individual time. (Like I said, reflective group at Mission).

The other thing I struggle with is anytime I need to "teach behavior" or norms. I just never know when I'm teaching something universally valuable and when I'm imposing some sort of "cultural other" upon them. Am I teaching kids how to work together or am I teaching kids to copy an image of what I think kids working together should look like? What am I asking students to give up in order to be academically or socially successful in my class?

I read too much into this. Perhaps this is why I'm an awful classroom manager.

Saturday, February 4, 2012

Link Dump

I had to turn in my laptop so I've been largely stuck on an iPad at home. The result has been my consumption has gone way up and my production has gone way down. Something to think about if you're considering tablets for your classroom. Right now I've just got some links to share. Like always, this will be twice as long as it needs to be.

THE GOOD

My blog post, Test Deconstruction, was cleaned up and published in ASCD Express. Thanks to Laura Varlas for that.

Stephen Lazar wrote a fantastic piece in the NY Times Schoolbook on school reform. I love everything about it. I'm pulling out two quotes:
I used to think we needed to create model schools that could then be replicated. I now think that it is so hard to sustain a model that each school needs to be invested in its own unique vision.
and
I used to think our goal should be to create systems of great schools. I now think great schools are so hard to create and maintain that our goal should be to create good and sustainable ones.
I went to a conference this week and tweeted this:








I dislike the concept of "incubator schools" so much. First, what are the rest of the schools doing? Just sitting around waiting to be told what "works"? Second, like Stephen says, the whole idea of "scaling" has been pretty well debunked.


Last good one is this set of questions from Federal Way schools called, "How to talk about Standards Based Grading." I like how they included questions for parents to ask teachers. I would have liked it even more if they included something to help students talk to teachers.


THE BAD


Joanne Jacobs, who somehow lives a few miles from me but occupies an entirely different planet, posted a story about a bunch of kids who quit their varsity basketball team. The coach felt they were disrespectful. The kids felt the coach wasn't respecting them in the first place. The news article doesn't offer too much more information than that. She closes with:
Lesson not learned, apparently.  Good luck in your first job, Eddie. And your second job. And, if you continue to be a slow learner, your third job.
The commenters pile on. Two things here:

1. Joanne apparently wants kids to learn the lesson that, "You take whatever your boss says and does no matter what."

2. What I CANNOT STAND is how we as adults automatically take the side of the other adult. I grew up playing sports and I've had plenty of jerk coaches. I've also been disrespectful as a player. As far as I'm concerned, the kids had a right to quit and the coach had a right to cut them. I don't have any more information to decide anything else. There's no reason to put all the blame on the kids and yes, I'm really talking about how we automatically take the sides of adults in our schools without even listening to the kids.


(Update: Local writer defends the students here and has a copy of a letter sent to the principal from the team.)


The other link I have to share I'm not actually going to share. I think he's a racist and I have no interest in even accidentally sending one of you over there. On the other hand, his traffic is at least a couple orders of magnitude higher than mine, so you might know who I'm talking about anyway. He shared the ETS report that includes this graph on page 22:

and then this quote:
My guess is that smarter teachers would probably be a good thing, so we ought to be thinking about ways to make the job of teaching more attractive to smart people. In general, smart people don't like dealing with knuckleheads, so forcing teachers to carry most of the burden of discipline, a growing trend in recent decades, is a good way to keep smart people out of the business. You can instead use some of those gym teachers to run after school detentions instead of delegating most of the disciplining down to the teachers as happens in so many public schools desperate to avoid disparate impact lawsuits by not generating a paper trail of discipline actions carried out by the administration.
....
deep breath
....
give me another second
...
I'm going to ignore the gym teacher comment because if you're at my blog you already know how ridiculous that is. I'm also going to ignore the logical conclusion that people who aren't smart enjoy "dealing with knuckleheads." And I'm going to ignore that this report comes from a company that has a vested interest in making us think that educational testing matters.

What I will mention is this idea that smarter teachers equals better teachers.

Disclaimer: I'm not Grace, but I did score high enough on my SATs to be off of both of those scales so I don't think this is sour grapes when I say that it's quite a reach to first link SAT score to IQ and then IQ to teaching ability. This is a classic example of someone with an agenda taking some data and making it tell the story he wants it to tell. If you start off with, "My guess is....." you're not allowed to follow it up with such certainty. I'm going to counter with more graphs:




Clearly the way to be a better blogger is to be taller. Wait. I can do better than the unnamed blogger. CORRELATION SUCKAS!



My guess is....tall bloggers would be a good thing, so we ought to be thinking about ways to make blogging more attractive to tall people.

Saturday, January 14, 2012

Status Change

I'm at the Creating Balance conference and have been thinking about status. I'll write more about the conference later.

I've got two things to share, one classroom and one schoolwide (neither are original to me) that I think help address issues of status.

Classroom:
Robyn Jackson had an article on ASCD where she describes a red flag system she used to immediately catch kids as soon as their grades fell to certain levels. I would like to think one day I'd be organized enough to pull something like this off, but today is not that day. One thing I liked immediately was that she would preview the lesson for some kids. I know I fall into the trap of only ever catching kids up.

The hard part for science was that so many times I couldn't really preview the content well. We'd be developing a concept in class and it was hard for me to figure out a regular schedule where I'd be able to preview content ahead of time without giving away what students were supposed to be figuring out.

Where I modified this to fit was with classroom behavior. If you're a science teacher you know that when you have a lab, there are some kids who you have to just sit on. As soon as you get a lab going they're mixing random chemicals together or wandering around to talk to friends or whatever.

For about 10 kids I started previewing our labs. The day before a lab I'd ask some kids to stay after school and then spend about 10 to 15 minutes showing them the equipment they'd be using and making sure they knew how to handle and use it. I'd show them what their eventual setup would probably look like and some common pitfalls. I'd let them know what they were trying to figure out and, depending on how much I could give away, preview/review some content.

Status change, right? These 10 kids didn't start off lost and could immediately contribute something valuable. For half the kids it was magical. They were like new students. They were in there leading the way. I took secret pleasure in watching a low-status kid instruct the future valedictorians on how to use the overflow canisters or admonish someone for grabbing the beam of a triple beam balance. There were another three who would start off well (the parts they had previewed with me) and then start to lose it when they got into new territory. For two kids, it didn't matter.

Overall, really good bang for the buck and, just as a teacher, I've had a nice mental shift from always looking for how to review to also looking for ways to preview.


________________________



School:
A couple of years ago I went to visit a nearby school. They dedicated themselves first and foremost to principles of community and it showed. They do this sixth grade orientation that I shared with a teacher at my (former) school and she started it up. She, by the way, is a far better teacher than I am. I take zero credit for this other than sharing the idea.

The middle school first works to identify fifth grade students who are at risk of dropping out, mentally or physically. They invite those students, perhaps a dozen, to a three day orientation a week before any other 6th grader starts school. Those students are all assigned an adult mentor and they're shown around school and introduced to the quirks of middle school life. There's a bbq on the last day and the parents are invited.

Here's the part I love. On the first few days of school those students get something special (a colored shirt? a giant button? I can't remember) that identifies them as someone other students should ask for help. For example, a huge thing for sixth graders at that school is how to open their lockers. These students know because they've been practicing. Again, status change. At the old school, I was the dummy. Here, I'm someone other kids ask for help. The adult mentors check in from time to time but the bulk of the work is accomplished in those first few days.

At my school, the teacher in charge also teaches the Leadership elective (they do rallies, dances, fundraisers, etc) and she monitors them as sixth graders and tracks them into the leadership elective as 7th graders.

When I first visited the school, the principal reported that since starting the orientation (perhaps 6 or 7 years at that point) every single kid involved had been able to participate in graduation ceremonies after their 8th grade year.

I can't tell you how much I love this.

Our default action for an incoming at-risk fifth grader would be to schedule an extra "intervention" class. In other words, a student comes to a new school and the first thing we do is confirm that his/her status has managed to follow along. And yes, I'm looking directly at you Ms. High School Counselor who gives our outgoing 8th graders math support, reading support and no electives as freshmen.


Addendum: Bree just wrote about a presentation she did at the same conference addressing issues of status.

Sunday, January 8, 2012

Managing Feedback

(I'm going through the queue! Just imagine this all re-written in the past tense. Or the future tense. Either way.)


Good written feedback is hard. It takes time. A lot of time. Here are three things I've been trying to help make it manageable. Only one of them is slightly original.

I've already written about how I help students give each other feedback. That will go a long way but in the end it always comes down to us. As flat as some of us want to make the classroom, we're still the experts.

1. For the nuts and bolts I use Excel. I've got all my students exported into a spreadsheet. I type it out, print, cut strips and staple. Actually my student aide does the cutting and stapling. This is another place where tip #5 comes through. I can go through a pile and head straight down the sheet. When it comes time to staple, the excel sheets and the pile of student papers are in the same order so my aide doesn't have to do any paper shuffling.

I write messy so typing is always good for me. I like having a record. But the biggie? Copy and paste. Students make the same mistakes and need the same help. When it's a long assignment I number sections of the paper or problems and the students match those numbers to the comments.
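For what it's worth, the strip-printing part of this spreadsheet workflow (one row per student, kept in the same order as the paper pile) could just as easily be scripted. Here's a minimal sketch in Python; the CSV layout, the student names and comments, and the `make_strips` helper are all my own invention for illustration, not anything from the actual spreadsheet:

```python
# Hypothetical sketch: turn a roster-ordered spreadsheet export into
# printable feedback strips, preserving row order so the strips come
# out matching the order of the paper pile.
import csv
import io

# Stand-in for an exported spreadsheet: one row per student, pile order.
ROSTER_CSV = """name,comment
Alice,Link your evidence back to your claim about mass.
Bob,"Good reasoning, but add units to your measurements."
"""

def make_strips(csv_text):
    """Return one printable strip per student, in spreadsheet order."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [f"{row['name']}: {row['comment']}" for row in reader]

for strip in make_strips(ROSTER_CSV):
    print(strip)
```

The only design point that matters here is the one from the post: never re-sort, so the printed strips and the student papers stay in the same order and nobody has to shuffle paper while stapling.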

2. Focus on one or two areas at a time and think long term. I'm going to use a lab writeup as an example here but it could be anything. I used to get these back and write all over them. There'd be so many marks students couldn't even see their original work. They were overwhelmed. Now we focus on improving just one area at a time. Perhaps we all focus on improving the description of the experiment: I want a few more details and a clearer link between the experiment and the question it's meant to address. They work on improving that one aspect and then we move on to something else later. By the end of the year, we've hit everything. What I didn't get was that a good lab writeup was my end-of-the-year goal. I didn't need a perfect one in October. What I needed to do was improve a little at a time so that it was great at the end of the year. I wrote more targeted feedback, and students weren't overwhelmed and knew what steps to take next. Win all around.

3. Delay feedback until the students are going to use it. This is the only piece of advice that goes against the grain. I know we're supposed to kill ourselves with 24-hour turnaround or get instant feedback or their work should be self-correcting. Yeah. That's fine sometimes. But I am very guilty of writing up feedback, giving it to students the next day and then....we just move on to something new.1 It is far better to wait until that feedback is going to be used. The lab writeups from above are good examples. If you're not having students revise what they wrote then why not wait until just before they do their next writeup?

Another common situation when delayed feedback is useful is when we work to continually revise a concept. Right now we're trying to figure out why things float or sink. I'll ask a student to predict if a certain object will float or sink and justify the answer. We'll work on figuring it out.2 Later we get the same prompt, because really that's the whole point of the unit. Before answering the student gets his or her feedback returned and can read it before answering the question. It's fresh and they can actually act on it. As an added bonus, students read what they used to think and see how much their thoughts have changed.





Oh and here's my obligatory BlueHarvest shoutout.



I'm definitely interested to hear if anyone has any good tips for keeping written feedback manageable.

Addendum: I forgot I totally stole Justin's idea to ask students what kind of feedback they want. At the end of a test or paper or whatever, I put a little box asking some variation of "What kind of feedback do you want?" or "What would you like me to comment on?" It turns out students respond well to feedback they want. It also turns out that I was often leaving feedback they just didn't care about. Who knew? You'll want to be specific in your questions at first; I found it helped to ask them for a specific area or question number. What students consider feedback will also depress you: "I would like a gold star at the top of the page and an A."

____________

1: The corollary to this is giving student work back and then getting all hot and bothered when it ends up crumpled in the bottom of a backpack. If you give a test back and don't want it to end up on the ground, do something with it right away.


2: I usually go from weight to density to forces. Density we get from playing around with film canisters and then later by using different liquids. Forces we get from linking back to atoms. I direct teach Archimedes' principle because I think it's overrated and don't want to spend more than a day on it, but feel free to disagree. If you've got something awesome for floating and sinking, let me know so I can steal it.

Monday, January 2, 2012

Survey Results

Sorry I've been incommunicado lately. I've been working hard establishing that a single person can in fact get the flu three times in a single month. Now that I've figured that out, you don't have to! I'm a full service blogger. Also, my seven-day beard doesn't give me a "rugged outdoorsy" look. More like a "slice of wheat bread left in the refrigerator for two months" look. So it was a learningful December.

An artist's rendition of my chin.
http://upload.wikimedia.org/wikipedia/commons/e/eb/Mouldy_bread_alt.jpg


Here are the results of the survey I posted umm...like two months ago at this point. I assumed 90+ = A, 80-89 = B, 70-79 = C, 60-69 = D, and 59 and below = F. (I should have clarified that ahead of time.) If you entered a letter grade, I converted it using that scale, and if you gave a range I used the mean.
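If you're curious, the coding I did amounts to something like the sketch below. The function names and the representative numbers I picked for letter grades are my own choices for this writeup, not anything official:

```python
def letter(score):
    """Bucket a numeric score using the 90/80/70/60 cutoffs from the post."""
    if score >= 90: return "A"
    if score >= 80: return "B"
    if score >= 70: return "C"
    if score >= 60: return "D"
    return "F"

def code_response(raw):
    """Normalize one survey response to a number, or None if uncodable."""
    raw = raw.strip()
    # Representative values for bare letter grades (my assumption).
    letters = {"A": 95, "B": 85, "C": 75, "D": 65, "F": 50}
    if raw.upper() in letters:
        return letters[raw.upper()]
    if "-" in raw:                       # a range like "70-80" becomes its mean
        try:
            lo, hi = (float(x) for x in raw.split("-"))
            return (lo + hi) / 2
        except ValueError:
            return None
    try:
        return float(raw)                # a plain number keeps its value
    except ValueError:
        return None                      # e.g. "butt"
```

So "70-80" codes as 75 and lands in the C bucket, and "butt" stays uncoded.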


I don't know why I put F first on the graph. It's bothering me now to look at it. Plus I had to keep the next graph consistent. It's depressing me. Let's just proceed. The next graph is broken down a little more.



Scores ranged from 0 to 90. Both As were 90. There was a score of "butt" which I wasn't sure how to code and a certain blogger in the Rocky Mountain State left an answer down to the third decimal place.

What am I supposed to take from it? Well, I'm interested in what you think. Comments are definitely wanted.

Take a second to think it over before I tell you a couple of my thoughts.
.
.
.
.
.
.
.
.

I've seen this done a few times and what I think it shows and what the presenters thought it showed were different.

You can't take this as an example of how a (10, 5, 4) point scale or Advanced/Proficient/etc. is superior to a 100 point scale. That was the purpose the first time I saw this. Obviously we're going to agree on the results more if we're only given four choices. Hey, you've got one choice now. We all agree! Moving on.

I also don't think this shows the "arbitrary nature of grading" quite in the way that this has been presented to me either. I'm supposed to look at these results and say, "Same test! Same knowledge shown! Different grades! If only we had a (rubric/checklist/Pearson sales representative)."  The whole point of the exercise was for you to make up your own scoring system. If you give us one, we'll agree more. If I had an actual test with actual student answers, then we could start that convo.


This, by the way, is something that's definitely worth doing with your department. Copy an actual test a kid has turned in. Don't mark it first, because it's also important to see what different people actually count as correct. Then talk about it.




What do I like about this? It's not so much the arbitrary nature of grades I think this gets at, but the very personal nature. I like that it helps you confront your own values. The very first time I did something similar (maybe 4 years ago, so I'm going to fudge the numbers, but the spirit is the same), my thought process went like this:
Well it looks like this student mostly got it. I mean he got all the easy stuff. The super hard stuff he missed everything but I don't really expect him to always get stuff that I didn't directly teach. He maybe deserves about a B. So...hmm..I'll assign 10 points to the easy MCs, 5 points each to the short answers and like 5 points each to the hardest questions. That way they can get everything right except the hardest and still finish with a B. Plus I don't want the hard answers worth too much because those are kind of double jeopardy points. You miss a little of the easy and you're probably going to miss that part again on the hard part. So add it up and he gets an 85. Yeah that seems about right. I'll keep that.
I turned to the teacher at my table and shared my unassailable logic with him. His response:
I don't give a rat's ass if a kid can bubble in some memorized answers. If he can't think, he fails. 30 points each for the two hard answers. 1 point for the MCs and 5 points for the short answers. He got 35.
And you know what? We were both right. I was amazed at 1) how different our reasoning was and 2) how little thought I had previously put into what a grade actually means to me and how it communicates my values.
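A quick hypothetical sketch of the gap between us. I'm inventing the question counts and the exact weights here (I don't remember the actual test), just picking numbers that land near the two totals we each assigned:

```python
# What the student got right: all the easy stuff, none of the hard stuff.
# The counts (10 multiple choice, 5 short answer, 2 hard) are invented.
correct = {"mc": 10, "short": 5, "hard": 0}

def total(weights):
    """Sum the points earned under a per-question weighting scheme."""
    return sum(correct[kind] * pts for kind, pts in weights.items())

# My scheme: the easy questions dominate, the hard ones are cheap.
mine = {"mc": 6, "short": 5, "hard": 7.5}        # max = 60 + 25 + 15 = 100
# My colleague's scheme: the hard questions are almost everything.
colleague = {"mc": 1, "short": 5, "hard": 30}    # max = 10 + 25 + 60 = 95
```

Same answer sheet: my weights give the student 85, my colleague's give him 35. The weights are doing all the work, and the weights are just our values written down as numbers.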

Did you decide on a grade and go back and tweak points? Did you ignore the point totals entirely and just give a score? Did you assign totals, add them up, and just go with it? Did you add them up, decide you didn't like the final score, and go back and change things around until you did like it?

I am 100% guilty of printing out a chapter test from whatever CD my textbook came with, giving it to my kids, starting to score them, and thinking, "Huh. I don't think that paper really deserved that score." ......and then doing absolutely nothing about it. I entered the grade and moved on.

Later, I'd think I was being "better" by reading into the answers and throwing them a few extra points when they needed it.

I think I know what she meant. I mean, she contradicted herself three times but I guess I know where she was going. I can see how she'd think that. Plus, I know she probably could have got it, she just maybe ran out of time or was distracted by Jacob sitting next to her tapping his pencil the whole time. Jacob was driving me crazy with that. Plus she always works hard so I know she gets it. Maybe she just had an off day. hmm......  B+
Both of these are wrong. My problem with the first is that how I assessed and what I wanted grades to mean didn't align. The second was brutal to face up to. I'd let her not learn something and told her she'd learned it. Why was it so unthinkable for me to just ask for a clarification? Or to tell her she didn't get it and have her try again? Argh. This is giving me an eye twitch just thinking about it.

There's no right answer to the survey. The important thing is that there's a right answer for you. When you know why it's the right answer, you can go back into your classroom and make sure that everything aligns with that answer.