Wednesday, March 30, 2011

Session 6: Transforming the Early Childhood Classroom

I couldn't find the presenter's name in my notes. (Edit: Meghan Callahan. Thanks Jenny) I'll update the post when I find it.

I went because I wanted to see how the presenter integrated Patterns of Thinking in her class.

Main idea: Guided play is part of a rich learning experience.

Things I found interesting, though not necessarily things I agree with. My quick thoughts are in italics:
  The presenter is a Head Start teacher. Her presentation was on a cycle of play, so the notes will focus on what that cycle looked like.
  1. Assess literacy skills and background knowledge. Provide relevant experiences and make connections to current topic.
  2. In a guided play session, give the play structure and occasionally step in to elaborate and move it forward.
  3. Connect the play experiences with topic discussion, focusing on roles, actions, and vocabulary.
The thinking skill she was working on was part-whole relationships. While I'm describing what happened, imagine the teacher constantly emphasizing, "This is a part of.... This is a whole made of ...."

For step one, she asked the students to make predictions about a firefighter visit. She prompted them to look and see if their predictions were correct and to notice what else the firefighters bring that we didn't predict.

When the firefighters came she had them engage in guided discovery. In addition to the previous prompt, she asked them to walk around the fire engine and, "If you see something you're curious about, ask about it." The firefighters later commented that her students asked far richer questions than they were used to hearing.

I always appreciate these kinds of loose structures. I know some people argue for open choice, but it's paralyzing to have too many options. There's an example I've seen a few times where people are asked to "Name things that are white" vs. "Name things in your refrigerator that are white." The second prompt generates more examples. This also relates to a point that Fisher brought up about cueing. How we perceive things is directly related to our level of expertise. I look at most art and can't tell the difference. I look at it but I don't really see it. If you said, "Pay attention to how the artist uses color to convey emotion," I would have a far richer experience.

The next section was probably a more typical school experience. Students discussed what they saw. They created labels, drawings, and a concept web. They read related books. One point the teacher brought up is that students would often describe things in terms of function, so she needed to stay persistent with using the vocabulary.

That's true for my 8th graders as well. I don't know how many times I've heard a triple beam balance described as "the mass thingy."

They then planned and built a fire truck. There was a lot of nice stuff here. They were practicing sorting and comparing sizes. Best of all they kept referring back to their plan.

At some point you probably asked, "Why is Jason going to watch an ECE presentation?" Well, here's another reason. My kids will create elaborate build or lab plans and then completely abandon them when the tools hit the table. Obviously it's something I need to reinforce. 

Next came the guided play section. The main things that separated this from standard free play were that the teacher set the scenarios (pretend you're going to the fire, then at the fire, then coming back) and that she was there to help extend them.

When my oldest daughter, who is the same age as the students in the videos, plays firefighter it looks like this: She's sleeping. The alarm rings and she gets up and runs to wherever this fire is. She sprays it with an imaginary hose for 3 seconds and then returns to the fire station and goes back to sleep. Repeat ad infinitum. If I were helping extend her play I might ask her, "What else would a firefighter do at a fire?" And yes, this is completely relevant to when students are conducting their own investigations.

Finally, they wrapped up the cycle with discussion and various literacy experiences.  Again, focusing on part-whole relationships. The whole process took about 2 weeks.

Takeaway: The focus of the presentation was on guided play and integrating part-whole relationships into the standard curriculum. That was good. But I was struck by the false dichotomy that science teachers tend to put forth about process and content. This was a rich learning activity that focused on both. It's not either/or. This teacher did both. The students learned about relationships, sorting, planning, and questioning. But they also practiced literacy and counting and learned the "facts." Content and process are not in competition. If you think you're going to teach one and not the other, you're not actually teaching either.

I admit I may have completely missed the mark on this one. Novices experience things differently, remember? I'm going to defer to Jenny here to catch me in my errors.

Tuesday, March 29, 2011

Session 5: Gap Closing Strategies in Mathematics

I'll admit I was skeptical going into this. However, there's a lot to like about this program and there are definitely a few good lessons here.

Main idea: The Ontario Ministry of Education created an intervention program for 6th grade students. It got great results for those students and also led to sustainable change in teaching.

Things I found interesting, though not necessarily things I agree with. My quick thoughts are in italics:

They shared everything (way to go Canada!) and so I won't waste your time describing the specifics of the program.  As an overview, teachers identified kids who were struggling in math. The Ministry sent over binders for each student. It was up to the teacher whether to use this as pull-out, after school, or in class.

There's a lot I like about the program. They had a diagnostic table (if a student missed number 1, then they should...). It focused on open-ended questions, metacognition, and generating visual representations. Students didn't have to do the modules they already understood. Download a couple of them. I'm betting you could steal a few of those ideas even if you don't teach math.

Using a pre/post test model the kids in the program completely eliminated the gap between them and the kids not in the program. The gender gap disappeared as well.

In interviews with students, they requested(!!!) more practice. The Ministry obliged and created e-modules.

Now here's the part I really like:

After talking to the teachers, the Ministry found that the teachers were changing their own practices after observing both the students' enthusiasm and seeing the results.

I heart this so much. I didn't ask, but I feel like this was intentional. At least, I feel like the ministry had hopes that this would happen. I fully support this kind of subversive change.

They also found this weird trend. It turned out that teachers who were just told to do the program by their principals, as opposed to opting in, were more supportive of the program. They felt they had more support.

Takeaway: I like how Ontario approached this. First, they didn't try to attack some broad nebulous goal. Something, oh I don't know, I'll just pick something random like, "All 8th graders in California must take Algebra 1." Each module is very specific and targeted. They deliberately set out to model good teaching practices. They also deliberately did something different. I don't know about your programs, but whenever we get some sort of targeted intervention curriculum at my school it looks exactly the same, but more. Oh, you didn't get how to divide fractions after 30 problems? Here's 50 more! I'm dangerously close to turning this into a 1500 word rant so I'm going to end here.

Monday, March 28, 2011

Session 4: Responding When Students Don't Get It

Eventually, the session will be archived here. I recommend it. It peters out a bit at the end with the teacher vids but the first part is definitely informative. I'm guessing this is information from the book Checking for Understanding but I didn't confirm that.

Like Guskey, Fisher is worth seeing. He's an engaging speaker and he's able to add a lot more subtlety than what's in his books. I wasn't a huge fan of his work before but now I'm definitely going to take a second look.

Also, I don't know the details here, but at least at one point he was actually in K-12 schools and even teaching classes. A higher ed guy who's actually working in schools? Insane concept. Frey is less interesting as a speaker but I think they realize that. She has smaller speaking parts but they make a good team.

Main idea: When a student doesn't get something, we do the work and jump in. Don't.

Things I found interesting, though not necessarily things I agree with. My quick thoughts are in italics:

Fisher focused specifically on what we do when a student doesn't "get it." In this case, he's talking about either doing a skill or answering a question, not in a global failing-the-class kind of way.

There was a flow chart in his preso, which I think you can download, about what to do at each step if a student was stuck. The flow chart went like this:
  1. Start with a robust question.
  2. Prompt
  3. Cue
  4. Go to direct explanation and modeling.
Robust questions are designed to figure out what a student is thinking. They should uncover the errors and misconceptions so we can respond. Fisher identified six types of questions.
  1. Elicitation
  2. Elaboration
  3. Clarifying
  4. Inventive
  5. Divergent
  6. Heuristic 

Our problems come after the question. We don't have a strategy planned for when a student gets something wrong, and our instinct is to jump straight to step 4 and explain. If we immediately move to explaining, the student becomes dependent on the teacher.

I like things like this. I don't think of myself as an instinctive teacher. However, I'm very good at identifying certain weaknesses, researching how to improve, and putting that into action.

When we identify an error, we need to prompt, not take over. A prompt is about getting something going in a kid's brain. The most common prompts are background knowledge or process prompts, for example, prompting the broken rule (PEMDAS) or recalling certain knowledge. We also use reflective prompts, like, "Does that make sense?" He also talked about heuristic prompts. These were prompts for strategies, like "Why don't you make a graph and check?" Fisher stressed the importance of students developing their own strategies that work for them.

The next step was to cue. Cues say, "Pay attention here." Cues let novices see things through the eyes of the expert. Fisher gave a good example: when you watch Olympic diving, it all looks the same to most of us. An expert can slow it down and point out certain things. When you're an expert, you can pay attention to more things simultaneously.

This is a good point about cues. I know I can fall into that huge trap of expecting students to experience something the same way I do. 

Types of cues:
  1. Visual 
  2. Physical
  3. Gestural
  4. Positional
  5. Verbal
  6. Environmental
Ultimately the idea is to get kids to pay attention to the relevant details. Fisher brought up a good point that we're really good at using cues in our initial teaching, but when a student gets stuck, we usually just tell them additional information.

When all else fails, go for direct explanation. Even then, Fisher had some good advice. First identify the error and explain. Think aloud while you're explaining the error. Finally go back and monitor. Re-assess somehow to make sure they actually get it.

"Telling and leaving" could describe a large portion of times when I say I'm helping a student. Modeling by going through the think aloud process and then going back to monitor are crucial, but often forgotten steps.

He finished with some extended videos. They weren't great but I liked that he acknowledged they weren't perfect. He also acknowledged that this whole process is much easier in small groups. You'll lose the whole class if you go through all this with one student. You have to have them engaged in something else.

He closed by talking about what a huge challenge it is to undo a student's expectation that they'll just be told the right answer.

Takeaway: I don't know if there are better models out there. I wouldn't argue it's perfect. But I do know that it's better than our standard method: ask a question, and if the student gets it wrong, answer it ourselves. It's not just a poor learning strategy. Any high school teacher with a student who is used to just being told the answer can tell you what kind of damage we do to a student's feelings of self-efficacy with our standard methods.

When the archive goes up, I recommend you watch at least the first half.

Addendum: The Science Goddess blogged this session as well. She's got her notes and a link to the handouts. 

Session 3: We All Make Mistakes

I'm skipping over a few sessions. I didn't take much out.

Main idea: Great teachers create a culture of redemption in their own classroom.

Things I found interesting, though not necessarily things I agree with. My quick thoughts are in italics:

Homewood City Schools examined the teachers in their own district. After identifying the teachers who stood out, they created a list of five things that separated those teachers.
  1. Ability to question effectively with probing rather than evaluative questions.
  2. Planning, but not in a "weekly lesson plan" kind of way. The presenter phrased it as, "The daily grind of who has it, who doesn't, what do I have to do tomorrow to make it work."
  3. Clear learning goals that were shared by both the teacher and student.
  4. Relationships. Not caring friends, but focused on learning. 
  5. Culture of redemption. How they treat failure. 
The focus of the presentation was on number 5.

Again, like the McREL session, I don't think there was anything surprising, but it's nice to see these same things come up again and again. Bryan Goodwin from the McREL session had a nice quote: "People ask what innovation is most needed now? Applying what we know."

This session ended up being on low-stakes, ongoing formative assessment. That's not new for any regular reader of this blog, so I'm going to skip the presentation and just focus on what interested me.

Far more interesting to me was the attention to teacher quality in this tiny district. They had created a data warehouse even before NCLB went into effect. I don't have the full list of what went in there, but it included at least a dozen things. I caught SAT 9, DIBELS, teacher qualifications (degrees, SAT scores, experience, etc.), and attendance.

The example she used was based on SAT 9 results. They took the student data from spring to spring and looked at growth. Then they found the teachers who were three standard deviations above the average. From that group, they then took only those teachers who managed to do it for three consecutive years.
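That selection rule is concrete enough to sketch in code. Here's a minimal Python illustration with invented growth numbers (the teacher names, scores, and group sizes are all made up, not the district's data): flag teachers who land more than three standard deviations above the yearly average, and keep only those who do it every year for three consecutive years.

```python
from statistics import mean, stdev

# Invented spring-to-spring growth scores: 21 typical teachers plus one
# consistent standout ("Teacher X"), over three consecutive years.
years = 3
teachers = {f"Teacher {i}": [[9, 10, 11][i % 3]] * years for i in range(21)}
teachers["Teacher X"] = [30] * years

def standouts(scores_by_teacher, n_years):
    """Teachers more than 3 SD above the yearly average in every year."""
    flagged = set(scores_by_teacher)
    for y in range(n_years):
        year_scores = [s[y] for s in scores_by_teacher.values()]
        cutoff = mean(year_scores) + 3 * stdev(year_scores)
        # Intersect: a teacher must clear the cutoff in this year AND all prior years.
        flagged &= {t for t, s in scores_by_teacher.items() if s[y] > cutoff}
    return sorted(flagged)

print(standouts(teachers, years))  # only the consistent outlier survives
```

The intersection across years is the important part: one lucky year isn't enough, which is why the group they ended up observing was so small and worth studying.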

Then they went into the classrooms and did observations, interviewed the teachers and students and found their five things.  I don't know how deeply this went, but I know to some degree they focused their professional development on these things.

I have to say I love this. I'm not saying you should base everything off SAT 9 results, although I'd argue that those same teachers would have kept popping up on whatever meaningful metric they used.

But I'm impressed that this district was focused on teacher quality and set about to figure it out. We get into these big arguments about how to measure teacher quality and we don't end up actually doing anything. They just went ahead and did it. It was the same lesson as the Apgar test. The measure might not be perfect, but it's better than the big nothing we had before. It wasn't punitive and they didn't tell teachers, "Anyone scoring below xyz is going to get fired." They also dug deeper with the interviews and put the data into context.

Takeaway: I don't think it's possible for me to stress how important it is for a district to do these sorts of studies on their own teachers. I read a ton of research. I'm a fan. But one thing you always need to keep in mind is that context is everything. I can read about grit, or high expectations, or warm demanders, or whatever comes up in the research, but how this looks for your group of students is what matters.

Saturday, March 26, 2011

Session 2: Grading Exceptional Learners

I'll want to blog more in depth on this later. This is going to be a ton of notes that I'm dumping on you.

First, if you ever get the chance, go see Thomas Guskey in person. He's passionate about his topic and wasn't afraid to draw a line in the sand. I've found his books to be dry so he really surprised me. His co-presenter was Lee Ann Jung.

Main idea: Grades should be both fair and meaningful.

Things I found interesting, though not necessarily things I agree with. My quick thoughts are in italics:

A high-quality grading model would include the 3 Ps: Process, Product, and Progress. They don't all have to be number/letter grades, but they should all be kept separate. He doesn't advocate for any one being more important than the others. That can be a site decision.

Teachers argue that grading the 3 Ps takes more work. Teachers who actually do this in other countries argue that it takes much less work. They're gathering the same information as we are; they just don't bother to use complicated formulas to combine it into one score.

This is my argument too for the "it takes more work" complaint.

The problem with our grading systems is that we don't agree on the purpose of grades. So we come up with systems that try to support all purposes and end up not serving any of them. Most schools fail at report card reform because they disagree on the purpose. Staff consensus on the purpose of the report card is the first step of report card reform.

Grading programs only make this worse. They're based on "antiquated" notions of grades. The best schools develop their own systems.

Shout out to ActiveGrade.

No grading style is more prevalent, or does more damage, than the percentage system. Nobody can distinguish between 101 levels of quality. He also brought up the zero issue.

Not only do we disagree on the purpose of grades, we disagree about what counts. For kids it ends up as a big guessing game and grades become a mystery.

Grading and reporting are not essential to the instructional process, but checking is. Grading is evaluative; checking is diagnostic. The teacher is asked to be both advocate (checking) and judge (grading). We recognize the tension when principals are asked to be both advocates and judges of teachers, but we don't acknowledge that teachers are in the same position.

Grading and reporting should always be done with reference to learning criteria, never on a curve. A hidden example of grading on the curve is selecting a class valedictorian. Guskey pointed out that the word "valedictorian" actually comes from "to say farewell." They're the person that gives the speech. There's nothing that says that they have to be GPA #1. Even colleges don't do this. They give criterion-referenced awards (e.g., cum laude).

Guskey gave some interesting statistics. For the entering class of 2008, Duke rejected 58% of valedictorians, University of Pennsylvania rejected 62%, and Harvard rejected 9%. Highly selective schools are more concerned with the rigor of your coursework than your class rank. He gave the impression that this had to do both with the preparation needed to succeed in college and with the general meaningless nature of comparing grades from school to school.

I'd add that comparing grades from teacher to teacher at the same school, even within the same course, is also meaningless. 

I'm going to summarize about 30 minutes here: We're screwing over our kids by modifying grades for them. The kid who "tries really hard" so you change her grade to a B. The mainstreamed kid who you don't want to fail, but don't think he deserves an A or B, so he gets a C or D in every single class he takes.

Accommodations level the playing field but don't change the standard. What you get in an IEP is probably mostly accommodations, like a student getting extra time. These do not need to be reported on a report card.

Modifications do change the standard. These must be communicated. Modifications, at most 5 or 6 per student, need to be specific, measurable, and attainable this year. Once a modification is created, apply standard grading practices to it. It should be written out and reported on the report card and the transcript.

The gen ed teacher is valuable in this case to define grade-level criteria.

It is illegal to report a student's exceptionality; however, it is legal to report the level of skill. Thus you can report skill levels on the report card.

We should collect data. Most commonly in these cases we have narrative reports. Some argue these are richer, but nobody ever goes back and summarizes them.

Guskey and Jung then spent the next twenty minutes sharing an example of a report card, which I don't have a copy of to show. It was a standard report card with an asterisk next to modified standards. Attached was a report showing more detailed information for each modified standard. It included an annual goal and a quarterly objective. There were narrative reports of what specific accommodations were made. The modified grading scale was shown. In this example they used a 1-4 system. The 1 represented where the student was right now. The 4 was the objective goal.

Takeaway: When creating a reporting system, start with the purpose and then work backwards. For exceptional learners, modify the expectations to make them attainable but report those modifications. Don't leave it up to the teacher to make arbitrary grade modifications.

I plan to blog more in detail about this when I can get a hold of some visuals to show you. It was a good session and a lot of food for thought.

Session 1: Changing the Odds for Student Success

Well, Option A ended up being two blocks away (I wasn't walking in the pouring rain), and Option B was packed. So I snuck into a session by Bryan Goodwin from McREL. The session was a review of this report. Here's a quick-ish summary. When I get around to reading the full report I'll give you more.

Main idea: Great schools have layers of support.

Things I found interesting, though not necessarily things I agree with. My quick thoughts are in italics:

The primary job of a principal is to raise the quality and reduce variability in the quality of teachers in his/her school.

The primary job of the district head is to raise the quality and reduce the variability in the quality of schools in his/her district.

I don't think you'd find people arguing about the quality issue, but you start talking about reducing variability and people start freaking out (sometimes legitimately, sometimes not) about racing to the middle. It's like watching The Incredibles. I actually agree with the variability issue. I'd take a group of reliably good teachers/lessons/schools/etc over the occasional chance at greatness. I have no idea what the majority would take.

Five qualities of Changing the Odds schools:
  1. Engage in collaborative goal setting
  2. Establish non-negotiable goals for achievement and instruction
  3. Ensure school board alignment and support
  4. Constantly monitor goals for achievement and instruction
  5. Use resources primarily in support of instruction and achievement goals
Probably nothing new here. There was a list of things that didn't matter as much, which is probably more interesting. I didn't have time to write it down though. 

He brought up district and school dashboards, which we definitely don't have. I'd be interested in seeing any schools or districts that really put out all their data for display. Dan Meyer had a post a while back on that but the links are busted now. 

Goodwin called low performing schools "Forrest Gump Schools" because you open them up and you never know what you're going to get. 

Goodwin also took time to point out the What Works series by Marzano. It doesn't work as a checklist. You can't put the 13 things on a list, check them off, and get great instruction. Great teachers know why they work and when to use them.

I found McREL to be more touchy feely (in a good way) than I expected. Goodwin spent a lot of time talking about "warm demanders," Dennis Littky and this school, the HighScope Perry study, creating literacy and imaginative play environments at home, and "personalized pathways that tap intrinsic motivation." It was a nice turn of events for those who argue that these kinds of research focused organizations see kids as just data, not people.

Takeaway: The key is both challenge and support. You need high expectations but you also need a support system to help get there.
(EDIT: I forgot to add this and I really liked it, quoted from Goodwin, "People ask us what innovation is most needed now? A: Applying what we know.")

Personal story: The state of California set a goal for all students in 8th grade to take Algebra. They "encouraged" this by docking points off the API scores of kids who took a general math class.1 Our school responded accordingly. But did we provide the supports? No. We responded by simply eliminating our general math class. High expectations are not enough. It seems obvious, but so often people, and I'm including federal, state, and local officials, and teachers, seem to think that just by raising the standards we'll automatically see better results.

1: A kid scoring proficient in General Math is equivalent in terms of API points to a kid scoring basic in Algebra.

Thursday, March 24, 2011

Blogging the ASCD Conference

On Saturday, Sunday, and Monday I'll be blogging the ASCD Annual Conference in San Francisco. Check back on this blog, and a few others, this weekend. I'll try to shoot out some quick posts after each session and a few longer ones later when I've had time to reflect and digest.  Saturday is assessment heavy but I promise to be more diverse on Sunday and Monday.

If you're going to be there, say hello.

I'm going mainly on session title and a few recommendations but I'm planning on switching up if it turns out they're selling something.

As of now, here's my Saturday. If you've got any inside info (good/bad) about the presenters or presentations let me know. The program booklet is online:

8:00 (wait...what?!?!? 8:00 in the morning???)

Option A:
1129 Conferring with Students: Practical Strategies that Close the Achievement Gap - Patricia Reynolds from the NYDOE.

Option B:
1102 Beyond Reteaching and Regrouping: Using Data to Dramatically Improve Instruction - Trent Kauffman, Education Direction


Made to Stick with Chip Heath. (Loved this book and also Switch.)

The over/under on how many times I make the joke, "I won't take notes on this, because I'll just remember it." = 6.5


Option A:
1222T Fair and Meaningful Grades for Exceptional Learners - Thomas Guskey

Option B:
1254 Doing Whatever it Takes: Barrington's Journey to Extraordinary - I really wanted to go to this one because I've been trying to get my school to put in a flexible time schedule, but I can't pass up on Guskey. If you're going, let me copy your notes.


Option A:
1302T Leading by Design: Leading Understanding by Design-Based Reform at the School Level - Grant Wiggins

If Wiggins ends up just reading from his book I'm going to sneak out to go to

1347T Boosting the Cognitive Complexity of Instructional Tasks and Assessments - Rebecca Stobaugh, Western Kentucky University


I'm pretty meh about this time slot. I'll take any recommendations.

Option A:
Finding and Keeping Great Teachers - Scott Herrman, Margaret Clauson

Option B:
Interventions to Improve Students' Cognitive Abilities - Rhoda Koenig

I'll finalize my Sunday and Monday schedules after I've had a chance to get the inside scoop from conference vets. 

Assuming wifi is working, you can follow me on twitter. I'll be using the #ASCD11 hashtag for updates.

Monday, March 21, 2011

Why Teachers Like Me Support Unions

"Hey Jason, you must be pretty interested in what's happening in Wisconsin."

"Yeah of course."

"Good that people are finally standing up to the unions."

"Yeah, it's......Wait......what?"

"The unions. You always seem pretty anti-union."

"Why's that?"

"Well you're always complaining about how they're getting in your way."


I can't deny that last statement. I do. I complain about my union regularly. I've fought with them, actively or passively, numerous times over my career.

But I support unions and I especially support my own union. As soon as I saw Stephen's post about edusolidarity I knew I'd do it.

I have this go-to phrase. I use it all the time when I'm making a decision. In fact, it overrides most other considerations.

"It'll help our students."

When I hear it, I can't say no.

Three years ago, we had an SDC position that was never filled. Instead of hiring a substitute, I agreed to roll the SDC class into my newcomers (non-English speakers) class. I did it because I thought it would help our students.

Last year, we had an open position in science. I alternated teaching every single section of science through the week while also creating lessons for the rotating subs. I was responsible for more than 300 students for a few months. I did it because I thought it would help our students.

This year, we had two hours a week added to our schedule. No warning. No increase in pay or other types of compensation. I went along with it. I did it because I thought it would help our students.

And this is where my union and I fight. We disagree because I am only thinking about my students. The union? The union is thinking about me. They're protecting me from me. I can't say no. I keep pushing and pushing. My union pushes back. They tell me that the district could fill that open spot. We shouldn't have our schedule arbitrarily lengthened without something in return. They tell me to hold firm and the school will do what needs to be done.

So we fight. And I complain.

And my friends and family only hear the "unions are bad" narrative.

But without my union I would not be here. Without my union I would have burned out long ago. I see 150 kids without a teacher and I don't think, I just act.  The union kept pressure on, and the next year, we found an SDC teacher. The union didn't let my school forget about the open science position and so we had a new teacher by January. The union helped us get some of that added time back for staff collaboration.

My union thinks about me so I can think about my kids. I support my union because I can't support myself.


Friday, March 11, 2011

One Test to Rule Them All

Nearly 60 years ago, an anesthesiologist named Dr. Virginia Apgar devised a simple test that all parents are familiar with. The Apgar test looks at five criteria to quickly determine the health of a newborn. A healthy newborn will generally score between 7 and 10.

The story of Dr. Apgar's test is an important lesson.

On the face of it, the Apgar test is ludicrously simplistic. In theory, a baby could have no pulse but still score a perfectly healthy 8.
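The arithmetic behind that oddity is simple: five criteria, each rated 0-2, summed into one score. A toy sketch (the criterion names follow the standard test; this is just illustration, obviously not medical software):

```python
# The Apgar score sums five criteria, each rated 0, 1, or 2.
CRITERIA = ["appearance", "pulse", "grimace", "activity", "respiration"]

def apgar(ratings):
    """ratings: dict mapping each criterion to a 0-2 score."""
    assert set(ratings) == set(CRITERIA)
    return sum(ratings.values())

# A baby scoring perfectly on everything except pulse still gets an 8.
no_pulse = {c: 2 for c in CRITERIA}
no_pulse["pulse"] = 0
print(apgar(no_pulse))  # 8
```

Losing one criterion entirely only costs 2 of the 10 points, which is exactly how a pulseless baby can land in the "healthy" 7-10 range on paper.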

But the Apgar test is good enough. And more importantly, it's better than nothing. Because that's what doctors used to have.

In the book Better, my man crush Atul Gawande states, "the score turned an intangible and impressionistic concept—the condition of new babies—into numbers that people could collect and compare (p. 187)."

Basically, doctors now could tinker and gather the results to see what worked. There's a lovely passage about how doctors are supposed to be "evidence-based" but in obstetrics the doctors just tried stuff out and looked to see if results improved.

They had a number and if it went up, they knew they probably did something good. If it went down, then back to the drawing board. Before the Apgar test, 1 in 30 newborns died at birth. Now, it's 1 in 500 (p. 187).


Gawande also points out that the number of Cesareans has increased dramatically, in part because of that all-important Apgar score. He uses the phrase "tyranny of the score" here, which I love. Doctors, being the type A overachievers that they are, seek to maximize their score as much as possible. Sometimes they do so at the expense of other considerations. Again, quoting, "While we rate the newborn child's health, the mother's pain and blood loss and length of recovery seem to count for little. We have no score for how the mother does....(p. 198)" Being the good teacher doctor that he is, he goes on to suggest multiple measures.

So what do I take from all this? I used to spend hours creating and revising the perfect assessment. I'd stress about word choice. I'd wonder if I was giving too much away or not enough info. I'd try for that perfect balance of academic vocab and accessible language.  I'd try to cram in 60 questions in 50 minutes or have them write a full lab report in complete silence.

Except that there aren't any perfect assessments. What's perfect for Student A is highly flawed for Student B. It does not exist. But there are certainly better and worse assessments.  So like the Apgar test, I try to make my assessments good enough. Multiple "pretty good" assessments give a more complete picture than any single "great" one possibly can.1 And NEVER EVER let any one score dictate everything.

1: Falling in love with a single assessment, lesson, lab, demo, whatever... is one of the cardinal sins that very good teachers make. I know I can become enamored with a lab and stop being critical of it and working to improve. I start to think it does more than it actually does. And yes, I try to work to keep my SBG love in check.

PS - I've gotten more out of Atul Gawande's books than out of any of the other "not in education" gurus we end up reading, like Dan Pink, Malcolm Gladwell, or Jim Collins. I keep meaning to blog about the chapter he wrote on cystic fibrosis care. Best chapter ever. Go to the library this weekend and read it. The chapter is titled The Bell Curve and it will burrow deep into your brain.