Friday, November 11, 2011

One minute survey: Grade This!

Out of 100 points, what score would you give this unit test?

(Edit: In hindsight I designed this survey poorly. Most of us assign points ahead of time for each question and then tally it up. I've changed the setup to reflect that so you only see the content and format first and then the results later. I'll mark on the spreadsheet where I made the change.)

You can use any criteria or scoring system you like.

The first three sections are testing content and/or skills that have been directly taught and are similar to something a student could find in notes, exercises in the book, homework, and classwork.

The essay is asking for a novel application of the content or skills that were directly taught. The question/answer would not be found in the textbook or notes but could be answered given sufficient mastery of the course material.

I realize this isn't a math-friendly format, but if you're a math teacher, pretend sections 1 and 2 are the easy questions on a test and section 3 holds the harder ones. For section 4, think "problem," not "exercise."


Results:



I've seen something like this in books and various conference sessions and have been wanting to try it. Don't leave any comments about your score just yet. I'll post results in a few weeks and hope to include it in a presentation for my staff. I have no idea if you can see the form at the bottom in a feed reader or on a mobile device. Sorry in advance. Thanks for the help.
 

Sunday, November 6, 2011

Layering

Overall the class goes something like this:

  1. Pose a problem/Show something/Do something
  2. Do some experiments and figure stuff out
  3. Name/Practice/Refine/Elaborate 
  4. Reflect
  5. Do something with what we've figured out
  6. Break the model and go back to step 1 (ideally). Less ideally but more commonly: start over with something tangentially related because it's in the state standards and I need to force a connection.
There are some mini-cycles (epicycles?) embedded when we get stumped or someone asks a really good question but that's basically how things go.

So what's the layering part? Science teachers like to have these process versus content arguments. I know this happens in other subjects too, but we really enjoy a good argument about the value of scientific thinking and skills versus content knowledge. My own thoughts on this argument are tangled but I will tell you that if you're planning on only teaching process or only teaching content, you're actually teaching neither. Now I'm going to appear to contradict myself and add that if you're teaching both at the same time, you're not going to be satisfied with your results for either one.

The layers:

(Layer 1) In step 2 of the list above, students are playing the whole game. They're working with multiple factors. They're trying to decide what's important and what's not and making on the spot choices. They're messing up and messing up again. We're developing both content and process at the same time.

(Layer 2) Step 3, we pull out the content and we address it separately. What do we call what we just figured out? Here's some vocab. You just figured out how to calculate speed. Let's practice that now.

(Layer 3) Step 4, we go back and reflect on how we developed our content knowledge. How did we solve the problem? What tools or skills did we develop? If it's something we're going to use again, we name it so we can refer to it later.1

(Layer 4) Step 5, back to playing the whole game. We've hopefully developed our content knowledge and process skills to a point where we can use them for something half a step higher. For example, constant velocity collisions instead of just determining speed.2 The level of difficulty is crucial here. I tend to go too hard and then they're back to trying to figure out new content/tools rather than having an opportunity to put together what we've developed.

I have no idea if layering is the right term, but it's how I picture it in my head. Actually I picture it as a stacked bar chart like this:

layering

Sometimes, I'm embarrassed by how nerdy my brain is. My nerdy brain also needs to reassure you that those percentages are just approximations.

In written form it appears cleaner than it really is. There's a lot of overlap. It would be more accurate to say each layer has a different emphasis rather than truly divorcing content from process but it helps me to think in those terms.

I don't have any research to back any of this up. What I do know is that I've had the most success when we can develop process and content at the same time, separate them out to work on them individually, and then put them back together.






1: This layer is a glaring weakness for me. I never put as much time into developing this step as I'd like and I haven't yet figured out any solid moves beyond standard reflection types of things. 


2: My credential is in physics so I feel I'm much better at developing these culminating activities for the physics portion. For chemistry it's never as satisfying. Mainly, "predict what's going to happen," or "why doesn't this behave like our model predicts," kinds of stuff. Note to self: Hang out with more chemistry teachers.




For the non-bloggers: I wanted to write a post about the big picture of what/when/how assessment occurs in my classroom. In doing so, I realized I needed context first so I wrote The Cycle. Now, 4 revisions later, I realize I need more context and so you get another post. It's like I mentioned in the last post about Mylene going down the rabbit hole. I would consider myself a reflective person, but blogging has forced me to take those reflections and make them concrete and semi-coherent. If you don't blog already, do so. Even if you never plan on anyone reading it. 


Tuesday, November 1, 2011

Deserves More Traffic

Here's November's edition of Deserves More Traffic. My inclusion criteria are that 1) your blog is awesome and 2) you have less than half the Google Reader subscribers that I do.



Teachers who teach teachers


John Golden and David Coffey are colleagues at Grand Valley State University and teach future math teachers. John's got a lot of cool stuff going on with games. David had a recent series called Now What on extending learning through student generated questions.

Brian Frank teaches future science teachers at Middle Tennessee State University. The stuff on misconceptions at his old blogger site is brilliant.

The loneliest Google Reader folder belongs to

Mylene, who teaches electronics at a technical school in Eastern Canada. That is pretty cool all by itself. Befitting an electronics teacher, she is constantly tinkering with how she does things. Her posts on Reading Comprehension were particularly compelling. It wasn't so much what she was researching or trying out that I found most interesting. It was that examining this narrow slice led her down the rabbit hole of questioning a bunch of other areas of her teaching. Every (good) teacher goes through this multiple times and it was fascinating to watch it happen in real time.


Finally, a couple of new teachers.

Daniel Schneider blogs at Mathy McMatherson and Molly Kate blogs at Mathemagical Molly. Both are high school math teachers, although in very different working environments. I lurve me some new teachers. New teachers' blogs spill over with angst and frustration and hope and wonder and I love them so. The downside is I get that "parent whose daughter is going to be out past midnight for the first time" feeling when a new teacher goes more than a couple of weeks without posting.


Hi sweetie, just checking in. 


Are you OK? I'm here just in case. 


Why aren't you responding to my texts? Hello??? 


Text me back right now so I know you're not lying in a ditch somewhere!!!




Ok. Go visit their blogs and learn something new. I'd say add them to Google Reader but we're fighting right now and I'm not ready to make up quite yet.

Thursday, October 20, 2011

Helping Students Give Feedback

Recently, Frank posted a picture of how his students correct their own tests and John posted a sample of the feedback his own students give themselves. This provoked a lot of discussion among my online colleagues, and one thing that came up was how difficult it is to get students to leave good feedback.

I noted in the comments of John's post that I found it really interesting how his students were mainly leaving reminder notes and questions for him to answer. In this case, I'm talking about feedback in a traditional teacher-directed sense.

This is something I've really been focusing on this year. It's definitely been rough. I attribute this to the lack of quality feedback students normally get from us but that's another story. I wish I was friends with more English teachers because this is their bread and butter.

The whole thing is pretty standard but as usual, I buried my big insight. It's Key Point #2 if you want to skip ahead.

First, I narrowed the scope. I tried to teach them to leave feedback for only one type of question. In the topic on Atoms, we focused on the explanation questions—how you can explain different phenomena through the motion of atoms.

Second, I went through the process of how I'd evaluate those question types and wrote out a flow chart. It looks like this but you can certainly just do it as a series of questions or a checklist.



feedbackflowchart



I drew it up on the board but you get it nice and typed. For the science folks, at the time we used the words atoms, molecules, and particles interchangeably.

Students are supposed to go through the flow chart and then write their feedback based on the chart. I gave them a sentence frame where each step they pass is a positive comment and when they hit a "no" they step back and write the improvement step.

For example, if you got to the second "no" you'd write something like, "You wrote an explanation for why water boils when heated. Next time your answer needs to mention atoms, molecules, or particles." or "You wrote an explanation for why water boils when heated and included atoms. Next time your answer should include how atoms move."
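The flow chart really just boils down to a short series of yes/no checks with a sentence frame attached to each. Here's a minimal sketch of that logic in Python. The function, its inputs, and the exact wording are my paraphrase of the chart, purely for illustration:

    # A rough sketch of the flow chart as code, not an actual classroom tool.
    # The checks and sentence frames are paraphrased from the chart above.
    def explanation_feedback(is_explanation, mentions_particles,
                             describes_motion, connects_back,
                             topic="why water boils when heated"):
        """Praise each check the answer passes; stop at the first 'no' with a fix."""
        checks = [
            (is_explanation,
             f"You wrote an explanation for {topic}.",
             "Next time, make sure your answer actually explains what happened."),
            (mentions_particles,
             "You included atoms, molecules, or particles.",
             "Next time your answer needs to mention atoms, molecules, or particles."),
            (describes_motion,
             "You described how the atoms move.",
             "Next time your answer should include how the atoms move."),
            (connects_back,
             "You connected the motion of the molecules back to the observation.",
             "Next time, connect the motion of the molecules back to what you observed or predicted."),
        ]
        comments = []
        for passed, praise, fix in checks:
            comments.append(praise if passed else fix)
            if not passed:
                break
        return " ".join(comments)

    # Matches the second-"no" example above:
    print(explanation_feedback(True, False, False, False))

The point of writing it this way is that the praise accumulates until the first "no," which is exactly how the sentence frame works.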

After that it was pretty standard. We used some generic sample answers to try as a class. Then they wrote peer/self feedback on some of their previous answers. I had them predict some things they hadn't seen yet, like sticking a balloon in a freezer, and they traded and wrote peer feedback on those as well. Each time we wrote a set of explanation questions, we'd do this for at least one of the questions.

After a few tries at this I added another step in the flow chart at the end, "Does the explanation connect the motion of the molecules back to the observation or prediction?" This was much more difficult for kids to get than the first three but it is also a much harder skill.


Two key points:

  1. My real goal here is that they do this enough times and when they're writing their own explanations they have the flow chart running through their head. Is this an explanation? Did I talk about atoms? Did I talk about how atoms move? Did I explain how that's related to what I observed/predicted?
  2. What's really important is what's missing. I didn't have them leave feedback for correctness. This is where kids usually get hung up on feedback. If I don't know the answer myself, how can I give that kind of feedback? I can't tell where you went wrong if I don't know what the right move is. What I want them to look at is the quality of the explanation itself. Every kid can look at a written explanation and decide if it has certain qualities. It is important for a student to understand that he or she can be factually incorrect but can still give a quality explanation and vice versa. These are two different skills and we are going to improve both of these. This is the part I feel like I got right. Student feedback can't depend on the level of content knowledge. 



I'm giving this a tentative endorsement. The higher level feedback is still up to me but this has definitely helped move the lower and middle level responses up a notch.


One teacher-bonus I noticed from doing this is how dumb it is when I ask a kid to "check your answers." Yes, sometimes there are careless errors they can catch. Mostly, if they didn't know it when they answered the first time, they still don't know it. (Actual quote: "I checked it. I still don't get it.") I really need to do a better job of teaching my students different methods for verifying an answer.

Tuesday, October 18, 2011

The Best $5 You'll Spend This Week

John Spencer is having Reader Appreciation Week. You can get each of his books for $1 on Kindle. He's changed positions recently but I still think of him as a sixth grade teacher.

There's something for everyone. He's got a book with help for new teachers (Sustainable Start), YA lit (Drawn into Danger), edtech satire (Pencil Me In), and general thoughts on education (Sages and Lunatics & Teaching Unmasked).

For full disclosure, I consider him a friend in the "I've never met him face to face but interact with him online" category that most of my teacher-friends now fall into. However, he didn't ask me to do this and, in fact, wouldn't even consider asking.

Note that you don't actually have to own a Kindle to read them. You can get the apps for your mobile or download the reader software for your desktop or laptop.


(PS - I didn't forget about the assessment overview I promised in The Cycle but I have no idea how I'm going to translate this crazy bubble-flow-mind-timeline chart thing I drew into a blog post)

Friday, October 14, 2011

Test Deconstruction

I used to pre-test my students. There were two main purposes. I wanted to use it as a diagnostic to see what they already knew and I wanted to focus them on what was coming.

As a diagnostic, the test was pretty useless. All my students come in with essentially zero content knowledge of what we're going to learn. I mean, a few might be able to shout out a half-remembered vocabulary word, but I haven't had any students that can go beyond that. I get the information I really need from whatever I use to launch a topic. The ball and hoop demo for when we learn about atoms or just having them predict what will sink or float when we start on density/buoyancy.

In terms of focus, well, that didn't work so great either. They'd fail their way through the pre-test and since they had zero pre-exposure, none of what they saw on the test would stick. They didn't have anything to anchor it with.

Now I go with test deconstruction. In the cycle, this happens after they've run their own experiment and teased out the big ideas.

I give them a copy of a test I'm planning to give them. Same format. The questions aren't identical but they're testing the same standards. In their notebooks, they draw four columns. The first column is just the letter of the standard, which is listed next to the test question.

The second column is, "What do I need to do?" They should write what is actually required in the question. Do they need to label a diagram? Explain something? Draw a picture? Fill in the blank? They have a copy of Costa's questions in their notebooks to help them along. I picked up this page either at an AVID conference or somewhere online. Box.net doesn't like the font I used but you get the idea.



The third column is, "What do I need to know?" and the fourth is Key Vocabulary.

Obviously identifying the content knowledge and vocabulary required is an important reason for doing this, but there are two other things I'm trying to accomplish.

Number one, I want them to understand how fundamentally different this question is:

atomslg4.pages

from:

A. Draw and label an atom.

or:

B. Select from the word bank and label the diagram.

or:

C: List the subatomic particles of an atom.

In the first example they'll need to know the parts of an atom, where they are located, and what those little symbols might mean. They're not asked to do the actual drawing from memory as in example A (this comes later on when we start with the periodic table). In B, they're only asked to recognize, not memorize, the names and know the locations. In example C, they're only asked to remember the names but not the locations or know what those symbols are. However, they are required to know what "subatomic" means. These are different questions that will require different levels of knowledge and different skills. More importantly, they need to prepare for these differently. I can't just tell a failing kid to "study." They don't know how to study. It's something that needs to be taught.

Number two, student friendly learning goals! One of my rules for teaching is don't do the thinking for the students. If your learning goals are in teacher language ("Identify and label the subatomic particles of an atom") and you translate that for them, you're doing the thinking for them. I want my kids to know what "identify", "label" and "subatomic" mean. Why in the world would I translate that for them? The columns translate easily into a learning goal for each standard and now when I say, "Today we're going to work on identifying and labeling the subatomic particles of an atom", they've already got something written up to decode what that all means.1


They write them up in their science notebook next to that topic's table of contents. Whenever we add something to the TOC we can refer to the learning goals at the same time.


1: Now is not the time, but at some point you'll need to remind me to preach for judicious use of the learning goal. 

Friday, September 23, 2011

Connections addendums

In the original post, I forgot to include that I'm going to add pictures along with the storylines when it goes on the board. I'm a big fan of visual anchors. For this one I'm including a picture of the ball and hoop, a picture of the actual superhero Flash (probably not the shirtless Ryan Reynolds one, wasn't he the Green Lantern anyway? I'm confused.), and a photo I have of a student running her experiment.

After posting to twitter, I quickly got these two tweets:

Twitter / @Mr_Martenis: @jybuell Connections are n ...

Yeah. That's the goal. My original plan was to have my students fill these in as we go but then I realized I have no idea what's coming up so I couldn't really make the actual graphic until the end. I certainly know what the goal is but I only vaguely know how we're going to get there. This time I just walked them through the first couple of blanks and then left them to flip through their notebooks. It went fine but it would probably work better if I put together a graphic organizer they can fill in along the way and then translate over.

In a similar vein:

Twitter / @emwdx: @jybuell very cool - poss. ...

A solid idea but I've got a completely analog class. I can get my hands on a projector maybe once every two weeks and there aren't really computers available for student use. Also, I like having it up all the time and being able to constantly point back to how we got to where we are. I do think this would be cool though and if you do it, let me know.

Connections

I stumbled onto this post about Learning Journeys the other day and it reminded me I totally forgot to blog about something I'm trying this year. First, a visual.

connections


I'll post the link at the bottom of the page in case you're having trouble reading. What is it? It's a storyline of the things we've been learning. You start in the upper left and follow the arrows. Here's a zoomed in version of the bottom left corner.


connections2
They're supposed to fill in each blank line with a sentence, and the boxes should have drawings of the particles.

Here's the part I'm proud of. Do you see that half-line on the middle of the left side of the page? We are going to make another one in a few weeks and connect it to this one. My goal is to connect the entire year together. If I was more organized I could have plotted it all ahead of time and figured out how to get multiple connections (and back to the beginning) but that will just have to wait.

We'll have an ongoing record of how we got to where we are and I'll be able to fill up one of those blank bulletin board spaces that my principal is always yelling at me for. Win win!




Here's a Keynote '08 version (zipped because I can't figure out how to get Dropbox to recognize Keynote files) and a pdf. Don't forget to print it without borders.

Edit: Additional comments in next post.

Wednesday, September 21, 2011

The Cycle

Subtitle: The one where I defend cookbook labs.

Going back over my old posts I realized I don't really talk about what my class looks like and where this SBG stuff fits in. My teaching is pretty generic and probably looks like any science class in your school.

I'll do this in two parts. In the first part I'll describe my standard teaching cycle. In the second I'll describe where all these assessment/testing/checklists/etc fit in.


Right now we're in the middle of our first topic on Atoms and so I'll describe that one.


Most of the units start off the same. We're going to try to derive the big idea. For this one, I want the kids to figure out that atoms move around. When they're heated they move faster and spread apart.

I start with the ball and ring demo. Unlike Maestro Hewitt, I start with the ball. I take predictions, do the demo, and then have the kids brainstorm different explanations for why the ball gets bigger when it's heated. Every year the same three explanations are the most popular. We record the less popular ones for later. Explanation 1 is what you guys and gals will recognize as phlogiston. Heat fills up the ball and makes it expand. This year, we called it the "water balloon model." Explanation 2 is some variation on the idea that the ball is made of atoms, and when the atoms are heated they expand. It's basically the water balloon model for individual atoms. Last year a student referred to it as the Hulk model after the Incredible Hulk and I stole that name for this year. Explanation 3 we named the Flash model (in honor of the guy that's really fast) after the previous student's inspired Hulk model christening. In case you're curious, phlogiston is by far the most popular explanation.

I realize I'm stealing a bit of the thinking by keeping the names this year but I love them so. Oh and I always ask kids to explain what atoms are (or molecules, particles, whatever 5th grade vocabulary word they throw out without really understanding). I've yet to have a student define it beyond "little thingies" so usually we just go with "little thingies everything is made of" until we can drill down on a definition.

After that I toss a bunch of cookbook labs their way. Yes, I said it. Cookbook labs. I heart me some cookbook labs. I'm a fan of analysis. Some teachers are big on the front end of designing an experiment. If I had to choose, I'd spend a day planning and executing an experiment and four days discussing the results instead of vice versa.

So I tell them what we're going to do and they need to be able to explain what the various models would predict would happen. This is rocky and takes a ton of modeling on my part plus I give them sentence frames to start off the year and then fade them out.

The first two things we did were to get the mass of the ball before/after heating and to do the same for some ice water that we warmed up. We compiled the class data, pushed all the desks to the side, circled (actually ovaled) the chairs, got out our big whiteboards, and shared what we thought was going on and which models were supported so far.

Next we dropped dye in hot and cold water and dissolved sugar in hot and cold water and then got in the circle again. This was Labor Day week so it took all four days to get all this done (50 minute periods).

On Monday, after some whole class discussion recapping the evidence we've gathered so far, students spent the period designing an experiment to either test one of the three models or one of their own that we didn't address from way back in the original brainstorm. Tuesday they ran their experiment. In the beginning of the year these are usually pretty derivative of one of the four cookbook labs we did. I'm ok with that. There was a lot of dissolving candy, heating and massing of various metals, and switching in different liquids for the dye/sugar labs.

On Wednesday, we circled up and they shared their experiments, results, and what model they think is best supported by the evidence.

From there, I get into the talking/notes/demos/PhET stage where they're getting actual names for things and deliberate practice. We refine the model a few times along the way but the bulk of the work has been done.

And that's pretty much how all the topics go. We figure out the major ideas through cookbook labs. They design their own experiment to test those ideas. We discuss. I drop vocab and some practice at the end. Pretty standard stuff.

Some topics are more lab-based. I do the density/buoyancy topic almost completely through experimentation and just give them the words at the end. Same for the physics stuff. On the other hand, my periodic table and astronomy topics are pretty weak. It doesn't get too far beyond "look at this and tell me if you notice any patterns" and then a whole bunch of me telling them stuff (while I'm not necessarily standing in front of the class lecturing, it amounts to the same).

Other things to know:

For whiteboards they always need a graph and I insist that the graph has no numbers. I just want them to label the axes and draw a line. I'm a big fan of this. They're too used to mindlessly plotting points and connecting dots and not focusing on what relationship the graph is showing.

I don't know how John gets that great discussion going. Mine are basically just report outs and I force some responses by asking questions of other students (Do your results agree or disagree with what she just said?). Most of my students are English learners so I give them what is essentially a fill in the blank paragraph ahead of time. They can't read it out loud though. It's mainly to help them sequence their thoughts. I also give them the questions I will ask ahead of time so they can prepare. I use my index cards to randomly call on one person in the group to give the main presentation and then other students to answer the questions. Having students prepare together in groups and then randomly call on one of them is a consistent feature of my class.

I used to be pretty crazy about formal lab proposals and writeups. Now I play it fast and loose. I really don't need to see a step by step procedure with every step starting with a verb. Yeah, I was that guy. But I'm really big on students being able to tell me what the various models predict is going to happen. I don't want what they think will happen. I really try to push thinking in terms of what the explanation you built says and then interpreting your results based on those predictions. If there's anything you'd say I'm pretty strict about, it's that. Yes, I realize that's standard hypothesis testing. But I stopped saying "write a hypothesis" because whenever I did a kid would automatically assume I wanted him to guess about what was going to happen and then the experiment was to test if he/she was right or wrong. I die a little inside every time a student writes, "My results were XYZ so that means I was right."


Hopefully you've got a sense for my class. I've got to explain test deconstruction first, but in the post after that I'll point out where the informal/formal assessments fit into the cycle.

Addendum: I only defend cookbook labs as a way to quickly generate data to analyze. I am against "Ok, I just taught you that molecules spread out when objects are heated, now go drop this dye in water and confirm what I just said." That's where cookbook labs get a bad name.

Thursday, September 8, 2011

Information about the California Standards Test Part 2b

Part 1
Part 2

I haven't said anything about the students. This belongs in bold:

Do not make any placement decisions solely based on test scores. 

I know you do it. We do too. In fact, denying students an elective based on their state test scores is one of the joys of being in Program Improvement. But the state of California, perhaps in word but not deed, agrees. From the Post-Test Guide, and I quote:
Any comparison of groups between years should not be used for diagnostic, placement, or promotion or retention purposes. Decisions about promotion, retention, placement, or eligibility for special programs may use or include STAR Program results only in conjunction with multiple other measures including, but not limited to, locally administered tests, teacher recommendations, and grades. (page 13)
and then on page 14 they give almost the exact same quote directed towards individual students in a bolded and boxed callout:
Decisions about promotion, retention, placement, or eligibility for special programs may use or include CST or CMA results only in conjunction with multiple other measures including, but not limited to, locally administered tests, teacher recommendations, and grades.
WAIT!!! One more because this bears repeating (also on page 13)
While there may be a valid comparison to be made between students within a grade and content area, it is not valid to subtract a student’s or class’s scale score received one year in a given content area from the scale score received the previous year in the same content area in order to show growth. While the scale scores may look the same, they are independently scaled so that differences for the same students across years cannot be calculated using basic subtraction. 
Summary: You can't use test scores as the sole method for placement AND you can't use them to determine growth year to year. Wait... but what if a student scores Proficient one year and Basic the next? Shouldn't we put him in a second math class and catch him up? Noooooooooo. Here's the easiest example why.

2nd grade - 62%
3rd grade - 65%
4th grade - 68%
5th grade - 60%
6th grade - 52%

What's this? This is the percent of students in California proficient in math in 2010.

Either California sticks all of its best teachers in 4th grade (unlikely), approaching puberty makes students universally worse at standardized tests (plausible), or the test gets harder (that's what this study argues).

Here's the percentage of kids scoring Advanced:

2nd grade - 36%
3rd grade - 38%
4th grade - 42%
5th grade - 29%
6th grade - 23%

(7th and 8th grade we start tracking the kids more. At many schools, advanced 7th graders end up in Algebra and then Geometry in 8th. Some kids even take Algebra 2. A direct comparison is harder).

So a student drops from Proficient to Basic. Why? Hard to say. The test looks like it certainly got harder. With standard error and such he could have really been Basic last year. That's not to mention the inanity of deciding an entire year of coursework based on one 60ish question multiple choice test taken 80% of the way into the year. The test wasn't designed to make those kinds of inferences and you're left with too much doubt about the cause of the drop in score (or whether there even truly was a drop).

I didn't include anything about useful information about students because there isn't any. Well, there almost isn't any. You could probably tease out some information in conjunction with local assessments but really, what's the point? Right now I can tell you that Alondra got 9/13 correct in the "Functions and Rational Expressions" strand of her 7th grade math test. In order to figure out exactly what gaps she had, I need to give her a local diagnostic anyway. And that's really the whole point. If Alondra got an F in the class, I'm more concerned about that than her scoring Proficient. But for some reason, we think that because she scored Proficient on a single day of the year, that's more important than the other 181 days she was floundering. We let a single test override an entire year.

The hours that your school is going to spend dealing with this stuff could be spent doing something useful. You know, like standards-based grading.

Yeah I know. I came back to it. But really, isn't the complete and utter nonsense that is our current grading system a major reason for needing a standardized test in the first place? Right now we look and say, "Oh noes! Jason is basic this year! We've got to get him in extra math!" We have to do that because a B in Mr. X's class is not the same thing as a B in Mrs. Y's class. We don't trust that Jason getting a B in math means anything at all. But really, if we trusted the grades we could say, wait, Jason learned X, Y, and Z. He's fine. Or he still needs to learn Z but that's not going to require us to take him out of art or music the entire year.

If there's a moral here it's that we try to make the CST more than it is. The state of California does an awful job of communicating what the state test really does. I don't really think it's in the state's best interest to do a good job though. If more parents/students really knew what decisions were improperly being made off these scores, the state would have a lot of 'splainin' to do. I also think that, from our end, because we have to devote so much time and energy towards it, we feel the need to justify it.

I've gotten some valuable information out of the test. I've revamped a couple of units. I've gone and visited a few really good science teachers and programs. That's fine. But you're not going to get much more than that. Invest your time and energy into something more meaningful. Oh and never place kids solely based on test scores. Just don't.



Full disclosure: I'm not against standardized testing. I find it useful to have a third party to calibrate my class against. I live in absolute terror that I'm teaching down to my kids. It helps me set a baseline. What I am against is the high stakes part. Even disregarding the punitive aspects of the test, the high stakes nature (and subsequent fear of cheating) has caused California to hide any useful information we might get out of these things that take up so much of our time.

Wednesday, September 7, 2011

Information about the California Standards Test Part 2

Part 1

(You have my permission to skip this post. It's not all that interesting unless you're curious about the lengths we have to go to in order to extract any useful information out of the CSTs)

In the previous post I wrote about the most common ways I see our state test scores interpreted. You can't make a straight comparison of API year to year and you can't use it to track student growth from year to year. The quote from page 6 of the technical guide:
When comparing scale score results from the CSTs, the reviewer is limited to comparing results only within the same content area and grade.
The hard part is they're not *really* designed as anything more than a school report card. (Poorly designed, that is)

In this post we're going to see the two pieces of useful information I've been able to extract from the CSTs. Prepare to be underwhelmed!

You can't compare across grade or content, but you can compare districts/schools/teachers to each other as long as it's the same grade/content/year. Basically, you're down to comparing teachers who teach the same content at your own school and comparing departments in different schools.

We don't get item analysis or even standard analysis. The smallest grain we get is by strand. For science, there are six. If I'm reading the Algebra test right, you get a whopping four strands. You'd think you could compare how your students did year to year by strand but, alas, California doesn't norm the strands either, so you have no way of knowing if your students improved or if the strand just got easier. You can compare the percentage of your students who were proficient in that year's strand to the mean California percentage. I've found it moderately useful to look at the difference between the two and look for big gaps. The problem, of course, is that California won't release enough detailed strand information for you to really tell what's going on. So my kids perform poorly in "Number Properties" or "World History and Geography." Lots of help there. You might as well tell me to "teach better" and expect results. Oh wait. That's exactly what happens.
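The gap-hunting itself is just one subtraction per strand. A minimal sketch in Python, with invented strand names and percentages, sorted so the biggest negative gaps float to the top:

    # Percent proficient by strand: my students vs. the California mean.
    # Strand names and all numbers here are invented for illustration.
    mine  = {"Motion": 61, "Reactions": 44, "Structure of Matter": 52,
             "Solar System": 39, "Earth Science": 57, "Investigation": 55}
    state = {"Motion": 57, "Reactions": 49, "Structure of Matter": 50,
             "Solar System": 53, "Earth Science": 56, "Investigation": 54}

    gaps = {strand: mine[strand] - state[strand] for strand in mine}
    for strand, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
        print(f"{strand:<20} {gap:+d}")   # big negative gaps = start a conversation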

I have, however, gotten good results from comparing strand information between teachers at the same school (or district if you're lucky). The key to this one is acknowledging that every teacher's class makeup is different. At my school, we send certain types of kids to certain teachers. Even if your school does scheduling randomly, chances are in any one year those classes aren't going to be balanced between teachers. If you acknowledge that upfront, you're more likely to get honest conversation rather than people being defensive about their test scores.
cst2010 (version 1).xls

Here's what ours looked like. We mostly tracked each other. However, the blue teacher seemed to teach the Solar System section better. I'd probably also say the blue teacher performed a bit worse on the Chemical Reactions section. So that's a good start to a conversation. "Hey Blue Teacher, take us through your Solar System unit." This is pretty fast and worth it.

The other useful information I've gotten is from comparing schools. Why compare schools? I visit a couple of schools each year to go observe. An easy place to start is with those schools with excellent science scores. This is a tricky one because demographics are so huge here. I've got three years of test scores for the other middle schools in San Jose, and the percentage of students on free and reduced lunch and the percentage of students who are English learners both correlate at about -0.9 with every 8th grade test score. A correlation of ±1.0 would be a perfect line and 0 a random scattering, so -0.9 is really, really close to a perfect line (sloping downward, in this case).
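If you want to run the same check on your own district, the correlation is a one-liner in a spreadsheet or a few lines of Python. A minimal sketch, one pair of numbers per school; every number below is invented:

    # Pearson correlation between % free/reduced lunch and % proficient in
    # 8th grade science, one pair per school. All numbers are invented.
    import statistics

    def pearson(xs, ys):
        mx, my = statistics.mean(xs), statistics.mean(ys)
        sx, sy = statistics.stdev(xs), statistics.stdev(ys)
        n = len(xs)
        return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)

    pct_frl = [95, 82, 60, 45, 31, 15]   # % free and reduced lunch
    science = [36, 45, 55, 64, 70, 82]   # % proficient in science

    print(round(pearson(pct_frl, science), 2))   # prints a value near -1.0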

I've gotten some good information out of graphing that relationship (science score vs. free and reduced and vs. %ELL) and looking at who stuck out. I visited 2 of those schools and exchanged a few emails with another. Unfortunately, it takes a while to gather that information and it's not all in the same place. You can cheat a bit by going to the CDE website and looking at what schools are listed as the 100 schools similar to yours. If nothing else, it's pretty interesting what the CDE considers "similar." Spoiler alert: They don't seem very similar to me at all.

A quicker way is to just gather a few of the other subject area tests and compare them. Again, you're just looking for a break in a trend. In 8th grade, you can take the History, Science, and ELA scores. Math is trickier because some kids take Alg, some take Gen Math, and some take Geometry (and a very few take Alg II).1


cst2010 (version 1).xls

The red line shows the state averages. The yellow line shows the county averages. You can see they track each other almost perfectly. Light blue is my school. Nearly every school I've looked at is reasonably close to that nice straight line. Yeah, we're a little better in science than expected, but before I start bragging, check out the purple line. They kick butt in science (also shown when I graph vs. % free and reduced). The light green school? If I'm a social studies teacher I'm going to go visit that school. As a side note, that light green school actually has a population that's 100% on free and reduced lunch and close to that for % ELL. Every other score across the entire school lines up along expected values. If I'm the principal of that school, I'm going into the history classes, figuring out what the heck they're doing, and doing whatever I can to spread that. If I were a history teacher at a different school, I'd spend some serious time observing those classes.


(wait! This got too long. I'm going to split this up and talk about student placement in post 2b)


1: If you wanted to, you could do what California assumes and bump the Gen Math kids down a level (only count Advanced as Proficient) and then maybe take the Geo kids and bump them up a level. You might get a good approximation. 

Sunday, August 21, 2011

Thoughts on EdCampSFBay

I went to my first EdCamp yesterday. Here's my review. (Note: Re-reading this, it sounds overall negative. It's not. It was a mostly positive experience. Just keep that in mind)

The Good:
In a Shifted Learning podcast, Jen Orr spoke of the ISTE conferences as foremost a place to connect with passionate people. I'd say the same thing about EdCamp.

I stood around at lunch talking with Paul Oh and Erin Wilkey Oh from the National Writing Project. We didn't talk about the NWP at all, but as soon as I got home I started browsing the website, because if thoughtful people like Paul and Erin were there, I want in.

I spent most of the day hanging with Frank Lee and his colleague Karen, who were up from LA. I'd love to be on staff with them.

Anytime you can hear Dave Orphal say anything (and he says a lot) take that opportunity.

I met Tim Monreal, Jeff Silva-Brown, Catlin Tucker, and a bunch of other folks.

Everyone was just good people.


The Bad:

The sessions were eh. They ranged from moderately interesting to downright terrible. I know, I'm supposed to vote with my feet but the other options didn't really interest me either. At one point I tweeted this:
Twitter / @jybuell: I don't think I'm the targ ...
I don't hold this against EdCamp. I'm not the target demographic for anything edtech and I realized this when I signed up. There were sessions on flipping the class, ipads, collaborize, live binders, and all sorts of things like that. Now, collaborize and live binders both look pretty cool, but tech is like 13974th on my priority list. (edit: I'm adding class dojo to the tech list. Also looks interesting) It ranks somewhere below teen pregnancies but above Accelerated Reader. Even if I thought flipping the class was a good teaching model (I don't for most things), more than half my kids don't have internet access at home. I sat in a session with a guy claiming tablets would completely take over education in "18 months to 5 years." My school is still mostly on overheads and we've got maybe 5 LCD projectors for the whole school.1  We're definitely on the far end of his time frame. The far, far end.

Honestly, if I could buy two iPads or a set of new chairs for my class (a reasonable cost comparison), I'd take the chairs in a second. It's not that I don't think tech is useful, it's just that I'm not on the same level of Maslow's hierarchy of school needs as most of the EdCampers.

Although this is something you'd see at any place that attracts the EdTech crowd, the low turnout really made it worse. I'd guess we had 40ish people there and slightly more than half of those were classroom teachers. I had 3-5 sessions to choose from in each time slot. Apparently EdCamp Boston had 300 people turn up. I don't know the deal with us. Bad marketing? Looser knit community? School just starting for most of us probably didn't help.

There's an Ed Camp for social studies in Philly next year. That could be really cool.

The Ugly:
It's not widespread, but there's definitely an ugly undercurrent I don't like both at Ed Camp and within the blogotwittersphere. There are three parts:

  1. There are some teachers who aren't interested in learning or getting better at all. 
  2. These teachers are old.
  3. I know this because they aren't on twitter/blogs/EdCamp/whatever conference I go to and whenever I show them Diigo/GDocs/blogging/my wiki they're not interested.
Look. I agree with number 1. Numbers 2 and 3 are a load of crap though, and I need to do a better job of speaking up. I'm not going to address number 2 because we can all think of teachers at our school who are in their second or third decade of teaching and still kick butt every single day.

As for number three—everyone has a different way of developing. Some teachers use twitter and blogs. I am one of those teachers. Others read books. Others talk to other teachers in...wait for it...real life. Others spend time finding primary sources to give their kids. There are some who spend hours a day reading children's books just in case a kid asks for a recommendation. I know a teacher who just this year has had a baby, finished her PhD, and is teaching two methods classes for science teachers at two different universities and you know what? She's never read my blog. I know. Shocking. At my school I'm notoriously resistant to any district-led PD and I can guarantee you there are teachers in my district that think I'm not interested in getting better. Different methods of access for the same goals. Hopefully that sounds familiar to the SBG people who are reading this. 

Second, just because I showed someone how teh awesome #scichat is and they didn't immediately sign up for twitter doesn't make them a crappy teacher.2 We just don't all have the same priorities. I have no interest in skyping another class. Why? Because I have to work hard all year just to get my kids to talk to the person in the next chair. And if I have to choose (and in school, you always have to choose), I'm choosing developing good relationships with the people they see every day. Just because you see the need to get all the kids signed up for a blog doesn't mean your colleague does too. And that doesn't make them a bad teacher.


Like I said, it's a small undercurrent but it surfaces every once in a while. I hate it and I hate that I don't speak up enough about it.

Final thoughts:

Now that that rant is done here's where I am. I believe in the EdCamp model. I think that's solid. If there were more diversity in the attendees, I'd be happier, but I'm not sure what can be done about that other than better PR. Since this is still a new thing I assume natural growth would occur.

Dan Callahan led a session on bringing the model back to your school. I was at another session so I don't know what he recommends. Me? I'd definitely love to have something like this instead of a district PD with an outside consultant person. I'd probably modify it a bit.

I teach in a K-8 district. Less than 10% of us don't have "reading" as an official subject to teach. I envision a crap load of reading strategy and ELD sessions. The ability to "request" certain sessions ahead of time would be helpful. You could have teachers each state something they'd be interested in and post the list. Then we could all look it over, and hopefully there'd be a lot of "Oh, I know how to do that," with sessions getting proposed ahead of time. Teachers would still be free to move around from session to session if they liked, but you'd get a wider range of teachers who'd be willing to lead a session.

There's another benefit to planning the sessions ahead of time. I think if EdCamp was mandatory (which it would be if it replaced a district PD) I'd be a little pissed if I prepped for a session and nobody showed up. Since EdCamp is voluntary, it's no big deal. If I was forced to go? I think that'd be different.

As for my recommendation, I'd go again next year, but wouldn't pay if they charged, nor would I go out of town and stay overnight. There aren't too many conferences I would be willing to pay for so that's not necessarily a knock on EdCamp. It just didn't blow me away enough for me to pay for room and board.

If you go, think of it as going for the people and because you believe in the model.





1: You know who buys iPads? The same schools that bought IWBs for each class. Which were the same schools that bought laptops for each kid 10 years ago and the same schools that had classes full of Apple IIe's in the 80s.

2: There also might be just the teeny-tiniest possible chance that I have done a crappy job of showing it off. That's probably not it though, because the first time I show my kids the periodic table they immediately take to it and spend their free time poring over it. If they don't, well, they're not interested in learning.

Friday, August 19, 2011

Bloggy type updates

I'll be at EdCamp SF Bay in Oakland on Saturday.

I'm also going to Dan Meyer's Perplexity Session on September 10.

Say hello if you're at either.

I've also updated my blogroll. I've come to terms with the fact that people actually read this thing, so I figure I should use my powers for good. I streamlined the blogroll to focus on a few blogs that I think "Deserves More Traffic." This is the same thing Scott McLeod used to do with DABA, except that I'm too lazy to interview people and those people shouldn't expect a traffic spike. My inclusion criteria are that 1) your blog is awesome and 2) you have less than half the Google Reader subscribers that I do. I'll update it on a monthly-ish basis and will drop a note at the end of a post.

Currently on the list:

Mimi Yang at http://untilnextstop.blogspot.com/
Mimi would probably be classified as a resource blogger. Her materials are well designed and she's got some good twists on the old standards. She's also got a really interesting life. After finishing teaching in El Salvador she's now starting a new job in Germany where she's teaching 7th to 12th graders. The post on implementing the mathematical practices in the Common Core is excellent.

Organized Chaos at http://welcometoorganizedchaos.blogspot.com/
Organized Chaos is one of my favorite elementary bloggers. She teaches at a really great sounding school she refers to as the Think Tank. She's got some really thoughtful posts on ed policy but what really stands out for me is how much she loves teaching. Kindergarten Book Club is one of my favs.

Dan Finkel and Katherine Cook at http://mathforlove.com/blog/
Go look at the pic here. The money quote: "This picture, to me, is like a little image of what math feels like." You will then spend the next hour reading through Dan and Katherine's archives, all the time wishing you lived in Seattle and could attend their workshops.

abrandnewline at http://abrandnewline.wordpress.com/
This blogger is a real life friend of mine and it's probably cheating to put her here. But really I love how she writes. She doesn't usually blog about the nuts and bolts but as much as anyone, she really captures what it feels like to be a teacher. Her end of the year letter gives you a good sense of what she's about.

Dan Anderson at http://dandersod.wordpress.com/
I usually think of Dan's blog as a place to find really cool problems. Going back over it now I realize he's got a bunch of other good stuff too. He's also my Python teacher and the master of Project Euler. So he's got that going for him, which is nice.

Grace Chen at http://educating-grace.blogspot.com/
I was talking about Grace with another blogger in GChat and the convo went like this:
Other person: Grace is so freaking smart!
Me: I know. It's like she's always 9 steps ahead and is patiently waiting for me to catch up.
Other person: Yeah, but she's so nice about it.
That's pretty much how it goes. She thinks about education at a different level than I do. This post on pseudoquestioning sticks in my mind.

Brian Carpenter at http://noninertialteaching.wordpress.com/
Brian started slow with only 4 posts in 9 months and I almost gave up on his blog. Then he started churning out posts in May and has really caught fire. His modeling posts are excellent and a recent one on teaching girls (he's at an all girls school) was full of goodness.

Stephen Lazar at http://stephenlazar.com/blog/
He's one of the very few non-math/sci bloggers in my Reader and for good reason. He writes about the way history (always my least favorite subject) should be taught in this post. He's also an important voice in the world of (sane) ed reform.


Thursday, August 18, 2011

Group Roles

I mentioned in the classroom management post that I liked group roles but I wasn't sure what I'd use this year.

I settled on these four:

Facilitator - This person makes sure that everyone understands what's going on and what they're supposed to be doing.

Resource Manager - In charge of the materials.

Process Recorder - Keeps track of how the group made decisions or arrived at conclusions.

Skeptic -  Looks for alternate explanations, things that might have gone wrong, or things the group might have missed.

The Facilitator is common so I'll skip that explanation.

I have a spot on my board for the Pre-Flight Checklist (Do Now) and materials. The checklist shows the things a student does right away along with the time they have to complete it. Usually it's three or four steps with stuff like answer the question on the board, open your notebook to page 58, process your notes from yesterday, finish the writeup from yesterday, etc. I used to just start with a question every morning but I found that to be too limiting. I like the comfortable routine of knowing what to do when you come in, but I don't need them to, for example, copy down a learning goal every single day. This gives me a little more flexibility while still maintaining some structure.

The materials section lists what students should have out immediately like notebook or portfolio. There's an additional section specifically for the Resource Manager. He or she gets the whiteboard markers, scissors, colored pencils, etc right away but doesn't distribute them until instructed. They get and put away lab materials when it is time and turn in papers. At the end of the period, the Resource Manager supervises cleanup.

The Process Recorder is new. My goal is to have this person record the discussions and decisions of the group. Why did the group make this decision? What particular piece of evidence led to the group's conclusion? I also wanted the disagreements and dissenters to be recorded.

The Skeptic originally started as the Double Checker but I decided that wasn't interesting enough a job. Their job is to make sure the group ruled out other explanations and to make sure the group isn't missing anything important. In my perfect scenario, the group would end up needing to devise an alternate or modified experiment based on something the Skeptic noticed. I'm not sure I have the ability to help them get to that point on a regular basis but you can bet I'm making it a BFD whenever it does happen.

Especially at the beginning, I give sentence frames and question cards to guide them as they're working. The Facilitator and Skeptic will turn in a brief report. The Process Recorder will submit an annotated procedure.

Here is the handout I give them. They paste it into their science notebooks.

Luann Lee posted hers here and has a whole bunch of different ones if you're looking for some variety.

Other odds and ends:

I number (not physically) the seats at the tables like this:

seatnumbers


When we go to rows I use A/B partners, but usually we're in tables of four. I rotate jobs on a one- to three-week schedule depending on what's going on. I use a poster to keep track, and the same numbers will have the same role. It's nice because you can quickly determine who is supposed to be doing what. The numbers/letters work well for lots of other stuff as well. "3s go to station 2." "Bs will go first." "Odds start with the timers."

_______________


PS - I forgot to include this in my classroom management post for noobs. If you're a science teacher, buy a multi-tool and carry it around with you. I have a Leatherman Juice S2 I got from Target. You have no idea how often you'll be doing quick fixes of various lab equipment in the middle of a period. Also take a look at your desks and chairs and see what you'll need to tighten them up. I've got a folding hex key set that can fit my random assortment. Your ed program probably didn't mention how much of your time is spent fixing stuff. 

PPS - If you run an ed program, "equipment maintenance and upkeep" would be an excellent class for science teachers. I have no idea what I'm doing most of the time. If it's not a dead battery or a loose screw I'm SOL. 

Saturday, August 13, 2011

Information about the California Standards Test Part 1

I was going to do a post on questioning routines but I got distracted by a Twitter convo with David Cox and Jennifer Borgioli. It was about how the results of the California Standards Test (CST) can be used. This information is specifically for my California peeps.

Most of the information comes from the technical report. It's scary to look at but it's mostly skippable data tables so it's not a terrible read. There's also the API Information Guide. I'll try to remember to cite when I can but if something seems wonky, call me on it and I'll verify.

In Part 1, I'll explain the basics of test construction and API results.
In Part 2, I'll discuss the few useful pieces of information I've been able to extract from test results.
I don't think there will be a Part 3, but if I get enough questions I'll see what I can do.

If you don't feel like reading, skip to "What are valid score comparisons?" That's the part you'll want to know. The more appropriate heading would be, "What aren't valid score comparisons?"

How is API calculated? 


There are adjustments for certain populations (need to look into this more. Adjustments may just be in order to norm base/growth years. Edit again: The population adjustments are just for finding Similar Schools.), but it basically comes down to a straight mean. Your kids score either Advanced, Proficient, Basic, Below Basic, or Far Below Basic. Advanced and Proficient are good. The rest are bad. Advanced earns 1000 points, Proficient 875, Basic 700, Below Basic 500, and Far Below Basic 200. As far as I know, the only weird thing is that a student not taking Algebra in 8th grade (i.e., taking the General Math test) gets bumped down a level in API points. So if she scored Advanced on the General Math test, she would earn 875 points for the school. (Edit: A ninth grader taking the General Math test gets bumped down two levels.) Additionally, 7th graders do not get a bump up for taking Algebra in 7th, and the same applies to 8th graders taking Geometry. After that, each test is weighted and the mean is calculated. The CAPA and CMA follow the same weighting rules as the CST. (Edit: added.)

From the API info guide (page 6):

[Table: Content Area Weights]
In high school, the CAHSEE (our exit exam) is also factored in. The arithmetically minded may notice the large drop-off in points from Below Basic to Far Below Basic.

There is an Excel spreadsheet to help you estimate your API.
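If you'd rather see the arithmetic than download the spreadsheet, here's a toy sketch in Python. The content-area weights below are invented placeholders (the real ones are in the Info Guide and vary by grade span and year), and I'm ignoring the CAHSEE, the General Math bump-down, and the population adjustments.

# Toy API estimate: weighted mean of performance-level points.
POINTS = {
    "Advanced": 1000,
    "Proficient": 875,
    "Basic": 700,
    "Below Basic": 500,
    "Far Below Basic": 200,
}

# Hypothetical weights: check the real table before using these.
WEIGHTS = {"ELA": 0.5, "Math": 0.3, "Science": 0.1, "History": 0.1}

def api_estimate(results):
    # results: {content area: [each student's performance level]}
    total = 0.0
    for area, levels in results.items():
        mean_points = sum(POINTS[lvl] for lvl in levels) / len(levels)
        total += WEIGHTS[area] * mean_points
    return round(total)

print(api_estimate({
    "ELA": ["Advanced", "Proficient", "Basic"],
    "Math": ["Proficient", "Basic", "Below Basic"],
    "Science": ["Proficient", "Proficient", "Advanced"],
    "History": ["Basic", "Basic", "Far Below Basic"],
}))  # 782, somewhere in the API's 200-1000 range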

How is the test constructed and scored?


It's a lot. I'll give you the highlights. Tara Richerson has an excellent series on test construction and she's got actual experience at it. Pay attention to how each year's test is anchored to the previous year's.

There are two things that really interested me. The first was how cut scores for the different levels of proficiency were created. I'm just going to snip and let you read. From the technical report (257):

www.cde.ca.gov/ta/tg/sr/documents/csttechrpt2010.pdf


The Modified Angoff is used for the ELA tests and the Bookmark Method for the rest. Nutshell: a panel reads the questions and estimates what a barely proficient/basic/etc. student would get right, and the median of the panelists' estimates becomes the cut score. With the Bookmark Method (science and the rest), the questions are ordered from easiest to hardest. Each panelist says, "I think a barely proficient student would get the questions up to this one right about 2/3 of the time and miss the ones after it about 2/3 of the time." The median bookmark across panelists becomes the cut score. ELA works basically the same way except panelists rate each question individually and the cut score is computed from those ratings. The cut scores and all raw scores are then matched to a table to align the scale scores from year to year (actually they only really align in two-year pairs). This isn't useful to know at all, but I just find it really interesting.
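For the arithmetically minded, the Bookmark step boils down to taking a median. A minimal sketch in Python (the panelist bookmarks are invented):

# Each panelist bookmarks the last question (ordered easiest to
# hardest) a barely-proficient student would get right ~2/3 of the time.
from statistics import median

panelist_bookmarks = [34, 36, 33, 38, 35, 34, 37]  # invented data

raw_cut = median(panelist_bookmarks)
print(raw_cut)  # 35: the raw-score cut, later mapped onto scale scores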

The second thing I'm pointing out is actually useful. Based on the test results, CA has generated proficiency level descriptors. If I recall correctly, these were generated from a few years of test results, so they're supposed to be things that, for example, a Proficient science student actually knows. These are useful, especially for those of us who need to decide on the level of depth for our standards. It's located here and the good stuff starts in Appendix A. 8th grade science starts at A-102. Here's an example:
www.cde.ca.gov/ta/tg/sr/documents/pldreport.pdf

What are valid score comparisons?


There are two main ways people (teachers, parents, admin, everyone) mess this up. People think you can compare scale scores from year to year, or that you can compare API scores from year to year. You can't do either. This is crucial to understand.

In the example that got this started, Student A got a perfect 600 in 7th grade and a 550 in 8th grade. It's natural to ask why the student dropped from 7th to 8th. You can't answer that from the scores, though. California does not vertically align its scores. A 550 in one year has no relation to a 550 in another. Additionally, a 550 in one content area has no relation to a 550 in a different content area, even in the same year. You CANNOT make this comparison.

Horse's mouth (Technical Report, 6):
www.cde.ca.gov/ta/tg/sr/documents/csttechrpt2010.pdf

You are fine comparing the same year/content to other classes, schools, districts, and the state. Anything else, and I mean anything else, isn't valid. Jennifer tweeted this link out earlier. If you take a look at the graph you'll see that some tests' cut scores are harder to reach than others'. MS math scores will be lower than elementary scores because our tests are harder to score proficient on.

If you go back up to how the test cut scores are created, you'll notice they're defined for a "Proficient Algebra student" or a "Proficient Science student." They are not scored based on growth from the previous year. Some states do that.1

The API results are similarly misleading. You'd think you could just look at your school's API each year and see if it goes up. Turns out, you can't. That's because how the API is calculated varies each year: for example, the weights of the different tests and which tests are included. So a 2006 API score can't be compared to a 2011 score. It makes sense when you think about it, but it's completely unintuitive, and everybody in the entire world thinks you can create a line graph and see how your school is doing.

You CAN compare between Base and Growth APIs. These will be matched (page 14 of the Info Guide):

www.cde.ca.gov/ta/ac/ap/documents/infoguide10.pdf

and you CAN compare the growth from year to year: take the Growth API and subtract the Base API. Also on page 14.


Repeating myself in case you missed it: Base and Growth scores in the same cycle compare different years' test scores using the same calculation method. If they are not in the same cycle, they could (and likely do) use different calculation methods. It is not valid to compare API scores across cycles.
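Put another way, in code (all numbers invented), the only valid subtraction is within a cycle, plus comparing those within-cycle gains:

# Base and Growth in the same cycle use the same calculation method.
cycles = {
    "2009-10": {"base": 712, "growth": 724},
    "2010-11": {"base": 719, "growth": 726},
}

# Valid: growth within each cycle.
gains = {c: v["growth"] - v["base"] for c, v in cycles.items()}
print(gains)  # {'2009-10': 12, '2010-11': 7}

# Also valid: comparing those gains across cycles (12 vs. 7).
# NOT valid: cycles["2010-11"]["growth"] - cycles["2009-10"]["growth"]
# (different calculation methods, so it's apples to oranges)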

Does anyone know this? No. Everyone, understandably, compares scores year to year. This is important to know, though, because if your scores take a dip, it might be because the calculation method changed. For example, until the 2010-2011 cycle, high school APIs didn't include the CMA, which is the modified test usually taken by students in SDC.


Summary: You can compare your student scores only within the same grade level and content area. You can compare API scores within cycles (Base to Growth) and you can compare growth between cycles. THAT. IS. IT.

In part 2, I'll write about what useful (for me) information you can get out of it.

1: Vertically aligned scores usually come in two flavors. In one, the same score indicates the same relative level year over year: if you score a 550 one year and a 550 the next, you've made the equivalent of one year of growth. In the other, all students are scored on one continuous scale, like the "Reading Level" reports we get: one year you get a 400 and the next a 520, so you've made 120 points of growth that year. Smarter states make one year equal 100 points so you can easily see whether you made a year of growth.

Thursday, August 4, 2011

Classroom Management Stuff for New Teachers

Warning: If you've taught more than 3 months, you should probably stop reading. This is going to be very boring. 

I participated in a twitter chat for David Coffey's  Facilitating Learning Environments (#FLE11) class about the first day of school. While it's in my head, I figure I can leave some advice for new teachers.

Remember, context is everything. So my advice is based on 6 years of teaching 7th and 8th grade science in a school located in an urban area. I'm going to focus on classroom management stuff because that's the only reason my school doesn't renew a new teacher. I attribute that more to admin focus than any particular deficiency in our new teachers but that's a conversation for another day.

For most of you this will be entirely obvious. For me it wasn't. I am (still) not a "natural."

Most of this stuff I use because it allows me to be even lazier. Of these, I'd say #3, #5 and #6 save me the most time during the day.

Oh, and best advice? Get comfortable shoes. Actually, get a few pairs and rotate. Trust me.
    Classroom Management Stuff:
    1. In the first few days, reading the rules and expounding your philosophy have a (small) place, but really what you need to do is get the kids doing something so you can walk around and learn their names. I get their names when they come in. I get them going with something and then I walk around the class and keep practicing out loud. Guess, and let them know it's OK to correct you; how else will you learn? Use a mnemonic or some other memory technique. For the hard ones I ask them a question and picture them doing it. "What's your favorite sport/movie/book/etc.?" Then it's "Jesus who likes the Raiders," and I picture him dressed like fans in the Black Hole. My school starts on Wednesday. I can learn 150 kids by Friday, although I usually forget a few on Monday. Trust me here. Nothing will pay bigger dividends with a tween than you knowing their name.
    2. Don't make gloop and air rockets on the first day of school and then it's worksheets and notes the rest of the year. The first few days should be a snapshot of the whole year. They need to understand what you're about. If you're about worksheets and taking notes, then do that. Well, do that and then talk to me. We have some soul searching to do. My first days are here.
    3. Make sure you establish signals. Everyone talks about procedures, but it's signals that will make them work. You'll definitely need a "stop, shut it, look at me" signal. Teach it like a routine. I raise my hand. They raise their hand. Get others' attention. Turn your body. Quiet. After they're quiet, you need to keep the silence for an extra beat or two. That's the big one. You'll need to stop kids in the middle of busy and noisy labs. Sometimes it'll be for safety reasons. No matter how open you want your classroom to be, you'll need something like that. The hand raise is (theoretically) my school's universal sign for quiet. If you can get the rest of your teachers on board for a universal quiet signal, your life will be sooooooooo much better. I've tried a few other signals for this (counting down, squeaky toy) but the hand raise is good because it requires them to physically respond and doesn't require you to shout over anyone. I'm not a fan of clapping or chimes but some people really like them. I play a song for cleanup (So Fresh, So Clean). Before giving instructions I start with "When I say go..." because whenever I would say "Everyone is going to need a ruler," half the class would stand up and walk over to get one before I was done.
    4. Find your sweet spot for procedures and routines. I know admin go crazy for them but in my first year I probably spent more time teaching procedures than I did actually using them. Stupid Harry Wong. Turns out I don't really care how a kid gets water or goes to the bathroom. Go with a few high yield, frequently used procedures and do them really well. Opening the class, cleaning up labs, and turning in work are good starts. I also teach my kids how to move the desks to get in and out of groups.
    5. I give my students numbers. 3 digits. The first digit corresponds to period number and the next two are unique. So first period goes 101-130, second period 201-230. Kids get them assigned alphabetically. That number goes on everything. During random in-between times (like a group finishing cleanup early), give a stack to a kid and have her put them in order and paperclip them. Have her put a post-it on the front of the stack with any missing numbers. Especially for the first few papers turned in, try to get this done immediately so you can track down the kids who don't turn in anything right away. They need to know you notice these things. When you're putting stuff into your gradebook, your papers are already in order so you can just go right down the line. (See the numbering sketch just after this list.)
    6. I do ROYGBIV color coding for each period. Actually OYGBP because red is too inflammatory and I have no idea what the difference between indigo and violet is. Each kid in first period has an orange portfolio and for calling on kids I use colored index cards. I like them better than popsicle sticks because you can put little notes on them.
    7. Teach students how to work in groups. Walk around and comment on how people are working together. Sam's post on participation quizzes is interesting although far too organized for me to ever pull off. Read Sue's post on Complex Instruction and work on assigning competence. I was too structured my first year. I took the reins off too much my second. I'm finding a good middle ground between Kagan and chaos.
    8. I've gone back and forth on group roles but I've decided overall they're a positive. My first year I used Facilitator, Materials Manager, Recorder, Presenter. I wasn't happy with the Recorder or Presenter roles because they were things I wanted everyone to be doing. I liked the Facilitator a lot. It came through especially when I'd need to give mid-course instructions. I could just call over the Facilitators. The Materials Manager is good for a science class. Those middle school kids love to pile around the supply table. I've changed the other two roles a few times and also gone without roles. I've wanted to try these Thinking Roles but I just never get around to it. Riley posted his here.
              Update: The ones I use this year are here.
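    Here's the numbering scheme from #5 as a quick Python sketch (the rosters are made up), including the post-it trick for spotting missing papers:

    # Student number = period digit * 100 + two-digit alphabetical index.
    rosters = {
        1: ["Alvarez, J", "Brown, K", "Chen, L"],
        2: ["Adams, R", "Baker, S"],
    }

    numbers = {}
    for period, roster in rosters.items():
        for i, name in enumerate(sorted(roster), start=1):
            numbers[name] = period * 100 + i
    print(numbers)
    # {'Alvarez, J': 101, 'Brown, K': 102, 'Chen, L': 103,
    #  'Adams, R': 201, 'Baker, S': 202}

    # The post-it: which numbers are missing from a turned-in stack?
    turned_in = [101, 103, 201]
    print(sorted(set(numbers.values()) - set(turned_in)))  # [102, 202]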

    Non-classroom Stuff:
    1. Get to know your school secretaries, your custodians, the tech person, and whoever works in HR in your district as soon as possible. Everyone gives you this advice because it's true.
    2. Every principal has a "thing." Figure out what that is. I've had a principal who was big on bulletin boards and classroom look, another who was big on EL instruction, and another who cared mainly about classroom management. I'm not saying compromise your values, but it won't kill you to spruce up the room (my principal would laugh if she read this. My room is always a mess).
    3. Find allies. You've probably heard the "avoid the lunchroom" talk. I used to do it. But truthfully, teaching is lonely. Between yard duty and working with kids at lunch and after school, I can go days without talking to another teacher. Don't do that.
    4. Committees are a huge sucker of both time and soul. Sports, while more fun, will take at least double the amount of time you predict. Unless you were specifically hired to coach a sport, it's totally OK to turn down all committees and sports. I know a lot of new teachers feel they have to impress the admins, but I have never seen a teacher denied tenure because she didn't volunteer for PTA or coach the soccer team. I was elected (by the other teachers) school site committee president my first year. It was like hazing the new guy. Some contracts say you need to agree to X amount of committees or extra duties a year. First, find out if that's enforced. My unscientific sample of twitter teachers says that it's usually not. If it is, volunteer for things with set time limits and no chance of spilling into extra work. Extra yard duty, scorekeeping, dance and other event chaperoning are all good choices because they have a set start and end. Committees, coaching and anything that involves "organizing" will take up much more time than you expect.

    If you've got any other good classroom management advice let me know. I'll be happy to steal it. 

    More info:
    Zach Shiner has a really cool thing going on here. It's got tons of practical stuff.

    The MS Math Wiki has got a few things as well.

    Saturday, July 2, 2011

    Virtual Conference on Core Values: I betcha think this post is about you

    Well you're wrong. This post is not about you. It's not about my school. It's not about my students. When Riley asked, "What is at the center of your classroom?" I had a simple response—me. Me me me me me. Me. This post is about me.

    I'm not ashamed to admit it. I have needs.

    I need to feel competent.

    I need to feel like I'm getting better.

    I need to feel respected.

    Most of all, I need to feel like I matter.

    I'm not good at my job every minute of every day. I'm solidly mediocre most of the time and downright terrible more often than I'd like. However, every so often I am precisely what a student needs. At that exact moment, at that exact place, with that exact student, there's nobody in the world that should be there but me. I peer over a shoulder and ask the perfect question to get a student unstuck. I hear a response and we perform an experiment that suddenly connects a month of disparate facts.

    I crack a joke at my own expense and the two boys who were going to fight, instead laugh at me and we spend the next few minutes playing the dozens.

    I sit with a student long after the bell has rung while she tells me about her mother being deported. She is scared, but not for her mom. She's scared because she doesn't think she can be a good enough mother for her little brothers and sisters.

    Am I really the perfect person for that moment? I don't know. What I do know is I believe it and that belief is what pushes me.

    And that's where the Ed Reformers miss the point. Maybe a computer can teach the Periodic Table better than I can. Maybe a scripted curriculum will fill the holes in my astronomy unit. But they're telling me that anyone can turn on the computer and anyone can read the script. They're telling me that I don't matter. Most of the time, they're right. Most of the time, you could switch me out for anyone and not much would change. But for brief intersections of time and space, I matter more than all the youtube videos and core standards in the world. My classroom is about me. When it stops being about me, they'll need to find someone else to push the Play button.



    -------------------
    Read more at the Conference Center.

    Friday, June 17, 2011

    Flow Control

    I know. It's supposed to be all about my students. I'm supposed to say something about helping them focus on learning versus work completion or helping them learn to self-assess or whatever. Yeah. That's all true. But even if that wasn't true, even if standards-based grading was no better at that stuff than traditional grading, I'd still do it.

    Why? Because I know of no better way to inform me about what I need to do next. I need to know what I can do to help get a kid from point A to point B. Getting 90% on a Chapter 14 quiz or a B+ on Worksheet 1.6 won't tell me that. I need to be able to tell a student, not that he's failing, but that while he gets how to calculate the average speed of an object, he's still struggling with graphing that motion and here's something that will help.

    And I need to be able to do it quickly.

    Pre-SBG, I'd have needed to open up a packet of work (that's assuming I still had it or that he still kept it), flip through each page, and then pray that some sort of recurring pattern jumped out at me. Even if by some miracle that worked, I'd have NEVER done that for every kid on my own. It's just too much work. I would wait until a student took the initiative to ask me what he or she was struggling with. I'm sure I justified it as "helping students take responsibility for their own learning." Because, you know, after a student has spent her whole life getting Fs in everything, my F is the magic one that bestows upon her the gift of knowing how to respond to failure. It's like the Triforce. Now that she's collected all of those Fs, she can wish herself into being an A student.

    So here's my advice for those of you working on standards-based grading over the summer: as you're setting things up, look at each piece and ask yourself, "Do I know how to respond? If I look at this, can I determine what to do next?"

    When you're setting things up (or revising them) think of everything you're doing as a bunch of If-Then statements. On an assessment, if this happens, then this should happen. In my gradebook, if I see this, then I should do this. The strength of standards-based grading isn't that it gives you better information, it's that it gives you better direction.


    Bonus Power User tip: Take a single question, a full quiz, your Do Now, or whatever. Write out a bunch of If-Then statements. Depending on the type of assessment it might look like this: "If you miss #3, then..." or "If you get a 2, then..." or "If you answered 8 m/s, then..." Give the assessment, correct it in class right away in whatever manner you prefer, and then put up the If-Then statements on the board.
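    If it helps to see the tip concretely, here's one question's worth of If-Thens as literal code (the question, answers, and next steps are all invented):

    # If-Then feedback for one quiz question, written out literally.
    def next_step(answer):
        if answer == "2 m/s":  # correct
            return "Move on to the graphing practice."
        if answer == "8 m/s":  # classic mix-up: divided by the wrong time
            return "Redo the time calculation, then retry #3."
        return "See me and bring your work for #3."

    print(next_step("8 m/s"))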

    Friday, May 13, 2011

    Upcoming Events: GPD and EdCampSFBay

    Two cool things coming up:

    Global Physics Department

    I should have blogged about this earlier. Every Wednesday at 9:30 Eastern, there is an Elluminate session to discuss various topics in physics education. There have been slightly more than 20 participants each week. I've got dinner/family time then, so I can never make it live. Luckily, all the sessions are recorded.

    Next week (May 18) Brian Frank will be discussing how to build on student misconceptions (His title: The wrong ideas I love my students to have, and the right ideas I worry about). He's got a bunch of really good posts on misconceptions. In this post, Brian links a John Clement paper on bridging misconceptions that is FANTASTIC.

    Normally you can just show up. But the following week SEAN (OMG I'm geeking out!) CARROLL is going to host it. Due to space limitations, you'll need to register here.

    The permalink for the Elluminate sessions is http://tinyurl.com/RundquistOfficeHours



    EdCampSFBay

    When: Saturday, August 20, 8-4
    Where: Skyline High School, Oakland

    If you don't know what an EdCamp is, hit the link and watch the video.

    You can follow @EdCampSFBay for updates.

    I'm definitely planning to be there so say hello if you see me. And no, I don't plan to run a session.