Friday, September 23, 2011

Connections addendums

In the original post, I forgot to include that I'm going to add pictures along with the storylines when it goes on the board. I'm a big fan of visual anchors. For this one I'm including a picture of the ball and hoop, a picture of the actual superhero Flash (probably not the shirtless Ryan Reynolds one, wasn't he the Green Lantern anyway? I'm confused.), and a photo I have of a student running her experiment.

After posting to Twitter, I quickly got these two tweets:

Twitter / @Mr_Martenis: @jybuell Connections are n ...

Yeah. That's the goal. My original plan was to have my students fill these in as we go but then I realized I have no idea what's coming up so I couldn't really make the actual graphic until the end. I certainly know what the goal is but I only vaguely know how we're going to get there. This time I just walked them through the first couple of blanks and then left them to flip through their notebooks. It went fine but it would probably work better if I put together a graphic organizer they can fill in along the way and then translate over.

In a similar vein:

Twitter / @emwdx: @jybuell very cool - poss. ...

A solid idea but I've got a completely analog class. I can get my hands on a projector maybe once every two weeks and there aren't really computers available for student use. Also, I like having it up all the time and being able to constantly point back to how we got to where we are. I do think this would be cool though and if you do it, let me know.

Connections

I stumbled onto this post about Learning Journeys the other day and it reminded me I totally forgot to blog about something I'm trying this year. First, a visual.

connections


I'll post the link at the bottom of the page in case you're having trouble reading. What is it? It's a storyline of the things we've been learning. You start in the upper left and follow the arrows. Here's a zoomed in version of the bottom left corner.


connections2
They're supposed to fill in each blank line with a sentence, and the boxes should have drawings of the particles.

Here's the part I'm proud of. Do you see that half-line in the middle of the left side of the page? We are going to make another one in a few weeks and connect it to this one. My goal is to connect the entire year together. If I were more organized I could have plotted it all ahead of time and figured out how to get multiple connections (and back to the beginning), but that will just have to wait.

We'll have an ongoing record of how we got to where we are, and I'll be able to fill up one of those blank bulletin board spaces that my principal is always yelling at me for. Win-win!




Here's a Keynote '08 version (zipped because I can't figure out how to get Dropbox to recognize Keynote files) and a PDF. Don't forget to print it without borders.

Edit: Additional comments in next post.

Wednesday, September 21, 2011

The Cycle

Subtitle: The one where I defend cookbook labs.

Going back over my old posts I realized I don't really talk about what my class looks like and where this SBG stuff fits in. My teaching is pretty generic and probably looks like any science class in your school.

I'll do this in two parts. In the first part I'll describe my standard teaching cycle. In the second I'll describe where all these assessment/testing/checklists/etc fit in.


Right now we're in the middle of our first topic on Atoms and so I'll describe that one.


Most of the units start off the same. We're going to try to derive the big idea. For this one, I want the kids to figure out that atoms move around. When they're heated they move faster and spread apart.

I start with the ball and ring demo. Unlike Maestro Hewitt, I start with the ball. I take predictions, do the demo, and then have the kids brainstorm different explanations for why the ball gets bigger when it's heated. Every year the same three explanations are the most popular. We record the less popular ones for later.

Explanation 1 is what you guys and gals will recognize as phlogiston: heat fills up the ball and makes it expand. This year, we called it the "water balloon model." Explanation 2 is some variation on "the ball is made of atoms, and when the atoms are heated they expand." It's basically the water balloon model for individual atoms. Last year a student referred to it as the Hulk model after the Incredible Hulk, and I stole that name for this year. Explanation 3 we named the Flash model (in honor of the guy that's really fast), after the previous student's inspired Hulk model christening. In case you're curious, phlogiston is by far the most popular explanation.

I realize I'm stealing a bit of the thinking by keeping the names this year but I love them so. Oh and I always ask kids to explain what atoms are (or molecules, particles, whatever 5th grade vocabulary word they throw out without really understanding). I've yet to have a student define it beyond "little thingies" so usually we just go with "little thingies everything is made of" until we can drill down on a definition.

After that I toss a bunch of cookbook labs their way. Yes, I said it. Cookbook labs. I heart me some cookbook labs. I'm a fan of analysis. Some teachers are big on the front end of designing an experiment. If I had to choose, I'd spend a day planning and executing an experiment and four days discussing the results instead of vice versa.

So I tell them what we're going to do, and they need to be able to explain what the various models predict will happen. This is rocky and takes a ton of modeling on my part, plus I give them sentence frames to start off the year and then fade them out.

The first two things we did were to take the ball and measure its mass before and after heating, and to do the same for some ice water that we warmed up. We compiled the class data, pushed all the desks to the side, circled (actually ovaled) the chairs, got out our big whiteboards, and shared what we thought was going on and which models were supported so far.

Next we dropped dye in hot and cold water and dissolved sugar in hot and cold water and then got in the circle again. This was Labor Day week so it took all four days to get all this done (50 minute periods).

On Monday, after some whole class discussion recapping the evidence we've gathered so far, students spent the period designing an experiment to either test one of the three models or one of their own that we didn't address from way back in the original brainstorm. Tuesday they ran their experiment. In the beginning of the year these are usually pretty derivative of one of the four cookbook labs we did. I'm ok with that. There was a lot of dissolving candy, heating and massing of various metals, and switching in different liquids for the dye/sugar labs.

On Wednesday, we circled up and they shared their experiments, results, and what model they think is best supported by the evidence.

From there, I get into the talking/notes/demos/PhET stage where they're getting actual names for things and deliberate practice. We refine the model a few times along the way, but the bulk of the work has been done.

And that's pretty much how all the topics go. We figure out the major ideas through cookbook labs. They design their own experiment to test those ideas. We discuss. I drop vocab and some practice at the end. Pretty standard stuff.

Some topics are more lab-based. I do the density/buoyancy topic almost completely through experimentation and just give them the words at the end. Same for the physics stuff. On the other hand, my periodic table and astronomy topics are pretty weak. It doesn't get too far beyond "look at this and tell me if you notice any patterns" and then a whole bunch of me telling them stuff (while I'm not necessarily standing in front of the class lecturing, it amounts to the same thing).

Other things to know:

For whiteboards they always need a graph and I insist that the graph has no numbers. I just want them to label the axes and draw a line. I'm a big fan of this. They're too used to mindlessly plotting points and connecting dots and not focusing on what relationship the graph is showing.

I don't know how John gets that great discussion going. Mine are basically just report outs and I force some responses by asking questions of other students (Do your results agree or disagree with what she just said?). Most of my students are English learners so I give them what is essentially a fill in the blank paragraph ahead of time. They can't read it out loud though. It's mainly to help them sequence their thoughts. I also give them the questions I will ask ahead of time so they can prepare. I use my index cards to randomly call on one person in the group to give the main presentation and then other students to answer the questions. Having students prepare together in groups and then randomly call on one of them is a consistent feature of my class.

I used to be pretty crazy about formal lab proposals and writeups. Now I play it fast and loose. I really don't need to see a step-by-step procedure with every step starting with a verb. Yeah, I was that guy. But I'm really big on students being able to tell me what the various models predict is going to happen. I don't want what they think will happen. I really try to push thinking in terms of what the explanation you built says and then interpreting your results based on those predictions. If there's anything you'd say I'm pretty strict about, it's that. Yes, I realize that's standard hypothesis testing. But I stopped saying "write a hypothesis" because whenever I did, a kid would automatically assume I wanted them to guess what was going to happen and that the experiment was to test whether they were right or wrong. I die a little inside every time a student writes, "My results were XYZ so that means I was right."


Hopefully you've got a sense of my class. I've got to explain test deconstruction first, but in the post after that I'll point out where the informal/formal assessments fit into the cycle.

Addendum: I only defend cookbook labs as a way to quickly generate data to analyze. I am against "Ok, I just taught you that molecules spread out when objects are heated, now go drop this dye in water and confirm what I just said." That's where cookbook labs get a bad name.

Thursday, September 8, 2011

Information about the California Standards Test Part 2b

Part 1
Part 2

I haven't said anything about the students. This belongs in bold:

Do not make any placement decisions solely based on test scores. 

I know you do it. We do too. In fact, denying students an elective based on their state test scores is one of the joys of being in Program Improvement. But the state of California, perhaps in word but not deed, agrees. From the Post-Test Guide... and I quote:
Any comparison of groups between years should not be used for diagnostic, placement, or promotion or retention purposes. Decisions about promotion, retention, placement, or eligibility for special programs may use or include STAR Program results only in conjunction with multiple other measures including, but not limited to, locally administered tests, teacher recommendations, and grades. (page 13)
and then on page 14 they give almost the exact same quote directed towards individual students in a bolded and boxed callout:
Decisions about promotion, retention, placement, or eligibility for special programs may use or include CST or CMA results only in conjunction with multiple other measures including, but not limited to, locally administered tests, teacher recommendations, and grades.
WAIT!!! One more, because this bears repeating (also on page 13):
While there may be a valid comparison to be made between students within a grade and content area, it is not valid to subtract a student’s or class’s scale score received one year in a given content area from the scale score received the previous year in the same content area in order to show growth. While the scale scores may look the same, they are independently scaled so that differences for the same students across years cannot be calculated using basic subtraction. 
Summary: You can't use test scores as the sole method for placement AND you can't use them to determine growth year to year. Wait... but what if a student scores Proficient one year and Basic the next? Shouldn't we put him in a second math class and catch him up? Noooooooooo. Here's the easiest example of why.

2nd grade - 62%
3rd grade - 65%
4th grade - 68%
5th grade - 60%
6th grade - 52%

What's this? It's the percentage of California students proficient in math in 2010, by grade.

Either California sticks all of its best teachers in 4th grade (unlikely), approaching puberty makes students universally worse at standardized tests (plausible), or the tests in the later grades are harder (that's what this study argues).

Here's the percentage of kids scoring Advanced:

2nd grade - 36%
3rd grade - 38%
4th grade - 42%
5th grade - 29%
6th grade - 23%

(In 7th and 8th grade we start tracking the kids more. At many schools, advanced 7th graders end up in Algebra and then Geometry in 8th. Some kids even take Algebra 2. A direct comparison is harder.)

So a student drops from Proficient to Basic. Why? Hard to say. The test looks like it certainly got harder. With standard error and such, he could have really been Basic last year. That's not to mention the inanity of deciding an entire year of coursework based on one 60-ish-question multiple-choice test taken 80% of the way into the year. The test wasn't designed to make those kinds of inferences, and you're left with too much doubt about the cause of the drop in score (or whether there even truly was a drop).

I didn't include anything about useful information about students because there isn't any. Well, there almost isn't any. You could probably tease out some information in conjunction with local assessments but really, what's the point? Right now I can tell you that Alondra got 9/13 correct in the "Functions and Rational Expressions" strand of her 7th grade math test. In order to figure out exactly what gaps she had, I need to give her a local diagnostic anyway. And that's really the whole point. If Alondra got an F in the class, I'm more concerned about that than her scoring Proficient. But for some reason, we think that because she scored Proficient on a single day of the year, that's more important than the other 181 days she was floundering. We let a single test override an entire year.

The hours that your school is going to spend dealing with this stuff could be spent doing something useful. You know, like standards-based grading.

Yeah I know. I came back to it. But really, isn't the complete and utter nonsense that is our current grading system a major reason for needing a standardized test in the first place? Right now we look and say, "Oh noes! Jason is basic this year! We've got to get him in extra math!" We have to do that because a B in Mr. X's class is not the same thing as a B in Mrs. Y's class. We don't trust that Jason getting a B in math means anything at all. But really, if we trusted the grades we could say, wait, Jason learned X, Y, and Z. He's fine. Or he still needs to learn Z but that's not going to require us to take him out of art or music the entire year.

If there's a moral here, it's that we try to make the CST more than it is. The state of California does an awful job of communicating what the state test really does. I don't really think it's in the state's best interest to do a good job, though. If more parents/students really knew what decisions were improperly being made off these scores, the state would have a lot of 'splainin' to do. I also think that, from our end, because we have to devote so much time and energy towards it, we feel the need to justify that.

I've gotten some valuable information out of the test. I've revamped a couple of units. I've gone and visited a few really good science teachers and programs. That's fine. But you're not going to get much more than that. Invest your time and energy into something more meaningful. Oh and never place kids solely based on test scores. Just don't.



Full disclosure: I'm not against standardized testing. I find it useful to have a third party to calibrate my class against. I live in absolute terror that I'm teaching down to my kids. It helps me set a baseline. What I am against is the high stakes part. Even disregarding the punitive aspects of the test, the high stakes nature (and subsequent fear of cheating) has caused California to hide any useful information we might get out of these things that take up so much of our time.

Wednesday, September 7, 2011

Information about the California Standards Test Part 2

Part 1

(You have my permission to skip this post. It's not all that interesting unless you're curious about the lengths we have to go to in order to extract any useful information out of the CSTs)

In the previous post I wrote about the most common ways I see our state test scores interpreted. You can't make a straight comparison of API year to year and you can't use it to track student growth from year to year. The quote from page 6 of the technical guide:
When comparing scale score results from the CSTs, the reviewer is limited to comparing results only within the same content area and grade.
The hard part is they're not *really* designed as anything more than a school report card. (A poorly designed one, at that.)

In this post we're going to see the two pieces of useful information I've been able to extract from the CSTs. Prepare to be underwhelmed!

You can't compare across grade or content, but you can compare districts/schools/teachers to each other as long as it's the same grade/content/year. Basically, you're down to comparing teachers who teach the same content at your own school and comparing departments in different schools.

We don't get item analysis or even standard-level analysis. The smallest grain we get is by strand. For science, there are six. If I'm reading the Algebra test right, you get a whopping four strands. You'd think you could compare how your students did year to year by strand, but alas, California doesn't norm the strands either, so you have no way of knowing if your students improved or if the strand just got easier. You can compare the percentage of your students who were proficient in that year's strand to the mean California percentage. I've found it moderately useful to look at the difference between the two and look for big gaps. The problem, of course, is that California won't release enough detailed strand information for you to really tell what's going on. So my kids perform poorly in "Number Properties" or "World History and Geography." Lots of help there. You might as well tell me to "teach better" and expect results. Oh wait. That's exactly what happens.
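If you'd rather script that gap-hunting than eyeball a spreadsheet, it really is just a subtraction and a sort. Here's a minimal Python sketch; the strand names and percentages are invented for illustration, and the real numbers would come off your school's CST summary report:

```python
# Sketch: compare percent-proficient-by-strand for your school against the
# state mean and flag the strands with the biggest gaps. All values invented.
school = {
    "Motion": 58, "Forces": 61, "Structure of Matter": 49,
    "Earth in the Solar System": 44, "Reactions": 52, "Periodic Table": 47,
}
state_mean = {
    "Motion": 55, "Forces": 57, "Structure of Matter": 54,
    "Earth in the Solar System": 56, "Reactions": 50, "Periodic Table": 48,
}

gaps = {strand: school[strand] - state_mean[strand] for strand in school}

# Sort so the biggest shortfalls (most negative gaps) come first.
for strand, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    flag = "  <-- worth a look" if abs(gap) >= 5 else ""
    print(f"{strand:28s} {gap:+3d}{flag}")
```

The same subtraction works for any two columns you can line up by strand: school vs. state, this teacher vs. that teacher, department vs. department.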

I have, however, gotten good results from comparing strand information between teachers at the same school (or district, if you're lucky). The key to this one is acknowledging that every teacher's class makeup is different. At my school, we send certain types of kids to certain teachers. Even if your school does scheduling randomly, chances are in any one year those classes aren't going to be balanced between teachers. If you acknowledge that upfront, you're more likely to get honest conversation rather than people being defensive about their test scores.
cst2010 (version 1).xls

Here's what ours looked like. We mostly tracked each other. However, the blue teacher seemed to teach the Solar System section better. I'd probably also say the blue teacher performed a bit worse on the Chemical Reactions section. So that's a good start to a conversation. "Hey Blue Teacher, take us through your Solar System unit." This is pretty fast and worth it.

The other useful information I've gotten is from comparing schools. Why compare schools? I visit a couple of schools each year to go observe. An easy place to start is with the schools that have excellent science scores. This is a tricky one because demographics are so huge here. I've got three years of test scores for the other middle schools in San Jose, and the percentage of students on free and reduced lunch and the percentage of students who are English learners both correlate at about -0.9 with every 8th grade test score. A correlation of 1.0 (or -1.0) would be a perfect line and 0 is a random scattering; 0.9 is really, really close to a perfect line.
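For what it's worth, once the numbers are in columns, the correlation itself is a one-liner. A rough sketch with invented numbers (the real scores and demographic percentages would come from the CDE downloads), assuming NumPy is available:

```python
# Sketch: correlate 8th grade science scores with demographics across schools,
# then fit a line and see which school sits farthest above it. Numbers invented.
import numpy as np

science = np.array([62, 48, 55, 33, 41, 70, 29, 58])  # % proficient, one per school
frl     = np.array([35, 60, 50, 85, 72, 25, 90, 45])  # % free/reduced lunch
ell     = np.array([12, 30, 25, 48, 40,  8, 55, 20])  # % English learners

print("science vs. free/reduced:", round(np.corrcoef(science, frl)[0, 1], 2))
print("science vs. %ELL:        ", round(np.corrcoef(science, ell)[0, 1], 2))

# The schools worth visiting are the ones sitting well above the fitted line.
slope, intercept = np.polyfit(frl, science, 1)
residuals = science - (slope * frl + intercept)
print("biggest overperformer is school #", int(np.argmax(residuals)))
```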

I've gotten some good information out of graphing that relationship (science score vs. free and reduced lunch and vs. %ELL) and looking at who stuck out. I visited 2 of those schools and exchanged a few emails with another. Unfortunately, it takes a while to gather that information and it's not all in the same place. You can cheat a bit by going to the CDE website and looking at which schools are listed as the 100 schools similar to yours. If nothing else, it's pretty interesting what the CDE considers "similar." Spoiler alert: they don't seem very similar to me at all.

A quicker way is to just gather a few of the other subject area tests and compare them. Again, you're just looking for a break in a trend. In 8th grade, you can take the History, Science, and ELA scores. Math is trickier because some kids take Alg, some take Gen Math, and some take Geometry (and a very few take Alg II).[1]


cst2010 (version 1).xls

The red line shows the state averages. The yellow line shows the county averages. You can see they track each other almost perfectly. Light blue is my school. Nearly every school I've looked at is reasonably close to that nice straight line. Yeah, we're a little better in science than expected, but before I start bragging, check out the purple line. They kick butt in science (also shown when I graph vs. % free and reduced). The light green school? If I'm a social studies teacher, I'm going to go visit that school. As a side note, that light green school actually has a population that's 100% on free and reduced lunch and close to that for % ELL. Every other score across the entire school lines up along expected values. If I'm the principal of that school, I'm going into the history classes, figuring out what the heck they're doing, and doing whatever I can to spread it. If I were a history teacher at a different school, I'd spend some serious time observing those classes.
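The "break in the trend" check from a couple of paragraphs up amounts to the same arithmetic: compute each school's gap from the state average in every subject, then flag any subject whose gap is way off from that school's typical gap. A small sketch, again with invented numbers:

```python
# Sketch: flag the subject where a school breaks its own trend relative to the
# state averages. School names, subjects, and percentages are all invented.
state = {"ELA": 48, "History": 42, "Science": 57}

schools = {
    "My school":     {"ELA": 41, "History": 36, "Science": 56},
    "Purple school": {"ELA": 45, "History": 40, "Science": 74},
    "Green school":  {"ELA": 42, "History": 61, "Science": 50},
}

for name, scores in schools.items():
    gaps = {subj: scores[subj] - state[subj] for subj in state}
    typical = sum(gaps.values()) / len(gaps)
    for subj, gap in gaps.items():
        if abs(gap - typical) >= 10:  # arbitrary cutoff for "sticks out"
            print(f"{name}: {subj} breaks the trend (gap {gap:+d}, typical {typical:+.1f})")
```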


(wait! This got too long. I'm going to split this up and talk about student placement in post 2b)


[1]: If you wanted to, you could do what California assumes and bump the Gen Math kids down a level (only count Advanced as Proficient) and then maybe take the Geo kids and bump them up a level. You might get a good approximation.
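For what it's worth, here's one way that approximation could look in code. The band-shifting logic is just my reading of the footnote (Gen Math counted one level down, Geometry one level up), and every count is invented:

```python
# Sketch of the footnote's adjustment: shift Gen Math results down one band
# (so only Advanced counts as Proficient) and Geometry results up one band,
# then pool everything into a single approximate 8th grade math distribution.
BANDS = ["Far Below Basic", "Below Basic", "Basic", "Proficient", "Advanced"]

def shift(counts, offset):
    """Slide a {band: count} dict up or down by `offset` bands, clamping at the ends."""
    shifted = {b: 0 for b in BANDS}
    for i, band in enumerate(BANDS):
        j = max(0, min(len(BANDS) - 1, i + offset))
        shifted[BANDS[j]] += counts.get(band, 0)
    return shifted

# Invented counts of students per band on each test.
gen_math = {"Below Basic": 20, "Basic": 35, "Proficient": 30, "Advanced": 15}
algebra  = {"Below Basic": 10, "Basic": 30, "Proficient": 40, "Advanced": 20}
geometry = {"Basic": 5, "Proficient": 25, "Advanced": 30}

combined = {b: 0 for b in BANDS}
for counts in (shift(gen_math, -1), algebra, shift(geometry, +1)):
    for band, n in counts.items():
        combined[band] += n

total = sum(combined.values())
prof_or_above = combined["Proficient"] + combined["Advanced"]
print(f"approx. percent proficient or above in 8th grade math: {100 * prof_or_above / total:.0f}%")
```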