Thursday, December 17, 2009

All Else is Never Equal

In a previous post I mentioned one of the problems with educational research. A post by Andrew Gelman brings up another problem. I'm actually quoting Andrew Gelman quoting himself here:
The "All Else Equal" Fallacy: Assuming that everything else is held constant, even when it's not gonna be.
There are many books that try to tell us what works in schools, and I certainly use them as a guide. However, they commit the all-else-equal fallacy by stripping results of their context. Nearly all educational research looks like this:
  1. Multiple choice pre-test
  2. Treatment Group vs. Control group
  3. Multiple choice post-test
  4. Look at mean, calculate effect size and decide what works
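Step 4 above usually means computing a standardized effect size such as Cohen's d. A minimal sketch of that calculation, with invented post-test scores (none of these numbers come from any real study):

```python
# A hedged sketch of step 4: Cohen's d effect size between treatment
# and control post-test scores. All scores here are made up.
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Difference in means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

treatment = [78, 85, 82, 90, 74, 88]  # hypothetical post-test scores
control = [70, 75, 80, 72, 68, 77]
print(round(cohens_d(treatment, control), 2))  # prints 1.71
```

A single number like this is exactly what ends up in the meta-analyses: the classroom, the teacher, and the students are gone.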
We then get the results in summaries or meta-analyses and are told to do these things because they work. We don't have the context. We're told that, all else being equal, this will work. But as Gelman says, researchers are assuming all else will be held constant even when it's not gonna be. What does this mean for us? A few things.

We need to read the original sources as often as possible. When that isn't feasible, we should still try to understand the context of the original experiment as much as we can.


Use research as a map, but don't let it take the steering wheel. Certain strategies and systems might have a higher probability of working, but you can't let that get in the way of what you see in your own classroom.

Conduct your own research. With a standards-based assessment system you are better able to use data to look at differences between your own classes and other classes at your school. In a traditional system, all As, Bs, and Cs are not created equal: even within your own class it's hard to tell why each student has the grade they have, and comparing students between classes is nearly meaningless.
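With per-standard scores, that kind of comparison becomes simple arithmetic. A rough sketch, assuming a 0–4 scale per standard (the class periods, standard names, and scores are all invented):

```python
# Hedged sketch: standards-based scores let us compare classes
# standard by standard instead of by opaque letter grades.
# All data below is hypothetical.
period_1 = {"fractions": [3, 2, 4, 3], "ratios": [2, 2, 3, 2]}
period_2 = {"fractions": [4, 3, 4, 4], "ratios": [2, 1, 2, 3]}

def standard_means(scores_by_standard):
    """Mean score for each standard in one class."""
    return {std: sum(scores) / len(scores)
            for std, scores in scores_by_standard.items()}

m1, m2 = standard_means(period_1), standard_means(period_2)
for std in m1:
    print(f"{std}: period 1 = {m1[std]:.2f}, period 2 = {m2[std]:.2f}")
```

A gap on one standard between two periods points at something specific to reteach; a gap between two traditional grade averages points at nothing in particular.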
