Full blog post is at Peter Greene’s CURMUDGUCATION
There was a moment during a presentation at last week’s Professional Learning Communities training (institute? gathering? big thingy?) that really illustrated, I think a bit unintentionally, the nuts-and-bolts problems with using data to “analyze” teacher effectiveness.
A chart of data from three classes, broken down by three skills, was on the screen, presented in student-by-student format. First, we looked at properly parsing the scores– counting the number of students who didn’t make the cut score rather than looking at class averages. Looked at that way, it was clear that one class had excellent results, one class had middlin’ results, and one had lousy results.
And then Dick DuFour started anticipating the explanations.
The classes might have different compositions of students. The classes might include students with learning disabilities. The classes might be at different times of day. Every possible reason you or I might give. And each one, for our example, was explained away. I honestly don’t remember whether this was a real case study or a hypothetical example, but the classes were, for all intents and purposes, identical in composition.
The progression of his example was clear. After you have eliminated all other factors as an explanation, only one factor remains. The teacher.
After you have eliminated all other factors.