‘Well I’ll be VAMned!’ Why using student test scores to evaluate teachers is a sham

Reblogged from The Washington Post’s Answer Sheet

If by now you don’t know what VAM is, you should. It’s shorthand for value-added modeling (or value-added measurement), developed by economists as a way to determine how much “value” a teacher brings to a student’s standardized test score. Supporters say these formulas can factor out things such as a student’s intelligence, whether the student is hungry, sick, or subject to violence at home, or any other factor that could affect performance on a test beyond the teacher’s input. But assessment experts say that such formulas can’t really do that accurately and reliably. In fact, the American Statistical Association issued a report in 2014 on VAM that said: “VAMs are generally based on standardized test scores and do not directly measure potential teacher contributions toward other student outcomes.”
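To make the idea concrete, here is a deliberately simplified sketch of what a value-added calculation looks like in practice. Everything here (the data, the covariates, the number of teachers) is invented for illustration; real VAMs use far more elaborate statistical models. The basic move is the same, though: predict each student’s current score from prior scores and background covariates, then credit (or blame) each teacher for the average leftover residual among their students.

```python
# Toy "value-added" style estimate -- illustrative only.
# All data and covariates below are simulated/hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n_students = 300
teachers = rng.integers(0, 3, size=n_students)   # 3 hypothetical teachers
prior = rng.normal(50, 10, size=n_students)      # last year's test score
poverty = rng.integers(0, 2, size=n_students)    # a crude background covariate

# Simulated current scores: mostly driven by prior achievement plus noise
current = 5 + 0.9 * prior - 2.0 * poverty + rng.normal(0, 5, size=n_students)

# Step 1: regress current scores on prior score and covariates (least squares)
X = np.column_stack([np.ones(n_students), prior, poverty])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)

# Step 2: the residual is the part of each score the model cannot explain
residuals = current - X @ beta

# Step 3: a teacher's "value-added" is the mean residual of their students
for t in range(3):
    va = residuals[teachers == t].mean()
    print(f"teacher {t}: value-added estimate = {va:+.2f}")
```

Notice what the critics are pointing at: whatever the model fails to capture about students' lives ends up in the residual, and the residual is exactly what gets attributed to the teacher.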

Still, the method has been adopted as part of teacher evaluations in most states — with support from the Obama administration — and used for high-stakes decisions about teachers’ jobs and pay. “Growth” scores, which likewise evaluate teachers based on student test scores, don’t control for outside factors at all.

Use of student test scores to evaluate teachers has created some situations in schools that are, simply, ridiculous. In New York City, for example, an art teacher explained in this post how he was evaluated on math standardized test scores and saw his evaluation rating drop from “effective” to “developing.” Why was an art teacher evaluated on math scores? There are only tests for math and literacy, so all teachers are in some way linked to the scores of those exams. (Really.) In Indian River County, Fla., a middle school English Language Arts teacher named Luke Flynt learned that his highest-scoring students hurt his evaluation because of the peculiarities of how he and his colleagues are assessed. (You can read about that here.)

Here’s a piece by educator Carol Burris showing, with data, how using “growth” scores to evaluate teachers in New York is something of a sham. Read more>>