You don’t have to be as flip as Leo Casey to see the problem with Jocelyn Huber’s op-ed in the Tennessean today, which is a generic, bland defense of tying student test scores to teacher and principal evaluation. Huber’s op-ed is almost certainly a response to Monday’s NYT article by Michael Winerip, which identifies (and dramatizes) the concerns of a number of Tennessee educators about the state’s new evaluation system. Like Florida’s and Colorado’s, the Tennessee system has a number of arcane pieces in the algorithm tying test scores to evaluations, and like those and other states’ systems, it’s jerry-built.
On the one hand, I think on principle that student outcomes should play a role in evaluation. On the other hand, there is something naive or creepy going on when advocates of doing so leave out all the caveats and problems of plunging in without caution. Or, to quote someone with whom I often disagree,
None of this is cause to shy away from incorporating value-added metrics into teacher evaluation and pay. But it’s cause to move deliberately, encourage experimentation, and note that respected, knowledge-based firms like Apple and 3M don’t try to drive all their employees’ evaluations or pay off a handful of uniform data points.
Rick Hess was right in his comments in April, especially the last one: for everyone who cheered the Widget Effect report blasting evaluations and HR policies that treated teachers in a standardized fashion, I hope you’re all standing up and fighting evaluation-algorithm fetishes in Tennessee, Florida, and elsewhere. Because when you look at it, there is nothing more widgety-absurd than imputing fourth- and fifth-grade reading scores to the evaluation of a kindergarten teacher or an arts teacher.
All these sparkly-new teacher evaluation systems that put a heavy weight on student test scores for every teacher, willy-nilly? The new widget effect.