This week, we had a round of news stories about how average SAT scores are at their lowest level in years. In an example of particularly weak reporting, the Washington Post's Nick Anderson wrote,
The steady decline in SAT scores and generally stagnant results from high schools on federal tests and other measures reflect a troubling shortcoming of education-reform efforts. The test results show that gains in reading and math in elementary grades haven’t led to broad improvement in high schools, experts say.
All those experts who are saying this? Mike Petrilli of the Fordham Institute is the only one quoted in the article on the (minimal) national average trend, other than the assessment head for the College Board, which owns the SAT.
College Board’s Cyndie Schmeisser wrote, in part, “This is a call to action to do something different to propel more students to readiness.” The reaction of experienced education researchers more broadly? Underwhelmed, at least according to folks such as Robert Kelchen. Tom Loveless’s reaction:
No policy moves should be made based on latest SAT scores. Zero. None. @MichaelPetrilli @smarick @kombiz @kdrum
— Tom Loveless (@tomloveless99) September 4, 2015
But in reality, this is at least as much a call to action for more students to take the SAT as for high schools to do something specifically about college entrance exams. Most college students attend non-selective institutions, where even if a test score is required for application, it does not determine admission. On the other hand, the College Board has been losing market share for years: in 2012, the ACT passed the SAT in the number of high school students taking the test. And because most colleges that do require a test score will take either the ACT or the SAT, we have a two-player market and a zero-sum game.
How much is this year's College Board statement a way to boost the legitimacy of the SAT by making it the fulcrum of discussion for at least a few weeks, and of panic among local school boards? I cannot say with certainty, but we have seen this play before: in the mid-1970s, after a decade of repeated criticisms of standardized tests as biased in various ways, the College Board commissioned a white paper to explain the suddenly-alarming decline in mean SAT scores — something I discussed briefly in a 1998 article on the politics of accountability. It was about that time that both print and broadcast news outlets started reporting on the annual release of average admissions test scores — data from an entirely voluntary test taken by a self-selected group of high school juniors and seniors, a test unrelated to the high school curriculum. The long-term result of that wave of panic? The SAT became re-legitimized as a source of information on student and educational quality, to the extent that public four-year institutions commonly require test scores and report the averages as if they were a proxy for institutional quality, and U.S. News uses average test scores the same way.
A few years ago, Steve Glazerman coined a term to describe the fallacious use of national test score data to draw policy inferences: misNAEPery, a pun on the acronym for the National Assessment of Educational Progress. The term is now used broadly, including by national education reporters. We need an equivalent for the fallacious use of admissions test scores to draw inferences about high school practices. I suggested mis-SAT-ery (or maybe misSATyry?). Paul Bruno had a better suggestion:
@shermandorn @tomloveless99 @MichaelPetrilli @smarick @kombiz @kdrum Often accompanied by malprACTice.
— Paul Bruno (@MrPABruno) September 4, 2015
Among many other reporters, Nick Anderson has committed reporting malprACTice this week.