Education has a whole host of statistics that have been unreliable, unnecessary, or off-target for years, and that continue to be created, published, and reported on. “Dropout rate” is one of them. It has been around for more than forty years, crafted in the late 1960s, when there was no way to find out what proportion of students graduated from individual schools or school districts. But some districts did count those who left school without plans to return: dropouts, to use a term that had only recently become the dominant way Americans talked about teenagers who left high school before graduation. And because there was visible public concern about dropping out, some tried to measure the phenomenon.
The measure was bad, and those who created it probably knew it was bad: divide the number of students a school says dropped out in a year by the total enrollment in the school. At the time, there were few other choices. There are several reasons why this so-called dropout rate is a bad measure:
- It does not measure what people actually care about: the proportion of teenagers who graduate, or its inverse, the proportion who never do. That is the common-sense question parents, administrators, and the public ask, and the dropout rate does not answer it. It is sort of like measuring the crude death rate (deaths divided by the population) without ever attempting to measure life expectancy.
- Changes in the statistic are often unrelated to the critical measure of interest (what proportion of teenagers are graduating from high school). A declining dropout rate might mean that fewer students are dropping out and thus more are likely to graduate. Hurray! But it might also reflect unrelated increases in school enrollments in high school (the denominator). This increase in school enrollment can happen because of growth in the teenage population (through births more than a decade before, or through migration). It can also happen if schools are failing more ninth-graders, so a higher proportion of students are taking more than four years to finish. Reverse all of this, and you can then have an increasing dropout rate without any changes in how many teenagers graduate.1
- It is vulnerable to incompetent or corrupt administration, because it requires accurately classifying everyone who leaves school. I could create a protocol for schools to do this well: require that a school classify anyone absent for three weeks as a dropout unless there is a documented reason to classify the person otherwise (e.g., independent confirmation of enrollment in another school). But nothing like that was done in the 1960s. Houston schools clearly falsified these data in the late 1990s, as was discovered after the “Houston Miracle” became one of former President Bush’s talking points in favor of NCLB.
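The denominator problem in the second bullet is easy to see with a quick back-of-the-envelope calculation. The numbers here are invented purely for illustration:

```python
# Hypothetical illustration of the denominator problem: the same number of
# teenagers drop out each year, but the dropout "rate" moves anyway because
# enrollment (the denominator) changes.

def dropout_rate(dropouts, enrollment):
    """Annual dropout rate: reported dropouts divided by total enrollment."""
    return dropouts / enrollment

# Year 1: 500 reported dropouts among 10,000 enrolled students.
year1 = dropout_rate(500, 10_000)   # 0.05, i.e., 5.0%

# Year 2: still 500 dropouts, but more ninth-graders are being retained,
# swelling enrollment to 12,500.
year2 = dropout_rate(500, 12_500)   # 0.04, i.e., 4.0%

print(f"Year 1: {year1:.1%}, Year 2: {year2:.1%}")
# The rate falls by a fifth even though exactly as many students left
# school without graduating.
```

A falling rate here looks like good news, but nothing about graduation changed; only the denominator did.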
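The classification protocol in the third bullet could be sketched roughly as follows. This is a hypothetical illustration of the rule described above, not any district's actual procedure; the function name, field names, and three-week threshold are my own assumptions:

```python
from datetime import date, timedelta

# Hypothetical sketch of the three-week classification protocol: a student
# absent longer than three weeks is a dropout unless there is a documented
# reason to classify the person otherwise.
ABSENCE_THRESHOLD = timedelta(weeks=3)

def classify_leaver(last_attended, today, documented_reason=None):
    """Classify a student who has stopped attending.

    documented_reason, if given, is an independently confirmed status
    (e.g., "transferred") that overrides the default classification.
    """
    if documented_reason is not None:
        return documented_reason
    if today - last_attended > ABSENCE_THRESHOLD:
        return "dropout"
    return "absent"

# A student last seen five weeks ago, with no documentation on file:
print(classify_leaver(date(2014, 9, 1), date(2014, 10, 6)))  # dropout
```

The point of such a rule is that the default classification is "dropout," so undercounting requires paperwork rather than mere neglect, which is exactly what the 1960s measure lacked.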
Last week, some reporting highlighted decreases in nationwide dropout rates: first picked up by Richard Fry at Pew, then by Libby Nelson at Vox and by Alexander Russo. A 1988 law required the federal government to publish reports on dropout rates; the Census Bureau began doing so and has continued ever since, and that annual report is the occasion for last week's coverage. We now have much better data on graduation rates, but we still get reporting on dropout rates because of the machinery of a required federal report, and school districts and states often still publish dropout rates based on (highly flawed) administrative data.
In the case of the federal data, the long-term trends of the Census-based dropout measures are highly correlated with the true measure of interest. But there is no longer any need to calculate a dropout rate, and it has all sorts of potential for mischief when schools publish it based on administrative data. It is a zombie measure; I wish reporters would stop volunteering their brains for its continued existence.