Data to explore implementation of federal graduation rate definition

Florida has released some data on high school graduation rates, using the new federal definition that holds high schools responsible for dropouts who immediately enroll in GED programs (previously, those dropouts were deleted from the state's official longitudinal rate, which had been following 9th graders through graduation or attrition). These data provide an opportunity to compare the accuracy of various proxy measures that a number of researchers have explored over the past decade. Until now, it has been impossible to compare those proxy rates against the ideal longitudinal graduation rate. The new federal definition is close to that ideal: not perfect, for a variety of reasons, but pretty good and better than a number of the proxy measures. I have had my concerns about the proxy measures, but now we have something solid to use.

So can we use the latest year's data? No: the 2010-11 data cannot (yet) be compared with Common Core of Data (CCD) variables for Florida's county-wide school districts, because the CCD is not available for 2010-11 (and diploma numbers won't be available until the 2011-12 data are released, since raw graduation numbers lag by a year). But I found Florida's data for the new-definition rates for 2003-2009 (embedded in the Word document describing the new rate) and put those together with relevant information from the CCD for Florida counties from 1998-99 (fall 1998 is when the on-time members of the 2003 graduating class would have been in 8th grade) through the latest year available.

Here is the spreadsheet. Data are in the first worksheet, and information on how to read the variable names is in the second. Go to town, folks…
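For anyone who would rather script the comparison than work in the spreadsheet directly, here is a minimal sketch (Python/pandas) of the cohort-year alignment involved in joining the new-definition rates to CCD county data. Every column name and number below is an illustrative placeholder, not one of the spreadsheet's actual variables (those are documented in the second worksheet).

    # Sketch of the cohort-year alignment, using pandas. Column names and
    # values are made up for illustration; they are not the variable names
    # in the posted spreadsheet.
    import pandas as pd

    # New-definition rates by district and graduating class (hypothetical).
    rates = pd.DataFrame({
        "district": ["Alpha", "Alpha"],
        "grad_class_year": [2003, 2004],
        "fed_grad_rate": [0.55, 0.58],
    })

    # CCD-style county data keyed on the fall of the school year (hypothetical).
    ccd = pd.DataFrame({
        "district": ["Alpha", "Alpha"],
        "ccd_fall_year": [1998, 1999],
        "enrollment_grade8": [120, 115],
    })

    # An on-time member of the class of 2003 was an 8th grader in fall 1998,
    # so the fall-of-8th-grade CCD year is the graduating-class year minus 5.
    rates["ccd_fall_year"] = rates["grad_class_year"] - 5

    merged = rates.merge(ccd, on=["district", "ccd_fall_year"], how="left")
    print(merged)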

5 responses to “Data to explore implementation of federal graduation rate definition”

  1. Glen S. McGhee

    Quite a range, from 39% to 85%. I know some of these districts, and these numbers reflect SES more than anything else. Calhoun, Washington, Gulf, Franklin: all small districts. Grad rates could reflect the intensity of focus on dropout rates and interventions, or they could reflect social promotion. It would be interesting to couple these stats with FCAT achievement in order to rule out the latter hypothesis.

    But how can any (one) teacher be held accountable for Jefferson's 39%? This is something to consider with any VAM proposal: how can such measures capture this set of environmental influences?

    http://www.fldoe.org/eias/eiaspubs/xls/FedGradRateRace_1011.xls

    I see here that Jefferson County is tiny, and this unduly biases its standing.

    You have to wonder about scaling effects for the tiny/middle/massive districts; these dynamics are probably emergent all over the place. At the statistical level, grad rates for the tiniest districts jump when just one student drops out (the small worked example after this comment shows how much). This is a clear example of what I mean.

    But you have to guess at the dynamics for the larger, massive districts. None of these are quite so obvious when it comes to the sixties grad rates of these behemoths that stomp in our swamps here in FL.
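The sensitivity to single students that Glen describes is easy to see with made-up numbers: one additional non-completer moves a 30-student cohort's rate by more than three percentage points but barely registers for a 3,000-student cohort. A quick sketch (the cohort sizes and the 70% baseline are hypothetical, not drawn from the Florida data):

    # How much one additional non-completer moves the graduation rate, for
    # hypothetical cohorts of different sizes (numbers are made up, not taken
    # from the Florida data).
    for cohort_size in (30, 300, 3000):
        grads = int(cohort_size * 0.70)       # suppose 70% graduate on time
        before = grads / cohort_size
        after = (grads - 1) / cohort_size     # one more student fails to finish
        print(f"cohort of {cohort_size:>4}: {before:.1%} -> {after:.1%} "
              f"(a swing of {(before - after) * 100:.2f} points)")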

  2. CCPhysicist

    Thanks for including the .doc file. I really liked the phrase “the rate fluctuated from 56.5% … to 70.6%” when there is only one (insignificant?) drop of 0.5% in one year out of 8. That curve does not look like a “fluctuation” to this physicist.

    I am astounded. There are counties with grad rates consistently below 50%, although one went from 33% to 48% in three years. Do they get a gold star for a 50% improvement? They probably deserve one! (I’m being quite serious. That is a dramatic change.)

  3. Glen S. McGhee

    Doesn’t this assume too much? Curve fitting like this has very limited applicability, mostly to physical reality.

    As Lew Coser used to say, this is a case of the technological tail wagging the empirical dog.