Is the College Board trolling high schools?

This week, we had a round of news stories about how average SAT scores are at their lowest level in years. In an example of particularly weak reporting, the Washington Post's Nick Anderson wrote,

The steady decline in SAT scores and generally stagnant results from high schools on federal tests and other measures reflect a troubling shortcoming of education-reform efforts. The test results show that gains in reading and math in elementary grades haven’t led to broad improvement in high schools, experts say.

All those experts who are saying this? The Fordham Institute's Mike Petrilli is the only one quoted in the article on the (minimal) national average trend, other than the head of assessment for the College Board, which owns the SAT.

Continue reading “Is the College Board trolling high schools?”

Mistakes in brief

So there’s this in response to this, which commented on this, which itself was a response to a whole lot of 10-year retrospectives on Katrina, some of which are full of motivated reasoning and some of which combine a pretty good empirical base and caution about drawing policy conclusions.1 And then the following from this: “reading his post reminded me how much more I enjoy debating within the family rather than debating with folks who will fundamentally oppose [my preferred policy position] regardless of what the data says.”

My impression: when I have my wits about me, I let everyone live down their mistakes du jour because I commit plenty of my own. But it certainly is a mistake to imply to independent observers that you want to work in a bubble or write in a bubble.


  1. At first glance, there is far too little material on the ten years since Katrina that is historically informed, but that can be remedied.

The arcane art of writing review letters for tenure/promotion

A colleague recently asked me how to write external review letters: the letters written by nationally known scholars who are asked to provide an independent outsider's view of a faculty member who is up for tenure and/or promotion in rank. My first thought: thank goodness there still are tenure-track positions, so view this as a wonderful obligation. Tenure does not apply to the majority of faculty in the country, but the review process is important for both letter-writers and faculty going up for tenure to understand. Assistant professors often hear all sorts of advice on compiling lists of potential reviewers, and I consistently tell colleagues that the most important qualification is the ability to put a body of scholarship into context. Someone who is nice to you but cannot perform that job should not be on the list. Someone who disagrees with your advisor on a critical question in the field but has a track record of being fair and articulate in reviews (or in contexts where you can see them "review in the wild")? Absolutely on the list.

In my response to my colleague, I explained my perspective in reading external review letters, as a member of peer review committees and as an academic administrator. For what it’s worth, here are the types of traits that make an external letter valuable for faculty review committees and administrators:

  1. A clear statement of the letter-writer's relationship with the faculty member — stating especially whether there have been any collaborations, or whether there are institutional commonalities (I once received a request to review a former fellow grad student at Penn, someone who's a good friend). If the connection is more than an "I've known the person primarily through interactions at conferences and we've appeared in the same anthology twice" sort, pick up the phone and call the person who asked you to review. You may still be the right reviewer, but transparency is the key to a valuable external review letter.
  2. Putting the intellectual contributions of the faculty member in context — most useful to me has been when letter-writers explain two features of a body of scholarship: what is new or unique, and what people in the field generally find valuable. This is where it is important to explain the contributions as if you are writing to a provost who is a biologist and a vice provost who studies Romance languages: what are the key questions in the field, and where does this faculty member's work intersect with and help address those questions? Depending on the vita and the instructions, you may also need to explain common co-authorship arrangements in the field, how to judge the practitioner-oriented impact of scholarship (e.g., in education finance, how the use of scholarship in lawsuits should count as an important impact), and the extent to which the scholar's work is independent of her/his advisor.
  3. Definitely comment on the specific pieces you were sent in the packet, as part of putting the scholar's work in context. Some of the least useful letters I have seen have entirely ignored what was in the packet. You need not be as detailed as you would be in a journal manuscript review — but do explain your assessment of the contributions, and use the pieces as examples of the outlets as well.
  4. If asked to comment on the quality of journals/outlets, leave the number-crunching to the scholar, department chair, or committee, and focus instead on the general audience and reception of the journal — e.g., not just that a journal is the primary research outlet for an important learned society, but that it continues to have an important impact in the field, is broadly read, etc.
  5. How to handle weaknesses in scholarship: be descriptive first, and then put the weakness in context with evaluative language. E.g., "Dr. Dorn's quantitative work generally uses bivariate correlations decades after social-science history became accustomed to multivariate analyses. Most of his original research is archival, where his original discoveries about postwar special education history have become his most important contribution."

Additional note: Instructions to reviewers vary all over the map – at ASU, we ask reviewers to judge scholarship against our standards, which we send along with the request. Sometimes instructions ask you to evaluate how a scholar would rank against the reviewer's own colleagues, or against the field as a whole. And sometimes there are no instructions at all – that's another time to pick up the phone and ask for more information.

After a year at ASU…

Yesterday was the first day of classes at Arizona State, and the start of my second year here. For the most part, my job has been as expected, with the bonus surprise of an incredibly supportive group of staff and academic professionals. In addition to orienting myself to a large college as quickly as I could, a good chunk of my time last year went to supporting colleagues who were reorganizing our EdD program for entirely-online students. While the program before this year had a number of elective options, an online program required a more cohesive curriculum sequence, and we used the opportunity to fill some holes and tighten the connections between courses and the broader program goals. That new organization began rolling out with the summer entry cohort in downtown Phoenix, and I taught the first new course (great experience, time crunch kicking my posterior, and I would not have had it any other way). We have our first online EdD cohort admitted now, ready for classes starting in October.

This year, my division has a number of initiatives, including the conversion of an existing face-to-face master's program to online status, the start of two new master's programs, and some other plans that would only make sense to insiders. I started some research projects with grad students in the summer, which led to a few conference proposals by the end of the summer along with future possibilities in various directions. And I have some other responsibilities that go along with the role of a division director (or department chair or school director, take your pick of title), in the sense of everyone being responsible for the success of the college as a whole.

This week, Dean Mari Koerner announced that she plans to step down as dean at the end of the current year, her tenth at ASU. I will have more to say about Dean Koerner at a future point, but for now I will just say that she led a nationally-recognized reform of teacher education when many programs elsewhere were in a defending-the-ramparts mode and persuaded me I would have a fulfilling role here. Mary Lou Fulton Teachers College is a dynamic college filled with great colleagues, in large part because of her.

Another dirty little secret of test-based measures

New York State’s department of education recently reported that approximately 20% of students in testing grades refused to participate in this year’s state assessments, the high-water mark thus far of the opt-out movement. Among the various stories and arguments flowing from that is the claim that 20% refusal easily exceeds the threshold of non-participation that invalidates conclusions drawn from testing. This is an argument made by both proponents and opponents of test refusals.

Here is the dirty little secret of the existing system of accountability: plenty of measures already operate with less than 80% coverage for many schools. Three come to mind this morning, with only one cup of coffee in me:

  • The federal graduation rate definition excludes students who move away from a school after ninth grade — thus the adjusted-cohort part of adjusted-cohort graduation rate. In areas with high student mobility, well over 20% of the original cohort will be uncovered in the graduation rate. (Are those mobile students truly counted somewhere? No one knows.)
  • Value-added algorithms require multiple years of test-score data. Again, with high student mobility, plenty of schools and a higher proportion of teachers have value-added indicators that come from far fewer than 80% of the students taught in a year.
  • In many states, even cross-sectional test data is limited to students who attended a particular school for at least a good part of the school year. High-mobility schools have far more than 20% turnover in a year.

If 20% nonparticipation is a measure-killer, we need to worry about far more than New York state’s accountability indicators.

What can happen with missing information in a statistic? At least two things:

  • Bias from the nonrandom nature of missing values. If participants differ in fundamental ways from nonparticipants, the measure will not reflect the complete population.
  • An incorrect assumption that a statistical estimate is more accurate than it truly is. This is a key insight from Donald Rubin's research on missing data in the 1980s: properly accounting for missing values generally yields larger standard errors than treating the observed data as if it were complete.

There are other problems, but these are the main ones to keep in mind.
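A quick simulation makes the first problem concrete. This is a minimal sketch with entirely hypothetical numbers: I assume a population of test scores and, purely for illustration, a refusal pattern where higher-scoring students opt out more often (the direction of real opt-out bias is an empirical question, not something this sketch settles).

```python
import random

random.seed(1)

# Hypothetical population of 10,000 test scores (mean 500, sd 100).
population = [random.gauss(500, 100) for _ in range(10_000)]

def refuses(score):
    # Illustrative assumption: refusal probability rises with score,
    # capped at 40%; students at or below the mean never refuse.
    return random.random() < min(0.4, max(0.0, (score - 500) / 250))

observed = [s for s in population if not refuses(s)]

pop_mean = sum(population) / len(population)
obs_mean = sum(observed) / len(observed)

print(f"population mean: {pop_mean:.1f}")
print(f"observed mean:   {obs_mean:.1f}")
print(f"coverage:        {len(observed) / len(population):.0%}")
```

Because only above-average students ever refuse in this toy setup, the observed mean lands below the true population mean even with coverage well above 80% — nonrandom missingness, not the raw participation rate, is what drives the bias.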