The arcane art of writing review letters for tenure/promotion

A colleague recently asked me how to write external review letters. These are letters written by nationally known scholars who are asked to provide an independent outsider's view of a faculty member who is up for tenure and/or promotion in rank. My first thought: thank goodness there still are tenure-track positions, so view this as a wonderful obligation. Tenure does not apply to the majority of faculty in the country, but the review process is important for both letter-writers and faculty going up for tenure to understand. Assistant professors often hear all sorts of advice on compiling lists of potential reviewers, and I consistently tell colleagues that the most important qualification is the ability to put a body of scholarship into context. Someone who is nice to you but cannot perform that job should not be on the list. Someone who disagrees with your advisor on a critical question in the field but has a track record of being fair and articulate in reviews (or in contexts where you can see someone "review in the wild")? Absolutely on the list.

In my response to my colleague, I explained my perspective as a reader of external review letters, both as a member of peer review committees and as an academic administrator. For what it's worth, here are the traits that make an external letter valuable for faculty review committees and administrators:

  1. A clear statement of the letter-writer's relationship with the faculty member — especially whether there have been any collaborations, or whether there are institutional commonalities (I once received a request to review a former fellow grad student at Penn, someone who's a good friend). If the connection is more than "I've known this person primarily through interactions at conferences, and we've appeared in the same anthology twice," pick up the phone and call the person who asked you to review. You may still be the right reviewer, but transparency is the key to a valuable external review letter.
  2. Putting the intellectual contributions of the faculty member in context — most useful to me has been when letter-writers explain two features of a body of scholarship: what is new or unique, and what people in the field generally find valuable. This is where it is important to explain the contributions as if you are writing to a provost who is a biologist and a vice provost who studies Romance languages: what are the key questions in the field, and where does this faculty member's work intersect with and help address them? Depending on the vita and the instructions, you may also need to explain common co-authorship arrangements in the field, how to judge the practitioner-oriented impact of scholarship (e.g., in education finance, how the use of scholarship in lawsuits should count as important impact), and the extent to which the scholar's work is independent of her/his advisor.
  3. Definitely comment on the specific pieces you were sent in the packet, as part of putting the scholar's work in context. Some of the least useful letters I have seen have entirely ignored what was in the packet. You need not be as detailed as you would be in a journal manuscript review—but do explain your assessment of the contributions, and use the pieces as concrete examples of the outlets as well.
  4. If asked to comment on the quality of journals/outlets, assume that the scholar, department chair, or committee can crunch the numbers, and focus instead on the general audience and reception of the journal—e.g., not just that a journal is the primary research outlet for an important learned society, but that it continues to have an important impact in the field, is broadly read, and so on.
  5. How to handle weaknesses in scholarship: describe first, then add context and evaluative language. E.g., "Dr. Dorn's quantitative work generally uses bivariate correlations decades after social-science history became accustomed to multivariate analyses. Most of his original research is archival, however, and his original discoveries about postwar special education history have become his most important contribution."

Additional note: instructions to reviewers are all over the map – at ASU, we ask reviewers to judge scholarship against our standards, which we send to reviewers. Sometimes instructions ask you to evaluate how a scholar would rank against your own colleagues, or against the field at large. And sometimes there are no instructions at all – that's another time to pick up the phone and ask for more information.

After a year at ASU…

Yesterday was the first day of classes at Arizona State, and the start of my second year here. For the most part, my job has been as expected, with the bonus surprise of an incredibly supportive group of staff and academic professionals. In addition to orienting myself to a large college as quickly as I could, a good chunk of my time last year went to supporting colleagues who were reorganizing our EdD program for entirely online students. While the program before this year had a number of elective options, an online program required a more cohesive curriculum sequence, and we used the opportunity to fill some holes and tighten the connections between courses and the broader program goals. That new organization began rolling out with the summer entry cohort in downtown Phoenix, and I taught the first new course (great experience, time crunch kicking my posterior, and I would not have had it any other way). We have our first online EdD cohort admitted now, ready for classes starting in October.

This year, my division has a number of initiatives, including the conversion of an existing face-to-face master's program to online status, the start of two new master's programs, and some other plans that would only make sense to insiders. I started some research projects with grad students over the summer, which have already led to a few conference proposals along with future possibilities in various directions. And I have some other responsibilities that go along with the role of a division director (or department chair or school director, take your pick of title), in the sense of everyone being responsible for the success of the college as a whole.

This week, Dean Mari Koerner announced that she plans to step down as dean at the end of the current year, her tenth at ASU. I will have more to say about Dean Koerner at a future point, but for now I will just say that she led a nationally recognized reform of teacher education when many programs elsewhere were in defending-the-ramparts mode, and she persuaded me I would have a fulfilling role here. Mary Lou Fulton Teachers College is a dynamic college filled with great colleagues, in large part because of her.

Another dirty little secret of test-based measures

New York State’s department of education recently reported that approximately 20% of students in testing grades refused to participate in this year’s state assessments, the high-water mark thus far for the opt-out movement. Among the various stories and claims flowing from that report is the argument that 20% refusal is easily over the threshold of non-participation that invalidates conclusions drawn from testing. Both proponents and opponents of test refusals have made this argument.

Here is the dirty little secret of the existing accountability system: plenty of measures already operate with less than 80% coverage for many schools. Three come to mind this morning, with only one cup of coffee in me:

  • The federal graduation rate definition excludes students who move away from a school after ninth grade — thus the adjusted-cohort part of adjusted-cohort graduation rate. In areas with high student mobility, well over 20% of the original cohort will be uncovered in the graduation rate; a back-of-the-envelope sketch follows this list. (Are those mobile students truly counted somewhere? No one knows.)
  • Value-added algorithms require multiple years of test-score data. Again, with high student mobility, plenty of schools and a higher proportion of teachers have value-added indicators that come from far fewer than 80% of the students taught in a year.
  • In many states, even cross-sectional test data is limited to students who attended a particular school for at least a good part of the school year. High-mobility schools have far more than 20% turnover in a year.
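
To make the first bullet concrete, here is a minimal sketch of the adjusted-cohort arithmetic in Python. The enrollment and graduation counts are invented for illustration; only the structure of the federal definition is real (graduates divided by an adjusted cohort of first-time ninth graders, plus transfers in, minus students who transfer out, emigrate, or die).

```python
# Hypothetical high-mobility school; all counts are made up.
first_time_ninth_graders = 400
transfers_in = 60
transfers_out = 120  # moved away; dropped from the cohort entirely
graduates = 260

adjusted_cohort = first_time_ninth_graders + transfers_in - transfers_out
acgr = graduates / adjusted_cohort
uncovered = transfers_out / first_time_ninth_graders

print(f"Adjusted-cohort graduation rate: {acgr:.0%}")                  # 76%
print(f"Original ninth graders the rate never sees: {uncovered:.0%}")  # 30%
```

A school can report a respectable-looking rate while 30% of its original ninth-grade class goes uncounted, which is exactly the kind of coverage gap the opt-out argument treats as disqualifying.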

If 20% nonparticipation is a measure-killer, we need to worry about far more than New York State’s accountability indicators.

What can happen with missing information in a statistic? At least two things:

  • Bias from the nonrandom nature of missing values. If participants differ in fundamental ways from nonparticipants, any measure based only on participants will fail to reflect the complete population.
  • Overstated precision, i.e., the incorrect assumption that a statistical estimate is more accurate than it truly is. This is an important finding from Donald Rubin’s research on missing data in the 1980s: properly correcting for missing values generally leads to larger standard errors for statistics.

There are other problems, but these are the main ones to keep in mind.
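
A small simulation makes both problems visible. This is a toy sketch with invented numbers, not an analysis of any real assessment data: the score distribution, the assumption that higher-scoring students opt out more often, and the deliberately simple resampling imputation are all stipulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: 10,000 students with test scores.
N = 10_000
scores = rng.normal(loc=500, scale=100, size=N)

# Nonrandom nonparticipation: assume higher-scoring students are
# likelier to opt out, leaving roughly 20% of scores missing.
p_opt_out = np.where(scores > np.quantile(scores, 0.75), 0.50, 0.10)
missing = rng.random(N) < p_opt_out
observed = scores[~missing]

# Problem 1: bias. Participants differ from nonparticipants, so the
# observed mean drifts below the true population mean.
print(f"true mean      {scores.mean():6.1f}")
print(f"observed mean  {observed.mean():6.1f}  ({missing.mean():.0%} missing)")

# Problem 2: overstated precision. Filling in missing values once and
# then treating the data as complete understates uncertainty. Rubin's
# combining rules for multiple imputation add the between-imputation
# variance back in, which enlarges the standard error.
M = 100
means, within = [], []
for _ in range(M):
    draws = rng.choice(observed, size=missing.sum(), replace=True)
    completed = np.concatenate([observed, draws])
    means.append(completed.mean())
    within.append(completed.var(ddof=1) / N)

W = np.mean(within)          # average within-imputation variance
B = np.var(means, ddof=1)    # between-imputation variance
se_single = np.sqrt(within[0])           # pretends filled-in data are real
se_rubin = np.sqrt(W + (1 + 1 / M) * B)  # Rubin's combining rules

print(f"single-imputation SE  {se_single:.2f}")
print(f"Rubin-combined SE     {se_rubin:.2f}  (larger)")

# Note: because the missingness here is nonrandom, resampling from the
# observed scores does not remove the bias; the combined standard error
# is more honest about uncertainty, but the estimate itself stays off.
```

The first two printed lines show the bias (the observed mean sits well below the true mean), and the last two show Rubin’s point: once you account for what the imputation cannot know, the standard error grows.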

It’s all about (re)bundling

The incipient University Learning Store is a demonstration of why unbundling is unlikely to be the future of higher education. If I understand the linked Inside Higher Ed article correctly, this is an attempt by seven universities to create an ecosystem of non-credit microcredentials (or badges) that takes advantage of the broader capacity of the collaboration. There are substantial barriers to creating such an ecosystem, including the types of bureaucratic problems that Amy Laitinen and Matt Reed have discussed recently around the feds’ delayed guidance on competency-based education: how do you construct financial aid around something that looks very different from the credit hour, the basis of the current federal financial-aid system?

But despite those barriers, the construction of an ecosystem is how innovative higher education has to proceed — while many Americans have been and remain autodidacts, our history is one of educational institutions and educational ecosystems.1 Even where we have independent systems of learning, they often develop ecosystems, what my colleagues James and Elisabeth Gee call nurturing affinity spaces. No matter how many times pundits gabble on about unbundling, in reality people want to be supported in learning, and expect to be.


  1. For a history of autodidacticism, see Joseph Kett’s The Pursuit of Knowledge under Difficulties. I wonder whether any of the authors promoting unbundling have read Kett’s history.

Hillary Clinton higher-ed initiative as Shrek

Hillary Clinton’s campaign is issuing her complicated anti-college-debt plan today, and the analyses are starting to pop up. It may be useful to think of the plan as operating in several layers:

  • Campaign promise as symbolic politics. Campaigns call out to various constituencies in hopes of attracting support on the basis of various symbolic and real affinities. In this case, the intended message is, “I care about your family and am competent to protect your interests.” (All serious candidates will try to project this message.) See Patrick Riccards’ comments about one of the locations Clinton will use in talking about the plan as an example of why the symbolic layer matters.
  • Campaign promise as shiny ob–squirrel! Campaigns can also make promises and issue statements as quasi-events in themselves. I do not think today’s release from the Clinton campaign has this as an intended effect — the package is complicated enough that putting the pieces together probably determined the timing of the release, rather than any wish to counterpoint the Trump spectacle on the GOP side.
  • Campaign promise as a predictor of policy initiatives. John Edwards made health care an issue in the 2008 campaign, Hillary Clinton proposed individual mandates, and Barack Obama’s response to both foreshadowed his pushing health-care reform through in 2010.
  • Campaign promise as predictor of governing patterns.

Of these, the last is most interesting to me vis-à-vis Hillary Clinton.
