Evil Academic Overlords for Peer-Review Reform

As I’ve started copyediting the last batch of accepted manuscripts for Education Policy Analysis Archives (EPAA) from my editorial tenure, I’ve been thinking of John Willinsky’s and Kathleen Fitzpatrick’s comments about academic publishing, open access, the peer review process, and academic credentialing in general. In his incrementalist “let’s push any move towards more open access” view, Willinsky pointed to Gene Glass’s founding of EPAA as an example of one route to access, what Willinsky called the “zero-budget” journal. And Fitzpatrick’s discussion of peer review (in Chapter 1 of the draft of Planned Obsolescence) pointed out the dilemmas of trying to build a sustainable new model of review. As I see the end of my duties approaching (you really thought an editor’s duties ended strictly at the end of the editorial tenure?), I’ve had a chance to think about the trajectory away from subscription-based print journals. I don’t know where academic publishing is headed, precisely, but I know what has happened in the recent past.

EPAA is a refereed journal, and I tried to run the English-language review process as close as I could to existing models, with double-blind reviews for the most part. But EPAA was and remains completely open-access, free to anyone who can download the articles. While staying within a prepublication peer-review model, it moved one giant step away from the model of academic journals that dominated the decades after World War II. When Gene began the journal in the early 1990s, it was distributed through an e-mail list. This was only one of Gene’s projects to broaden the discussion of education research through email lists, and he set up a number of lists for the various divisions of the American Educational Research Association.

He also set up a generic list on education policy, which is how we met in the mid-1990s. In a postdoctoral position at Vanderbilt, I started exploring lists and this new thing called the Mosaic browser. I subscribed to John Lloyd’s spedtalk list on special education. Then I found edpolyan, which Gene had created, and I became deeply enmeshed in a vigorous 1995 debate about the Tennessee Value Added Assessment System. Eventually I started submitting articles, joined the editorial board, and was encouraged to apply for the editorial position in 2004.

In the past almost-six years, I have learned a number of things most social-science and humanities journal editors learn: how institutional support gives you some time, but never enough; how odd it is that submitted pieces can both fit the mission of the journal and leave you scratching your head on who can competently review them; how hard it is to get ad hoc reviewers to respond to requests; how review logistics are like cat-herding, only without the organization; how uneven your colleagues’ research and writing skills are; how uneven your own are in comparison with some fabulous new scholars; how you never really knew how much you were avoiding learning the intricacies of a particular journal/citation style, and how much more successful some of your journal authors had been at avoiding that; how wonderful many new scholars are, and what a joy it is to give them a venue side-by-side with well-known scholars; what a great feeling it is to organize reviews so you can give coherent advice for revision; how you can be both absolutely on-target and completely off-base in predicting what articles get read, commented on, and cited; and how much you wish you could clone yourself so you could devote enough time to the journal, devote enough time to teaching, devote enough time to your own scholarship, and still have a life.

Running an open-access journal on something close to a zero-dollar budget (the college gave me a little break on teaching, and I had a wonderful graduate assistant for one year to help out), I learned quite a bit more: take the last clause of the paragraph above and multiply it several times. A zero-budget operation is not an easily sustainable model for accomplishing all the tasks required of a refereed journal. It requires a certain supply of surplus time, and there are no guarantees that an editor (or editorial team) will have the surplus time on a continuing basis for the central tasks, or that a reviewing pool will have the surplus time for refereeing.

Fitzpatrick addresses the reviewing part of the question, or at least the question of what would need to happen with a shift to post-publication review. She is on-target when she points out that the critical element is the evaluation of reviewing. In a standard pre-publication referee process, the editor (or editorial team) filters the referee reports, and any replacement would have to satisfy the discursive element of academic (meta-?)evaluation that Lamont described.

I understand Fitzpatrick’s leaning towards an algorithm, carefully constructed, again because I worry about the time required for thoughtful moderation. My experience with the mass-reviewing process at one of my scholarly societies is not positive: I regularly receive reviewer comments on American Educational Research Association meeting proposals that are widely divergent and often enough show that the reviewer either did not read my proposal or had no clue what the standards of the discipline were. Because of the algorithm AERA uses to apportion session slots to divisions, there is a perverse incentive for divisions to encourage oversubmission (and I’ve seen that operate in at least one division). That leaves program committee members with the distasteful task of looking at an inflated number of submissions with divergent and sometimes irrational ratings by reviewers, within a narrow window, before recommendations on acceptances are forwarded from the division volunteers to the central processors of submissions. The result is that I frequently see at least reasonable proposals (both mine and others’) that are not accepted, while the program has hundreds of sessions each year that are remarkably frugal in their use of scholarship. The frequent ridicule of AERA has its origins in a self-defeating program-development structure.

Maybe a more anarchic approach would work: scholars who have surplus time could become ad hoc reviewers of working papers that appear online. I occasionally write brief blog entries on papers that are likely to gain attention from newspaper reporters, and I could as easily write entries on working papers that appear online in other areas of interest. The advantages: no one has to organize this, it would be transparent, and readers could judge the work in the context of what I write in other entries (as well as my published scholarship). It would also feed into Google’s PageRank algorithm by linking to the working paper. The disadvantage: it’s anarchic, so idiosyncratic public reviewing of working papers will not satisfy the scholarly credentialing process Fitzpatrick discusses. And though my blog has an ISSN, it would probably not feed into Google Scholar’s algorithm. On the other hand, if more scholars are likely to read and cite someone else’s work because I write about it on my blog, maybe that’s not a bad thing. On the third hand, I don’t really want to be a kingmaker in my subfield. On the fourth hand, maybe the fear of Sherman Dorn as Sole Public Reviewer for a certain area will push others to become more active, either on their own or in creating the type of post-publication reviewing/endorsement organization that Fitzpatrick advocates.

I suspect I’m not nearly as fearsome as necessary to spur people to create such a system, but one can always dream of being an Evil Academic Overlord. Organize post-publication review or I shall destroy you!