The Blogora: The Rhetoric Society of America

 

The Unbearable Slowness of Peer Review


Submitted by Jim Brown on June 10, 2011 - 11:49am


A couple of weeks ago, during the Computers and Writing Conference in Ann Arbor (as an aside, this was a superbly run conference by the folks in Ann Arbor), I was part of a Town Hall discussion entitled "The Future(s) of Computers and Writing."

I took that opportunity to bring up what has become my pet issue during the last two years: the unbearable slowness of peer review. This is something I've taken up in a previous Blogora post, and I'm sure people are sick of hearing me complain about it. But I still can't quite figure out why it takes folks so long to set aside a few hours to review a manuscript. As Managing Editor of Enculturation, I ask reviewers to respond within 3-4 weeks. When I mention this timeline to anyone, I get raised eyebrows.

Recently, I've been working on revisions of my own manuscripts and helping some colleagues work through their own revisions. It occurred to me that having to wait 4 months to get a response from a journal completely changes the way we address writing and revision. We are put in a position not unlike the punched card computer programmers of yesteryear, programmers who had to hope and pray that they'd debugged their programs and who would have to get to the back of the line to re-run the program. We send off our manuscripts and/or our revisions, and we wait…and wait. Instead of writing, compiling, looking at bugs, writing, compiling again…we write, submit, and wait.

Perhaps this approach to writing is supposed to be useful. After all, some programmers (and Computer Science professors) would prefer that programs be perfectly crafted (or at least as close as possible to perfectly crafted) prior to compilation or execution. But I'm just not sure that's how most people write. Writing is like hacking: You try something out, see what happens, and then readjust. But our current peer review infrastructure does not allow for this.

Regardless, during the Town Hall in Ann Arbor, I began to think that maybe moralizing isn't the best way to approach this problem. In conversations about peer review, I continue to use the argument that it is our responsibility to provide prompt peer review feedback. And I insist our field's inability to do so is inexcusable. We put article reviews on the bottom of the pile, and I don't know why.

But this argument doesn't seem to be working. Someone in the audience of the Town Hall responded to my complaint by saying that "life happens." That is, sometimes other stuff just gets in the way. And that's just the way it is. Frankly, I find this argument infuriating.

However, I can be as furious as I want. It's not changing anything. So, I'm beginning to think that we just need to completely rethink peer review. The Ann Arbor discussion raised possibilities for collaborative peer review in which reviewers team up and write the review together. Others suggested various crowdsourcing alternatives. I'm open to these kinds of options, but I'm still trying to wrap my brain around how some of them would work.

And part of me thinks that these solutions are still just a way of avoiding the central problem: When it comes to peer review, we don't treat one another very well. I know everyone is busy. I know that life happens. But we somehow find ways to get our other work done, to get our kids fed, to get episodes of The Wire watched. For the life of me, I can't figure out why our discipline is not bothered by 4-month turnaround times on article manuscripts.


Submitted by Paul (not verified) on October 18, 2011 - 1:30am.

This frustrates me to no end, but I doubt we'll see any kind of "reform" in the way peer review is handled. Instead, it will one day be completely supplanted by a digital (likely crowdsourced) alternative. Unfortunately, that has weaknesses of its own.

Submitted by rherring on June 13, 2011 - 4:16pm.

I'm thinking about this in relation to the recent decision by the American Economic Review to abandon double anonymity in reviewing, in part because John Holbo over on Crooked Timber has responded with a proposal:

Publish referee reports with the journal articles themselves. (If that takes too much paper, make the referee reports a standard e-option, at least.) Give referees the option of de-anonymizing themselves, to stand behind their words.

Although this doesn't have a lot to do with your concern (which I agree is important), Jim, what is relevant is the question of how much time is spent on writing useful reports. There is a certain logic, perhaps even a justification, in delaying a reading for a few weeks until one can give it--and the resulting report--enough time to feel one has responded responsibly.

Mostly, I suppose, I'm just saying it would be interesting to study whether there's any correlation between the length of time a reader takes to return a review and the quality of that review. (My guess is that we all have anecdotal evidence to suggest there's no correlation--we all have a story about a one-sentence report that arrived 15 months after submission, and even then, we had to beg for the report... But it would be interesting to see what some carefully designed research into this question, considering readers for the same [or same set of] journal[s], would produce.)

Submitted by Joshua Gunn (slewfoot) (not verified) on June 11, 2011 - 12:57pm.

Jim, I share your pet issue, but four months seems like a luxury to me. I only have one thing in the pipeline (not good for tenure review purposes). I originally sent it to the journal on March 29th, 2010. Once the thing is finally rejected I'll report what the journal is and who the editor is.

I review a lot, and I give myself a four-week time frame (it used to be two weeks, but then associate-professorship happened). I agree that it is unprofessional to dally. I see two basic causes.

1. Folks don't care. It's not a valued practice, and it's not seen as a priority. Let's just call it out: some folks just don't give a sh+#, and often those folks are the more senior colleagues in the field. I'd wager that how seriously one takes reviewing is inversely correlated with rank; the untenured take it most seriously of all. I know, this is obvious, but still, it seems like folks are afraid to admit this is the core problem.

2. Folks are overwhelmed. If you compare, say, what Jim and I do to what professors did in the 1980s, you'll probably notice a marked increase in responsibilities. "Do more with less" has been the motto of state schools (where most comp/comm programs are housed). We're advising more, teaching more, and being asked to publish a heck of a lot more.

Don't get me wrong: I don't think there was ever a Golden Age of Reviewing. Most folks who have been around for a long time will quickly disabuse you of such a notion. But I do think we are expected to publish more, which puts more of a strain on the reviewing system. I agree, then, that we have to start strategizing about how to make peer review more efficient. Volume is overwhelming folks/the system.

Finally, it should be said: If you are a speedy reviewer, you keep getting asked to review. If you are a slow reviewer, you get blacklisted. What kind of reward system is this? You get more work if you're responsible, less work if you're not . . . .

A new editor at a prestigious journal just asked me to review a fifth manuscript since January. Why? Probably because I actually do my reviews, and usually in a timely fashion. I think so highly of this journal--and see it as so important to my own career--that I'm doing it. But perhaps this points up another reason for long review periods: we're no longer loyal to "a" single journal, or we no longer see our journals the way car companies used to see their prized models?

Submitted by Dorothy Bishop (not verified) on June 11, 2011 - 12:46am.

It's hugely variable from discipline to discipline, and even from journal to journal. Any decent editor will blacklist reviewers who agree to review and then take months. But alas, there are bad editors: see http://tinyurl.com/33lzsvp. I inadvertently found (through an indiscreet journal secretary) that a paper of mine sat with the journal a couple of months before reviewers were assigned - and then only when I complained. I also had a bad experience with Archives of Disease in Childhood, whose policy seems to be to send a paper to a (very slow) statistical reviewer only after the other reviewers have commented - regardless of the statistical complexity of the analyses or the stats expertise of the authors. That one took 6 months from submission to decision.
But often the problem is compounded by difficulty finding reviewers, so part of the lag can consist of editors writing to invite reviewers who then decline. You need a *really* sharp editorial office to keep on top of this. I've edited at PLOS One, which is good in this respect: potential reviewers who don't respond promptly get uninvited, and ideally the editor has specified a list of people so that the next one is immediately invited.
I know there's also a view that people who don't agree to review are bad citizens, but believe me, some of us get so many requests to review (I get several per week) that we can't do them all.
Journals like PLOS One are demonstrating that if a journal takes speed of response seriously, average peer review time comes down markedly, but even with the best will in the world, it comes down to the goodwill of unpaid reviewers.

Submitted by Josh Gunn (not verified) on June 12, 2011 - 12:49am.

Just read your typology of editors. Heeelarious. And helpful.

Submitted by Dahlgren (not verified) on June 10, 2011 - 12:45pm.

I imagine you are familiar with this project, but I thought I would put it out here:

http://mediacommons.futureofthebook.org/mcpress/plannedobsolescence/

I don't know if it is the answer to bad manners, but it is another model to think about.

Submitted by Jim Brown on June 12, 2011 - 8:19am.

Thanks, all, for thinking with me.

@Dahlgren: I am familiar with Kathleen Fitzpatrick's approach to peer review. In fact, I commented on a book chapter of hers (for a collection on David Foster Wallace). I'm intrigued by crowdsourcing approaches, but only if they still ensure that the goal of peer review is met. Peer review of a disciplinary publication requires response from a relatively narrow slice of people. This is not to say that those outside the discipline can't contribute usefully. But if the audience is the discipline, then the peer review needs to reflect that.

Perhaps the crowdsourced approach works best for books, which are more likely (hopefully) to gain a broader readership.

Another idea that came up during the C&W conversation: publishing revisions/drafts on the journal website (not on authors' blogs) while articles are being reviewed. This happens often in the sciences, and it's worth considering...
