too many reviewers

I freaked out recently when, after reviewing an article, I received a packet of FIVE (5!!!) reviews on the same article. I chewed out the editors for wasting my time and told them I would never review for their journal again. After an exchange (in which I got a little less testy), I told them I'd post my concerns to scatterplot and open a discussion on the topic. Although five was over the top and freaked me out, it has become pretty common now for me as a reviewer to get a packet with four reviews. No wonder we regular reviewers are feeling under the gun. The old calculation of two or even three reviews per article has gone by the wayside. The pressure for fast turnaround and the high turn-down or non-response rate among potential reviewers have led editors to send out articles to extra reviewers in the hopes of ending up with at least the minimum two or three.

But this is a death spiral. As a frequently-sought reviewer I get at least four requests a month, sometimes as many as eight, and I used to get more before I got so crabby. When I was young and eager, I was reviewing an article a week [and thus, by the way, having a huge influence on my specialty area], and I know some people who are keeping that pace. But at some point you burn out and say "no more." I, like all the other frequently-sought reviewers I know, turn down outright the requests from journals I don't know for articles that sound boring, and then save up the other requests and once a month pick which articles I want to review. So the interesting-sounding articles from good journals get too many reviewers, while the boring-sounding articles from no-name journals get none. If journal editors respond to reviewer non-response on the boring-sounding articles by sending out even more requests per article, our mailboxes will be flooded further and the non-response and delayed-response rates among reviewers will climb higher still. Senior scholars are asked to review six to eight (or more?) articles per month; you have to say no to most of the requests.

And then we have the totally out-of-hand R&R problem. I think it is completely immoral to send an R&R to ANY new reviewers. I know a young scholar with a perfectly good paper who is now on the 4th (!!!!) iteration of an R&R from ASR. Not because she has not satisfied the original reviewers, but because the editors keep sending each revision to a new set of reviewers in addition to the original reviewers and, of course, the new reviewers have a different perspective and a new set of suggestions for the paper, some of which cover ground that was already gone over in one or more of the previous revisions. Not to mention the problem that R&R memos are now longer than the original articles!! We are no longer a discipline of article publishing; we are turning into a discipline of R&R memo-writing.

Something has to change. Senior scholars burn out and get reputations for being difficult, possibly because editors don't know how many other people are asking them to do things. Junior scholars want to review and wonder why nobody is asking them, while other junior scholars think they are being tapped a lot because they are getting four requests a year. Article-submitters (disproportionately junior scholars) whine and complain about slow turn-arounds, and imagine — what? I guess I don't know what they imagine. Do they even understand what is happening on the reviewer side of the equation? I think some of the more clueless imagine that reviewers are just queuing up to write negative reviews about them and it is all the editors' fault for not organizing things better.

My purpose in posting is to open the discussion. I think what is needed is a set of ground rules that would help with the senior scholar problem:

(1) Reviewer time is a scarce resource. Treat it as such. Do not waste people's time.

(2) No article is ever sent to more than three reviewers. Better is to send to two and ask for a third if there is a split vote.

(3) If a reviewer fails to respond in a timely fashion, they get an email: please respond or we will send the article to someone else.

(4) If an editor has three reviews, they immediately send a notice to anyone else they asked for a review saying "we have enough now" or, if you insist, "we have three reviews but they are mixed, and your opinion would help."

(5) If you get two reviews and the situation is obvious, tell anyone else you asked for a review "never mind."

(6) An R&R is sent back to the original reviewers and to NOBODY ELSE unless there is some very specific issue, and the paper's author is told at the time what the issue is and what category of additional reviewer will be solicited.

(7) Author angst about turn-around time is dealt with not by sending articles out to eight possible reviewers (!!!!) but by keeping authors informed of their status. Telling an author that the journal is having a hard time getting reviewers lets them know what is going on.

(8) Tell reviewers you want a response to the "will you review?" email within two weeks, and cancel the invitation if they do not respond within one week to the follow-up to the initial request. Leaving requests open indefinitely just encourages the kind of gaming I described and increases the risk of wasting reviewers' time with too many reviews.

To expand the pool of reviewers among junior scholars, it seems to me that a database of potential reviewers needs to be set up. It would have to include CVs and samples of each person's own publications/writing. Does anyone have an idea about how to get such a thing going?

Edit: Just got a note from a managing editor of a major journal that is relevant to this discussion: “I wonder if  > 3 reviews is happening for some journals because they’re using the “automatic invitation” feature that comes with most of this new tracking software we’re all using (you load up the list of potential reviewers with a bunch of names and the database automatically invites the next person on that list if any of the previous reviewers are late — we declined to do this and have set up our system to be far less automated).”
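To make the mechanism concrete, here is a rough sketch of how an "auto-invite" queue of that kind could behave; the class names, field names, and the 21-day window are illustrative assumptions, not taken from any actual manuscript-tracking system.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical sketch of an "automatic invitation" queue: the editor loads an
# ordered list of potential reviewers, and whenever an invited reviewer is
# overdue, the system invites the next name on the list -- which is how a
# single manuscript can quietly accumulate four or five reviewers.

@dataclass
class Invitation:
    reviewer: str
    invited_on: date
    review_received: bool = False

@dataclass
class AutoInviteQueue:
    candidates: list            # ordered list of potential reviewer names
    days_allowed: int = 21      # assumed window before a reviewer counts as "late"
    invitations: list = field(default_factory=list)

    def step(self, today: date) -> None:
        """Invite the next candidate whenever any outstanding review is late."""
        late = any(
            not inv.review_received
            and today - inv.invited_on > timedelta(days=self.days_allowed)
            for inv in self.invitations
        )
        if (late or not self.invitations) and self.candidates:
            self.invitations.append(
                Invitation(reviewer=self.candidates.pop(0), invited_on=today)
            )

# Example: one late reviewer triggers an extra invitation on each check.
queue = AutoInviteQueue(candidates=["Reviewer A", "Reviewer B", "Reviewer C"])
queue.step(date(2013, 8, 1))    # initial invitation goes to Reviewer A
queue.step(date(2013, 9, 1))    # A is late, so B is invited as well
print([inv.reviewer for inv in queue.invitations])  # ['Reviewer A', 'Reviewer B']
```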

Scatterplotters: Your thoughts?

Author: olderwoman

I'm a sociology professor but not only a sociology professor. I keep my name out of this blog because I don't want my name associated with it in a Google search. Although I never write anything in a public forum like a blog that I'd be ashamed to have associated with my name (and you shouldn't either), it is illegal for me to use my position as a public employee to advance my religious or political views, and the pseudonym helps to preserve the distinction between my public and private identities. The pseudonym also helps to protect the people I may write about in describing public or semi-public events I've been involved with. You can read about my academic work on my academic blog http://www.ssc.wisc.edu/soc/racepoliticsjustice/ --Pam Oliver

48 thoughts on “too many reviewers”

  1. I think this is all very reasonable and I am grateful to olderwoman for raising a stink about this.

    One thing that I just do not understand at all is the journals I have submitted to that have huge turn-around times but never ask me to review for them. If getting reviews is such a problem (because of the burn-out issues ow identifies), why aren't they asking the people who have submitted to them to do reviews? Maybe some of those individuals (like me) are pretty good at turning around reviews on time.


  2. It is not necessary for editors to "overbook" reviewers in order to give authors a short turnaround time. At Moby, we wait until our review requests have been declined before making additional requests, and we average 8 weeks on turnaround with no reviews taking longer than three months. Perhaps it is easier to pull off at a relatively small journal, but I have found that the key is to not let manuscripts sit around in the office for weeks before taking any action on them. We also have a system set up for sending follow-up emails to reviewers when they do not respond to initial requests. We try very hard to spread the workload around.

    I am aware of a bunch of people who are going through the multiple R & R thing at ASR. I especially worry about what it is doing to untenured faculty members who can’t afford to spend so much time trying to satisfy so many different reviewers on a single paper.


  3. If the publication process is supposed to connect folks in an academic conversation and maybe (just maybe) raise consciousness, contribute to people’s actual lives on the ground or influence social policy, it seems so sad for the discipline that we’re getting in our own way. The conversation olderwoman raises gets at the heart of streamlining the flow of information so that we can all be more efficient and productive. Journal editors oppose sending an R&R back to original reviewers? This makes the least sense to me as a junior person. I hope others comment and share suggestions. I think the final suggestion (asking reviewers to confirm interest/availability) is probably the most crucial step for editors.


  4. In response to joshtk76, perhaps another norm to add is that only two reviews per article should ever be solicited from senior or well-published scholars and any third (or higher) reviews can only be solicited from junior (not-yet-published-much) scholars.


  5. I edited above and am also posting as a comment this info I just received from a managing editor of a top journal: “I wonder if > 3 reviews is happening for some journals because they’re using the “automatic invitation” feature that comes with most of this new tracking software we’re all using (you load up the list of potential reviewers with a bunch of names and the database automatically invites the next person on that list if any of the previous reviewers are late — we declined to do this and have set up our system to be far less automated).”


    1. If you are talking about an ASR-like journal, presumably their editorial staff understands how to run their software by this point, and/or the editors would step in and do something when they started getting 5 reviews on manuscripts when they hadn’t wanted so many.


  6. I have found that graduate students often give very thorough and constructive reviews that are more helpful to authors than a paragraph from an overburdened senior scholar. Editors, of course, need to closely monitor the quality of advice given and provide clear guidance to authors on how to respond in a revision.


  7. ASR is an ASA journal. ASA could make a rule about its publications. A genuinely modest proposal would be that it is ABSOLUTELY FORBIDDEN for an editor to request a 3rd R&R for a manuscript, and also forbidden to request new reviewers for a 2nd R&R. PubCom could pass a resolution to this effect in two weeks if it were aware of the problem and cared.

    Beyond this, though, I think the combination of your stances here would paint editors into a corner. Say you are an editor and get a set of reviews back that are all sort-of positive but you think that they might not have fully engaged the paper, or may not have bothered with the methods section of the paper, or whatever. Under the “flagship” model of ASR, what can you-as-editor do in this situation that you-as-critic would regard as satisfactory? Seems like either the editor can solicit some more reviews on the first round, or they can ask some new people to review on the second round. So I could understand feeling like they shouldn’t do one of these or the other, but not both.

    Plus, it does seem like basic logic that a paper that appears in ASR should have been reviewed by more people than one that was a plain rejection. So the 5 reviews thing would only get me really angry if it was a paper that could have been rejected with 2 reviews. In other words, I wouldn't be bothered by an ASR process in which editors sought 2-3 reviews to figure out if a paper was a contender, and then, before issuing a positive decision, got a couple more reviews.

    This stuff with 4 R&Rs and whatever HAS to be stopped, though. It's messing with graduate students' and assistant professors' lives.


  8. There are two things that I would like to see in the review process:

    1. If I am brought in as a reviewer on a second round of a paper (i.e., not re-reviewing a paper I originally reviewed), I would really appreciate advice from editors about how I can specifically be helpful. I recently received a review packet (electronically) with *80 pages* of material. I tried to keep my review to addressing the points the original reviewers had already raised that I deemed insufficiently addressed in the revised version (though I added my own methodological critique because not many people use the method).

    2. I would really like to see more desk rejections. The most frustrating review I ever received was from an editor who said, almost exactly “This paper is not a good fit for this journal” (it was too specific for a general journal). Although I appreciated the helpful feedback from the reviewers, it seemed that this should have been obvious to the editor from her/his own read of the paper. Rather than waiting three months, I could have waited a week and sent it to a more specialized journal.

    Implementing this latter recommendation is likely infeasible for general sociology journals, because some group would perceive that their research is being persecuted based on methodological, theoretical, or subject-area biases. I think this is especially true at ASR and other ASA journals, where editorships change hands relatively frequently and the stakes of publishing in those outlets are higher.


  9. I review for both sociology and organizational theory journals, and so I’ve seen a lot of variation in reviewing/editing practices. I am also a senior editor at Organization Science, where I make accept/reject decisions, and a deputy editor at Management Science, where I make accept/reject recommendations. I say that just to let you know where I’m coming from when I say that there are a variety of ways to handle the editorial process.

    Like Tina, I wonder if the problem with some of our journals is that editors need to have a stronger voice. Multiple R&Rs with new reviewers at each round is probably a function of the editor not having enough information or time or attention to make decisions, and so they end up outsourcing this function to reviewers. Given the heterogeneity in our field, it’s not surprising that reviewers often disagree with one another. If the editor isn’t willing to take a strong stance or just doesn’t have the time or attention to thoroughly read and evaluate a paper, then it’s much easier just to move the paper to another R&R and hand it off to new reviewers. This may be one reason we see so many reviewers and additional rounds of R&R. I wouldn’t just assume that this is a problem of poor editing though. As far as I can tell the problem is pervasive in sociology journals. I think part of the problem is structural. Our journals have too few editors making important decisions.

    One solution to this would be to multiply the number of handling editors and allow more cooks into the kitchen to make decisions. This is what the top organizational theory journals do. ASQ has 8 associate editors, each of whom makes up-or-down decisions. Organization Science has 26 senior editors who make decisions. Editors at both journals never ask for more than 3 reviewers. It's highly frowned upon to add reviewers at the R&R stage. As an editor, I sometimes make decisions based on 2 reviewers' feedback. Ultimately, it's my decision whether a paper fits the criteria for the journal. The reviewers clearly provide important evaluative information, but the editor has more information about the process and is responsible for handling the decision thoroughly, fairly, and as quickly as possible. When I took the job, I realized that I was going to need to spend more time with each paper I handle as an editor than I would doing a typical review. I'm okay with that because the journal doesn't overburden me; there are lots of other editors doing the same thing.

    I think sociology journals would be smart to increase the number of editors making decisions at the journal. You can have one editor who handles the entire editorial process – how many articles do we need? which editors handle which papers? – but having multiple editors make decisions would improve the thoroughness of the review process and lessen the need for multiple reviewers.


  10. If I may be unrealistic for a minute:

    One problem is the system of journals. Why are there different sociology journals? That is a relic of days when journals were physical objects produced in printing plants by staffs that had little interaction with each other.

    If we built the system today, we would have one ASA publication apparatus divided into departments by subject area with a common set of practices and centralized manuscript processing – all accountable to the same democratic association. It’s crazy to have all these journals, some for-profit, some association-run, with editors always changing, people shopping their articles around, getting asked to review by different editors at once, etc. etc.

    Given this very broken system my preference is just to post working papers and let anyone who wants to read them and offer their feedback. If I dropped out of the journal system completely I would be sad about losing three benefits:

    1. The peer-review seal of quality
    2. Getting work into a central citation network database
    3. Status

    Losing those would be a shame, but the aggravation of this system is extreme enough to make it a close call.


    1. I dunno, the idea of a central publication system controlled by the democratic association kind of gives me the creeps. Decentralization of control is a fail-safe for diversity and innovation.

      For instance, imagine if ASA democratically decided through one of its ubiquitous plebiscites that all articles should be value-neutral with regard to policy (or that all articles should support emancipatory and egalitarian agendas, or that all articles should close with policy significance and/or implications for managers, or that all articles should adopt stance X, where X is something you happen not to like). It's one thing for an individual journal to take such a stance as a matter of editorial mission, but I wouldn't want such a stance to be imposed at the disciplinary level.

      Or to shift from substantive issues of scholarly content to more procedural matters of review itself, I think it’s very interesting that the radical experiment in the review process (and one that is in the spirit of much of OW’s post and the ensuing discussion) that is Sociological Science did not come about through petitions to the publication committee but through a half dozen top sociologists hanging up their shingle independently.


  11. I am also deluged with requests to review and can't possibly do them all. I am offended by the degree of "over-asking" that is going on. So what to do? I think some of the strategies you suggest are useful, some less so.

    1) Urging editors to cut their ask to 2 or at MOST 3 is very, very reasonable. They can explain delays to authors if they got turndowns and need to re-ask. Treating failure to respond to a request within 10 days as a decline would be excellent, as long as this generates an automatic "we are now assuming you turned us down." Then the practice of sending to MANY potential reviewers, which email has made TOO CHEAP, might be reduced. But I wonder if explaining delays to authors is really what editors worry about — or whether more attention is being paid to the (often published) "turnaround time," meaning days to a first decision. That first decision is unfortunately typically an r&r, which could have been reached with far fewer reviewers than are being used!

    2) I also think setting a RULE of no more than ONE r&r before a decision is necessary. Some journals (ASR especially) have very recently adopted a practice of multiple rounds of review. I don't object to sending to a "fresh" reviewer in addition to the "old" ones, who already have a notion of what the paper was like and may not read it as much for what it is now. BUT a fresh reviewer (a) should be told that this is already an r&r revision and the only options are up or down, not another round of revision for reviewers, and (b) should know that a conditional accept would be OK as long as the conditions that the editor (alone) is looking at can be met with a reasonable level of assurance. A second, third, or (god forbid) fourth round of R&R is an editorial failure — making a decision must scare some editors!

    3) I think gaming the system by holding on to requests and then deciding en masse for the month is reprehensible, even if understandable. A decide-within-10-days-or-it-defaults-to-decline rule might help somewhat to alleviate that problem. The bigger issue is (again) failure to make a definitive decision — this time it is the reviewer rather than the editor who is copping out! When I get a request (and contrary to popular belief I do not check my email every day in detail to see requests immediately, so it may be a few days after it was sent) and read the abstract, I feel obliged to decide right then and there — will I do it (e.g., is this work interesting enough to me and is my opinion important to this journal), OR immediately decline AND pass along the names and emails of junior scholars who can be asked to do the review.

    4) I encourage grad students in their final years to start reviewing (and they can send their names in to journals in their area and volunteer), but I also feel it is important to TRAIN grad students how to write (and read) reviews. The principle of double-blind reviewing is good, but the practice has moved in the direction of hyper-confidentiality of everything, which stands in the way of exposing students to the review process. (I used to have a selected student co-review with me, combine our independent comments into a review I submitted (and also shared with the student), and, if they did a good job, also tell the editor that this named person co-reviewed with me and was ready to be a solo reviewer.) If we don't invest our energies in building a pool of competent reviewers, we can expect editors to constantly come back to those who do a careful, prompt, and constructive review.

    One "exception" to the "no more than 3" rule on reviewers would be if the 4th were a new reviewer getting a "try-out" by the editor. A separate new database is far less useful to an editor than a clear mechanism for screening and adding volunteers routinely. Recommendations by known-quality reviewers are ideal, since they know the areas where a junior scholar is strong (even if they haven't published in the area yet). But again, the editor who gives someone a "try-out" has to have enough confidence in his or her own judgment to decide whether to add them to the roster — a long string of "try-out reviews" just returns us to the current too-many-reviews mess.

    5) YES, reviewer time is a scarce resource!! But that is not only true of journal reviews but of all the other reviewing we are being asked to do — tenure and promotion requests have also gotten out of hand (I have heard of one place that now expects to get 10 outside letters for a tenure decision!!), and asking for letters of recommendation for senior scholars who have already shown what they can do is a waste of everyone's time. I have sat on advanced fellowship committees where the senior candidate has to submit FIVE letters of recommendation for their proposal! The bottom line, I think, is that no one has any confidence in their own ability to evaluate work (including the evaluations themselves), and the search for some sort of "consensus" that would remove the necessity of making an actual decision has crippled the system.


    1. My guess is our modal number of letters for tenure cases is about 10. (Maybe we were the case you are thinking of; if not, another data point.) The rule is we need 3-5 letters at the Department level, and then there is an independent Dean’s office review with all new letter-writers, and I’m fairly sure they don’t collect fewer letters than the Department, so do the math.

      Back in the glorious days when I was at Madison, they didn’t do letters at the departmental level. If the folks there still do that, it’s a great practice.


      1. your dept was not one of my data points, but it does illustrate the problem nicely. WHY two sets of letters? Why do the deans need any “new reviewers” on top of the FIVE that the department gets? That is just plain silly. Even five is a lot, but 10 is ridiculous. Now I know one more department to refuse to review candidates for.


      2. Well, we try to err more on the 3-side than the 5-side at the department level for this reason, but we can’t control what happens at the level above us. Plus, it’s not like you want to end up with fewer than 3 and send a person’s case forward where you haven’t followed procedure.

        I was on a committee about revising the tenure/promotion process here, which wasn't directly about the letters process but it certainly came up. The reason the Dean/Provost wants new letters above the Department level is that they are *adamant* that there are people who give a level of candor to a Dean/Provost that they will not give in a letter to the Department. Truthfully, I suspect they are correct about that. It's probably more that I'm not sure about departments soliciting letters, or what kind of numbers there should be on each side of it. And I guess I should be clear that I don't know for sure what our typical # of letters solicited by the Dean is.

        (Also, I’d be actually curious if we are an outlier among AAU private schools.)


      3. I don't write letters any differently for a dept or a dean — the request is the same and the decision is the same. And for some schools the "first level" of review that wants letters is the dean (UW being one such). But I resent putting in the effort if I don't get at least a 20% share of the outside "votes." It's a bigger deal (AND a lot more work) than reviewing an article, but the same principle applies. If you aren't going to trust my opinion that much, why bother asking me for it? I think there should be a rule that letter requesters have to say how many letters they are using. And the schools that have to justify every refusal may be a bit more careful in asking for any letters.


      4. That's a good idea: letter requests should tell you how many letters are usually being solicited.

        My guess is that it’s something where a lot of people write the same letter to a department that they would to a Dean, but then there are some people who don’t. The people I heard say this are people who’ve been in administration a long time and come across like they’ve seen a lot.


    2. Lost in our side discussion of NU practices was your reference to the idea that there are people who hold on to their review requests and decide once a month which ones they are going to answer. If anybody reading this thread does this, STOP. It's great that you are willing to do some reviews. Just pick a random date in the month and take the first X that come in. You are doing the discipline a better service with that method than by making all your other requests wait.

      Seriously, apply the two minute rule. It doesn’t take two minutes to say yay or nay to a journal request. Just do it.


      1. I would estimate that 20-30% of all requests for reviews are never responded to. This probably adds a week or more to the total review time–unless the journal sends out five simultaneous requests for review.


  12. Great post and discussion.

    In my opinion, a big problem is governance. As far as I can tell, the Chicago soc department has completely abdicated any responsibility for AJS. Why is one of the two flagship journals a personal fiefdom?

    And as for ASR, my understanding is that the Vanderbilt crew got re-upped for a few more years because they said that some members of the department hadn’t gotten the chance to do it yet. (This could be wrong, though I heard it from a well-placed source. But if it is wrong, that is just a sign of the lack of transparency with which this crucially important decision was taken.) What kind of rationale is this? How did the ASR go from being an institution where a top scholar has personal accountability as steward over a defined period of time to one that a (mid-level) department (thus diffusing the accountability) controls, and does so for years on end?

    Obviously, poor governance is not the only factor. But it interacts with the lack of competition to create a really nasty stew.


    1. [UPDATE: see correction below, will leave this otherwise unedited for comment thread integrity despite possible egg on face]

      Your well-placed source is wrong. Unless there’s been a renewal past December 2014, which I doubt. I was on PubCom at the time and was there for the renewal.

      It’s frankly a fair point to wonder if these renewal decisions are not treated far more casually than they should be. Sure as hell, however, no one said anything like “some members of our department haven’t gotten the chance to do it yet.” That would be insane.

      The ASR editorial team started with four members, and my understanding is that they are now down to two, but I'm not aware of anybody having been added to the team who wasn't originally selected. If so, I'd be interested in knowing the name of the person added.

      Incidentally, I did bring up the ASR R&R explosion issue with the President of PubCom a couple times in early to mid 2012, although to my knowledge nothing ever became of it.


      1. Incidentally-incidentally, my understanding is that typically the way ASA editorships work is a little like being a department chair, where usually the editors are offered the possibility of a longer term but instead often opt for a shorter term. (They want to do it, but they also want to be done doing it.) It can also be the case that universities/Deans want terms to end shorter than ASA does. ASA probably has the strongest incentive for continuity of any of the parties responsible for how long journals stay in the same place.

        For example, my recollection is that ASR was offered a year longer renewal than what they accepted, and if the Vanderbilt term ends up being longer than other editorships, my guess would be that has more to do with editor preferences than ASA actions. But this sort of thing should all be in the minutes.


      2. Eep, I may stand corrected and am mildly embarrassed. The December 2012 summary of the PubCom meeting does indicate an extension being offered to ASR after my term expired: http://www.asanet.org/about/committees/publications/December2012.cfm#summary

        It doesn't give details (even though of course it should give details), so for all I know PubCom did extend them to December 2015 (that's the six-year max, right?) and allow them to bring another person from Vanderbilt onboard. I don't know, and shouldn't pretend like I do. Although given that I said earlier that it would be "insane" to just add somebody to the top of a journal because they happen to be in the same department as the editors who were selected through the standard process, I suppose I'll stick with the view that I hope that isn't the case.


  13. Interesting to know what is happening behind the scenes. I had heard that SocProbs used 5 reviewers from time to time, but I didn’t know it was because they were probably “overbooking” the reviewer docket to ensure a full flight.

    I worry now that I’m contributing to this problem because I almost always sit on a review request for about a week to get a little more time before the automated emails from manuscript central start coming in.

    I get about 8 requests a year and usually do about half of them. But I put a lot of effort into the ones I do (at least compared to some of the reports I get back from other reviewers).

    I will say this about cultivating reviewer pools: if an editor can signal (explicitly or implicitly) that my comments helped shape the decision, then I'm more likely to agree to do a review the next time they ask. I was the outlier in a recent set of reviews, and the editor agreed with my take and not the other reviewers'. That meant a lot.

    Conversely, I had big problems with another MS sent to a different journal. The editor completely ignored my comments and let it sail through. I’ll never review for that editor again.

    Of course, rereading what I just wrote, if editors only sought out reviews from people they already agreed with… that could be a problem, too.


  14. It would be great if the publication committee could look into developing a better instrument for reviewing and re-reviewing manuscripts. For example, I would probably be a better reviewer if the huge open-ended question were replaced by a series of short-answer, in-class-essay-length questions. I also suspect that a smart researcher could go over a large set of reviews and come up with a set of common positive and negative comments that could be turned into check boxes.

    That said, I’m very sympathetic to the problem that editors have–I think very rarely do all three reviewers say, “accept” on a first or second submission. Ben Agger has a nice discussion of this in Chapter 6 of his book Public Sociology.


    1. Honestly, I don't think I ever saw three accepts on anything in years and years as a reviewer (and editor). And darn little agreement on what sort of R went into R&R either. But that doesn't mean a good editor can't see the common threads and decide on a conditional accept, or decide how to direct in their letter what kind of revisions are really wanted (and get buy-in by resubmitting to some previous reviewers and possibly a single new one if the author does what is needed). I've had the pleasure of working with good editors (Arlene Kaplan Daniels, Jerry Marwell, Jerry Jacobs, Paula England, Joya Misra, Judith Lorber, Andy Abbott and many more) and am happy to review for editors who are able to make decisions. But I see lots of editors looking for the "consensus" that will NEVER emerge, ESPECIALLY for a paper that advances knowledge, has challenging ideas, prods people to investigate more, etc. BORING papers may reach consensus (though too infrequently is that consensus a reject), but I would hope that this is NOT what we want to publish.


  15. I'm looking at myself as a single data point (mid-career; R1). I'm definitely not of the stature of olderwoman. Since 2010, I've published 10+ articles (mostly first- or sole-authored in top specialty journals, as well as a good experience with ASR). I am not on any editorial boards.

    I’ve only reviewed 7 articles this year with 2 more in the works (so 9 in the first 7 months). I’ve said “no” to 3 or 4 requests. My reviews tend to be 3-4 pages in length (single spaced).

    I’ve recently recommended reject on 2 R&Rs where I was a newly added reviewer. I employed the hard line of whether I thought the article would be ready after a second R&R. In both cases, the authors had opted to respond rhetorically to reviewers’ suggestions as opposed to conducting the suggested robustness checks. I viewed that as not being particularly responsive to the reviewers’ (and editors’) recommendations. Both were flawed for other reasons as well, IMO. I am not sure whether the authors would be better off with an additional R&R versus starting a submission elsewhere. Having an additional R&R still gives authors decent odds of getting an accept (presumably at their more preferred journal).

    So, as one mid-career data point, I think I’m being underutilized. A structural solution might be to expand editorial boards and have more action editors. There are probably many associate professors like me who just aren’t on people’s radars (assuming that I am a decent reviewer). This would alleviate the strain on the many nationally recognized names that populate editorial boards and are overwhelmed with ad hoc requests. It would also help build the reputations of the next cohort of scholars.


    1. Yet again, the stunning realization that I’m doing way more than my fair share of review work. When will I get it through my thick skull?


  16. a bit of data, based on past ASR editors' reports:

    I calculated two quantities: A = rejects as a percentage of (rejects + R&Rs that ended in rejection), and B = the ratio of (accepts + conditional accepts) to R&Rs that ended in rejection.

    For 2011: A = .68, B = .50
    For 2010: A = .66, B = .34
    For 2006: A = .84, B = 1.39
    For 2005: A = .80, B = 1.37

    Simple conclusion: something has changed in the probability that an R&R means a paper is closer to being published than not. I conclude that too many papers are getting an R&R and ending up rejected (after wasting more reviewer time), even before one estimates how many R&Rs, conditional accepts, and accepts went through multiple rounds of R&R to get to a decision of any kind.
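    For anyone who wants to redo or extend this calculation from other years' editors' reports, here is a small sketch of the two ratios as defined above; the counts in the example are placeholders, not figures from any actual report.

```python
# Sketch of the two ratios described above. The input counts are placeholders
# (not taken from any actual ASR editor's report); plug in the real numbers
# from a given year's report to reproduce A and B for that year.

def review_outcome_ratios(rejects, rr_rejects, conditional_accepts, accepts):
    """A = rejects / (rejects + R&Rs that ended in rejection);
    B = (accepts + conditional accepts) / (R&Rs that ended in rejection)."""
    a = rejects / (rejects + rr_rejects)
    b = (accepts + conditional_accepts) / rr_rejects
    return round(a, 2), round(b, 2)

# Placeholder example (hypothetical counts):
print(review_outcome_ratios(rejects=200, rr_rejects=100,
                            conditional_accepts=20, accepts=30))  # (0.67, 0.5)
```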


  17. This discussion made me think of a movement in neuroscience initiated by Chris Chambers at Cardiff. (Chris is writing a book for me tentatively called The Seven Deadly Sins of Brain Science.) The issue in neuroscience is that the top journals want to publish path-breaking results, but most brain science experiments – which are time consuming and expensive – shouldn't produce them. His solution is preregistration, meaning that journals agree to publish based on the novelty of the experiment, before the experiment is conducted, rather than on the novelty of the results, hopefully leading scientists to be less inclined to puff up their claims.

    http://www.theguardian.com/science/blog/2013/jun/05/trust-in-science-study-pre-registration


  18. Just agreed to review for a journal (not soc, a psychiatry journal) which says in the author instructions:
    “Please note that due to our commitment of rapid turn-around for authors, we may make an editorial decision prior to your due date if we have a sufficient number of reviews already submitted. However, we will strive to notify you prior to the decision-making so that you will have time to provide your completed review as well.”

    When I asked them for clarification on how many reviewers they ask and how many they need they said:
    “Generally, we try to secure 4 or 5 reviewers for a paper. The number sufficient for a decision can vary widely, on a manuscript-to-manuscript basis, anywhere from 2-5, depending upon the quality of the reviews received, the consistency of opinions among the reviewers, how long reviewers are taking to return reviews, etc. In general, though, the editors prefer to receive at least 3 reviews before making a decision.”

    I think I’ll still do the review (paper is interesting to me), but complain to them and put them on notice that I might not review for them in the future because of this.


    1. Lining up 4-5 reviewers and making a decision as soon as "enough" reviews come in seems like a reasonable editorial policy in a world in which a non-trivial fraction of reviewers who agree to review a paper don't deliver a review at all or deliver it ridiculously late. As long as the reviewers who haven't yet submitted their reviews are immediately told, "we have all the reviews we need, but if you want to write a review we'll send it to the author later," the journal hasn't really wasted any of their time. (Assuming most reviewers read the paper and write their review of it on the same day.)

      The "losers," I suppose, are the editors of Journals B through D, whose requests for a review were declined because the reviewer had already agreed to review for Journal A and had a self-imposed cap on the number of review requests he or she accepted. But, on balance, I think the aggregate harm of this is lower than the harm of having authors wait an extra 1-3 months for all the reviews to come in, or for new reviewers to be found if one of the original reviewers backs out.


      1. I suspect that many people who write thoughtful reviews invest time spread over multiple days.

        I also think that if people agreeing to review know that this is the policy, they’re probably less likely to actually complete their reviews (why should they, if they know the editor is expecting half or even a majority of the reviewers not to, vs. is waiting for their review to render a decision?), creating an escalating cycle of reviewers flaking and journals requesting ever-more reviewers to compensate. This doesn’t seem sustainable.


      2. Yes, I am one of those people who usually read the manuscript once, take some notes, and then set it aside for a few days or a week or two, think about it a bit, and come back to it to finish. I find I often don't agree with some of my first impressions or think they were rash.


  19. Can the ASA journals implement a policy that submitting an article (whether as first author or otherwise) requires a commitment to review at least X times in the following Y months, if asked (e.g., submit to ASR = commit to review for ASR at least once in the following year, if asked)?

    This would help the problem and help it proportionately, in that those who are benefiting the most from journals’ existence but not supporting them much (e.g. anyone who submits a lot but doesn’t review much) would take on a fair share here, while those who currently review more than their fair share would have reduced strain.

    Enforcement in cases where people don't follow through could take various forms, from reminding people that they haven't met their commitment, to posting the names of people who don't meet their commitment and asking them to follow through, to not accepting additional submissions from them until they've met their commitment.

    Since lots of papers have multiple authors, this could also help generate pressure from co-authors.

    The ASA journals and/or the highest ranked journals collectively doing this at the same time seems most likely to produce results.

    I think this was discussed several years ago; not sure what happened to the idea.

    Absent this or some other system that rewards reviewing or penalizes not reviewing, I suspect the process will continue to buckle.


    1. I appreciate the reasoning behind this idea, but I think it would create a significant incentive for people to do grudging reviews, where somebody only reads enough of the paper in order to be able to toss off a few comments.


      1. I’m not sure this is much of a problem. Most of us have a sense of responsibility to our peers and to upholding our commitments, so we shouldn’t assume this problem would be widespread. And if an editor does get a low-quality review through this process, or one that seems arbitrary and careless, they can write back and say the review wasn’t thorough and helpful, and that it needs to be. There could also be a minimum acceptable length standard.

        .02 …


  20. At Journal of Marriage and Family we secure 3 reviews for each paper.

    We give reviewers a week to accept the opportunity. Generally people respond within this time, sometimes after our first reminder on day 3.

    The large majority of reviewers provide feedback within 3 or 4 weeks and I am incredibly grateful for the quality of their comments and for their timeliness.

    From this vantage the system doesn’t seem broken. That is, people are mostly being responsible, although when they aren’t it is painful. Please don’t accept an opportunity to review a paper and then not follow through. Also, let the editor know as soon as you can if an emergency comes up and you can’t do a review.


  21. I’ve been on the road and reading comments but not having time to respond. Frankly, I’ve learned a lot, most of which has made me even more concerned than before. I may write a follow up post when I’m done traveling.

    For now, I am SHOCKED to realize that many journal editors are DEFENDING the need for 4-6 reviewers per paper in one or more rounds of revision, and defending regularly adding new reviewers.

    All you folks out there who are submitting articles to journals and thinking you are doing "your share" if you review two or even three articles for every article you submit are wrong. The editors are telling you that the exchange rate is 4 to 6 reviews per submission.

    But that is not enough. As I have repeatedly argued, because over half of article submitters are first-timers who are not yet published, the ratio of reviews to submissions for published authors has to be at least double the average. If I’m right, the “fair” amount of reviewing for someone who already has a professional reputation is 8 to 12 reviews per article submitted! Co-authors of course can divide up their reviewing responsibilities in this calculation.
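    A rough sketch of that arithmetic, using the 4-to-6 exchange rate quoted above and the assumption (stated above) that half of submitters are unpublished first-timers who do no reviewing:

```python
# Back-of-the-envelope version of the argument above: if editors need a given
# number of reviews for every submission, and a fraction of submitters
# (unpublished first-timers) contribute no reviews, the remaining published
# authors must carry the whole load.

def fair_share(reviews_per_submission, share_not_reviewing):
    """Reviews owed per submission by those who do review."""
    return reviews_per_submission / (1 - share_not_reviewing)

# Editors' implied exchange rate of 4-6 reviews per submission, with half of
# submitters assumed not to review at all:
print(fair_share(4, 0.5))  # 8.0
print(fair_share(6, 0.5))  # 12.0
```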

    I hope you will all revise your self-expectations accordingly.


    1. Here I was thinking I was doing way more than my share, but I guess by your estimation, I’m doing what I should. Expectations revised.

      I have a second point on this, which is that emphasis seems to be on the editors — but it’s important to keep in mind that editors are handicapped by bad reviews. I see a lot of reviews (I do well over 30 reviews a year — so I see minimum 70 others). And some, I must confess, would not put me in any kind of position to make a decision. And by that I mean ANY decision. So perhaps part of the reason the number of reviewers has grown is because the number of useless reviews has grown as well. If that’s the case, we have an ugly cycle on our hands. First, because there’s an incentive not to write decent reviews (do a bad job at something, you stop getting asked — this is even worse if the process is blind). So second, editors have to ask more people. And then third, those of us who do our job reasonably and review a lot are put in a place where we feel there are “too many reviewers” and begin to balk — making decent reviews even harder to come by. That creates a greater incentive for people to shirk, since they see bad reviews all the time and figure, “why am I wasting my time with this if no one else is.”

      Maybe a cynical view. But at times I’m overwhelmed with reviewing (I reviewed 7 papers in July, though granted 3 of those I was late on). And I keep thinking, “I primarily write books. Why am I busting my butt here?!? Maybe I should just phone it in…”

      Luckily I was trained at a department where I know the importance of doing your part… :)

