I’m in California helping my mother, who is much better than she was in March, but still mostly housebound, on oxygen, and weak. She’s easily frustrated and demanding, which is both understandable and tiring. One upshot is that I’ve been blog commenting all day. I do get periods of free time, but as I never know when I’m going to be interrupted, it is hard to crawl into the writing I’m way-overdue on. So far I’m 0 for 0 in my goal of working at least two hours a day, although I have gotten some reviews done.
Apropos of bits of work, here’s a question about articles under review. Say there is a fairly well-known data set identified with exactly one research team, such that anybody who has read the literature will recognize the authors, or at least the PI, from the data. Whether you use “identifying reference withheld” or third-person references, the author, or at least the author’s research team, will be identified. How much trouble should the author take to preserve the appearance of conforming to the norm of not identifying the author, i.e., to use the third person in describing the research procedures? (“Identifying reference withheld” would seem absurd in this situation, especially as the withheld citations would be central to any literature review.)
My view as a reviewer: if the author of an article in my area is a senior person, I know who it is. But under the cloak of the anonymous reviewer, I am willing to say critical things about my friends. And good things about people whose work I don’t recognize (and thus know must be junior). I have been told by editors that this is common; people will not say bad work is good just because their friends wrote it. It is the anonymity of the reviewer that is central to the integrity of the process. So I don’t think trying to pretend anonymity is worth the trouble. In fact, I loathe “identifying reference withheld”!! If you are going to anonymize, do it with third person references.
What do you think about an author who does not even go through the motions of using third-person references to disguise authorship, in a case where disguise would be futile anyway? Is that bad form? Or understandable, so let it go? Should editors police this kind of thing? Do they?
18 thoughts on “blog commenting, blinding articles”
Editors with editorial assistants can easily police these things, and some do. So if the editors do not police it, I wouldn’t take it out on the paper; if it bothered me, I’d suggest to the editor that s/he consider policing it.
It’s not okay to do this. You as a reviewer may know who these people are, but another reviewer who’s not as intimately familiar with the field might not. Some authors seem to out themselves as an intimidation technique toward the reviewers. It shouldn’t be allowed.
As an editor, I always viewed author anonymity as a polite fiction. As OW points out, a reviewer who knows who is studying what questions with what data in her field will often be able to identify the author(s) of a manuscript. Sometimes citing oneself is appropriate, if the manuscript builds on one’s prior work. As both a reviewer and an editor, I’ve occasionally gone back to some of these citations to judge whether the current manuscript really is a meaningful advance beyond the earlier work.
I will say, though, that reviewers sometimes guess wrong about who the author(s) of a manuscript might be, and we probably can all cite instances in which a reviewer recommends paying more attention to X’s work, when X is the author.
Not to hijack the thread, but another interesting question is, are there conditions under which it is appropriate for a reviewer to divulge his/her identity in a review? There’s a handful of senior people in the field who refuse to write blind reviews, and I recall receiving a review in which the reviewer wrote, “Since the author would have to be an idiot not to know who I am, I’m going to identify myself.”
I was just having a chat with a friend about the fiction of blind review. We were worried that it is easier than ever to discover who has authored a paper, even for a reviewer not intimate enough with the field to know just from reading it. Our main worry was not blatant bias; I agree with OW that people will be frank about major errors or oversights regardless of friendship or status in the field.
However, most problems with papers are not major, and the difference between papers that get into a journal and those that don’t, we imagine, can be quite subtle, as the line must be drawn somewhere. In that case, isn’t it much easier to give the benefit of the doubt to someone whose work you admire than to someone you don’t know? If so, the inequalities of the field may become exaggerated: people who work at lower-status institutions, or whose teaching obligations mean fewer papers out the door, are also saddled with the burden of being less known to reviewers.
I think this subtle or unconscious bias may be a big problem in the long run, although I don’t know what to do about it. Blind review is quickly becoming impossible, so it is not likely the solution.
“identifying reference withheld”
This approach has always seemed a bit absurd to me; a way of drawing attention to something via the pretense of disguising it.
I have no doubt that Robert Merton’s Matthew Effect is alive and well in academic publishing today. But I’m not sure that the problem is reviewer bias. When a review process works well, the reviewers are advising the editor on the strengths and weaknesses of manuscripts, and it’s the editor who makes the editorial decision. As an editor, I had to reject manuscripts from some very high-profile scholars who I respected, and that was just part of the job. Most editors I know are ardently devoted to publishing the best work they can in their journals, and the identities of the authors just aren’t that salient.
I agree that anonymity is a fiction, but it’s a productive fiction. If nothing else, it reminds us that we had to *do* something to figure out who the author(s) was/were, and that therefore we ought to be cautious about using that information. Granted it’s not perfect, but it’s useful IMHO.
I will admit that I find it a fun google sport to work on figuring out who wrote an article I’ve just refereed (I try to use enough discipline to wait until after I’m done!). And I’ve run across some fascinating colleagues that way too. Other than a logo pen from Public Opinion Quarterly it’s the only benefit of any significance I’ve received from reviewing!
Here’s an institutional alternative. Most physical and life science journals (and all federal agencies) have dispensed with blinding authors/investigators entirely: in many fields single blind review is the order of the day. Scientists often express horror at the idea of double blind peer review. Their argument is that reviewer bias is less an issue than the need to trust highly technical claims. That trust is based largely on the reputation of the research group where the work was done because real replication is exceptionally costly in time, energy etc.
Absent the time or ability to directly replicate findings, do you all think knowing an author’s identity provides useful information? Does our double-blind system simply ensure that it’s only the very famous who get the benefit of the doubt? Tina raises this very issue and I’d be interested to hear whether folks think making author identity an explicit part of the process would be a good thing.
Despite the possible upsides, single blind review can be crushing. It’s much more fun to get a review that says “this paper sucks” than it is to get one that says “this paper sucks, .” Of course anyone who’s applied for NIH or NSF funding has probably had reviewers comment on their ability to conduct the proposed research.
@8.spectralvariance: I assume you mean it’s more fun to get one that says “this paper is fabulous”… or some such?
My sense is that reputation bias/inflation is a bigger risk than lack of replicability. That is, a “star system” in which the author’s identity is part of the review is more likely to result in false positives than is a double-blind system to result in false negatives. Perhaps someone better versed than I in the sociology of science could comment on the structure of scientific reputations?
I meant to flag the difference between “this paper sucks” and “this paper sucks, author’s name,” but that last bit got lost somehow. Both are yucky reviews to get. I dislike the latter more.
I suspect you’re right about the false positives v. false negatives. There are also differences in the shelf lives of papers and the rate of retraction of mistaken claims. I guess it comes down to what types of error you hope your institutions will help you minimize. When there’s money on the line we too opt for single blind review.
But it’s not just when there’s money on the line, it’s also when the question is prospective (“can the project be done”) vs. retrospective (“how good is this product”).
Most editors I know are ardently devoted to publishing the best work they can in their journals, and the identities of the authors just aren’t that salient.
I appreciate the sentiment, but let’s be realistic, especially as sociologists: social pressure is real. For example, it is hard to imagine that a senior established scholar who is friends with the editor will not get some extra help along the way. Of course, it is fair to assume that a senior established scholar has a good record that makes the piece less risky, but still.
Recently, I was part of a refereeing process where the authors (from a top prestigious program) of the piece outed themselves in the second round by referring to themselves by name and even institution in the manuscript. There was absolutely no reason to do this. I had not guessed their identities after the first round. I had, however, found some serious issues with the manuscript.
I felt that it was either an intimidation or a boasting technique to out themselves. That is, how dare I question the quality of their work given who they are. Bogus. The work was crappy and it wasn’t even that much of a match for that journal. Given these two points, it’s hard to imagine that the authors knowing the editor (a fact I happen to know) had nothing to do with the process.
Maybe this is why I have fewer friends today than I did before I was an editor. Yes, of course, one can’t ignore social ties. But if I felt I had a conflict of interest because of either a history of positive sentiment or animus towards an author, I’d turn the disposition of the manuscript over to my deputy editor, who had impeccable judgment. Sure, it’s not an easy thing to reject a manuscript written by your dissertation advisor, but what’s the alternative?
Now, not all journals are created equal, and mine was responsible to a publications committee that was very concerned with opening access to its journals to all qualified authors. (Yes, it was an ASA journal.) Other journals operate with less oversight, and editors can have an extraordinary amount of discretion in how they operate. I suspect — in the absence of evidence — that personal ties matter less in the editorial decisions of journals responsible to professional oversight committees than in those of free-standing journals.
That makes sense, I suspect you’re right.
Also, just to be clear, I wasn’t questioning your personal integrity in such a process. I’m just not convinced that everyone is as careful.
I frequently review for medical and psychiatric journals, which always have the names of the authors on the cover page. When I review for sociological journals in my area of expertise, I can usually tell who the author is despite the absence of identification. If a reviewer can’t tell who wrote the paper and is motivated to do so, just type in the title of the article on a Google search and you’ll almost always see who wrote the paper because it will usually have been presented at some conference. Anonymity of authorship is such a fiction that it seems time to get rid of it in sociology (along with having to list the place of publication in references to books, as Jeremy has wisely suggested).
@12 reminds me of a past experience. Article 1 from our project was in press when we submitted article 2. To avoid outing ourselves and to give information needed for a full review, we provided full methodological details about data collection in a section that duplicated part of article 1, with the idea that this section would be removed prior to publication. But article 1 got published during the review process, and a reviewer reacted with outrage, claiming that we were trying to sneak past him/her the same article twice, even though the two articles analyzed different subsets of the data in different ways. If, instead, we’d left out the methods details and referred to an available unpublished MS, we would have been accused of outing ourselves to destroy the anonymity of the review process, and “identifying reference deleted” would have meant leaving out methodological details that any competent reviewer would want to see if they were not elsewhere available. Sometimes you really cannot win, and I am impressed at the extent to which it is possible to attribute the worst possible motives to people no matter what they do.
It’s been 5 years since I finished my PhD and I’ve learned academia is a pretty darned small world. Unless it’s a new project, you’ll certainly know the authors (or their students). My view is that the reviewer should inform the editor that they know who the author is. Then the editor should be “Bayesian,” and adjust the reviews accordingly.
As for your original question, I’ve thought one should still go through the motions of removing identifying information. First, it could be your student working with the data, not you. Second, maybe somebody is using your data for new research. Third, reviewers new to the field may not know every project, and we don’t want personal identities to color their opinions. Fourth, there is something to be said for at least striving for objectivity.
I do appreciate spectral @8. Book manuscripts are reviewed single blind, and yet we still have high-quality publishing. Overall I endorse double blind, but I freely admit it’s not the only effective way to referee work.