direct and indirect effects of “citation padding”

Phil has had a couple of posts now about the practice of journal editors encouraging citations to a journal that they edit, and it sounds like there may be more to come.  I should say that I don’t recall ever having an editor say something as… direct as the statement Phil quotes, but I do remember being on projects where, on our own initiative, we inserted references to a journal or to the work of its editor with a “can’t hurt our chances!” rationale.

One might think the specific practice of editors encouraging citations to their journal for impact-factor purposes could be curbed simply by excluding journal self-citations from impact-factor counts.  But my suspicion is that when people insert citations with the idea of pleasing editors at a specific journal, they mostly don’t bother to remove those citations if the paper gets rejected from that journal anyway.  In other words, when journals encourage authors to cite other articles in the journal, there is a direct and readily observed effect on impact factor via self-citations, but there is also a hidden, downstream effect via papers that end up published elsewhere.  Depending on the journal’s acceptance rate and on how early in the process the references are added, the indirect effect could be substantial relative to the direct effect.
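A toy calculation (my own illustrative sketch, not anything from Phil's posts) makes the acceptance-rate point concrete: if a padded citation survives wherever the paper eventually lands, then under these assumptions the expected indirect-to-direct ratio is (1 − p)/p for an acceptance rate p.

```python
# Toy model of citation padding (illustrative assumptions, not from the post):
# a paper padded with a citation to Journal J is accepted there with
# probability p_accept; otherwise it is published elsewhere, and the padded
# citation survives as an *external* citation to J.

def padding_effects(p_accept):
    """Expected direct (self-citation) vs. indirect (external) citations
    per padded reference, under the toy model above."""
    direct = p_accept          # padded citation lands inside J itself
    indirect = 1 - p_accept    # padded citation lands in another journal
    return direct, indirect

for p in (0.05, 0.10, 0.25, 0.50):
    d, i = padding_effects(p)
    print(f"acceptance rate {p:.0%}: indirect/direct ratio = {i/d:.1f}")
```

At a 5% acceptance rate the hidden external effect dwarfs the self-citation effect, which is the sense in which the indirect channel could matter even if self-citations were excluded from the count.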

On the bright, if somewhat perverse, side, the practice could actually be good for anybody who wanted to use networks of citations across journals to make inferences about journal prestige.  If a paper follows a chain of Journal A -> Journal B -> Journal C -> Journal D in order to get published, the version in Journal D will carry the traces of efforts to please Journals A, B, and C, including citations to those journals; had it been accepted by Journal A, it would carry no traces of efforts to please B, C, and D, because it was never sent there.  Put another way, the order in which authors send out articles would be a good way of sussing out the hierarchy of journal prestige, but that order is private information; authors inserting gratuitous citations to those journals and then leaving them in is a way that private information becomes visible.
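As a purely hypothetical sketch of the inference idea (the submission chains and journal names below are invented), one could tally gratuitous cross-journal citations and read a prestige ordering off the asymmetry: journals that receive padded citations but rarely emit them would sit higher.

```python
# Hypothetical illustration: if authors pad citations to every journal
# that rejected a paper before it landed, the asymmetry of gratuitous
# cross-journal citations encodes the (private) submission order.

from collections import defaultdict

# Each tuple is a submission chain ending at the accepting journal;
# these chains are invented for illustration.
chains = [("A", "B", "C"), ("A", "C"), ("A", "B", "D"), ("B", "D")]

padded = defaultdict(int)  # (accepting journal, padded target) -> count
for chain in chains:
    accepted_at = chain[-1]
    for rejected_by in chain[:-1]:
        padded[(accepted_at, rejected_by)] += 1

# Being cited gratuitously signals higher prestige; emitting such
# citations signals lower prestige.
score = defaultdict(int)
for (src, dst), n in padded.items():
    score[dst] += n
    score[src] -= n
ranking = sorted(score, key=score.get, reverse=True)
print(ranking)
```

With these made-up chains, Journal A (which everyone tried first) ends up at the top of the inferred ranking, exactly because it only receives, and never emits, the leftover flattery.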

another peer-review horror story

Just in time for Hallowe’en, Phil Cohen has posted an account of a recent experience of trying to publish an article. The account is more striking when one pauses to consider that the story is getting told not because it is extreme in a discipline-wide sense, but because it happened to one of the few folks who write blog posts about things like this. In other words, too many people with too many papers are ending up with these sorts of stories.

I appreciated Phil’s forthrightness in the account, particularly the part where he reproduced one editor’s request to insert citations to more papers from their journal.
Beyond that, I was particularly fond of this paragraph of the summary:

Sociologists care way too much about framing. Most (or all) of the reviewers were sociologists, and most of what they suggested, complained about, or objected was about the way the paper was “framed,” that is, how we establish the importance of the question and interpret the results. Of course framing is important – it’s why you’re asking your question, and why readers should care (see Mark Granovetter’s note on the rejected version of “the Strength of Weak Ties”). But it takes on elevated importance when we’re scrapping over limited slots in academic journals, so that to get published you have to successfully “frame” your paper as more important than some other poor slob’s.

nuance

Kieran Healy has written a paper about nuance and posted it here.  It’s an argument that resonates with my own experience, especially various forays into reading social theorists’ efforts to talk about the relationship between what they are doing and psychology or, worse, “biology.” While there is colorful language throughout the paper, this unadorned sentence hit home for me in that regard:

there is a desire to equate calling for a more sophisticated approach to a theoretical problem with actually providing one, and to tie such calls to the alleged sophistication of the people making them.

genes and infidelity

Phil mentioned in the comments of an earlier post the recent news story about how marital infidelity has a heritability of .4; the story also features various more specific claims about the particular genes and systems supposedly involved and the purported evolutionary psychology of it all. Eric Turkheimer, who is hopefully already established as Sociology’s Favorite Behavioral Geneticist, has a nice blog post in which he explains the problems with the news story. Enjoy!

aside on the heritability of everything

(Substantial prelude with some light technical bits; feel free to jump to [UPSHOT] or [BOLDFACE PUNCHLINE])

As is shown in the meta-analysis Andy references in his last post, more or less every measurable outcome anybody cares about in any study of human beings is more similar among identical twins than it is among fraternal twins, which in the classical model applied to twin study data means the trait has a non-zero heritability.

Perhaps the major motivation of the giant meta-analysis, however, is evaluating the extent to which identical-twin correlations are twice the fraternal-twin correlations.
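For reference, the “twice the fraternal correlation” benchmark comes from Falconer’s classic ACE decomposition, a standard textbook formula rather than anything specific to the meta-analysis (the correlations below are invented for illustration):

```python
# Falconer's classic ACE decomposition from twin correlations
# (a standard textbook formula; the example correlations are invented).

def falconer_ace(r_mz, r_dz):
    """Estimate additive-genetic (A), shared-environment (C), and
    nonshared-environment (E) variance shares from identical (MZ) and
    fraternal (DZ) twin correlations."""
    h2 = 2 * (r_mz - r_dz)   # heritability
    c2 = 2 * r_dz - r_mz     # shared environment
    e2 = 1 - r_mz            # everything else, incl. measurement error
    return h2, c2, e2

# When the MZ correlation is exactly twice the DZ correlation,
# the shared-environment estimate comes out to zero:
print(falconer_ace(0.75, 0.375))   # (0.75, 0.0, 0.25)
```

So asking whether MZ correlations are twice DZ correlations is asking whether shared environment is needed at all under the classical model.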

the imitation game

Quote from Alan Turing that I came across while reading Lee and Wagenmakers’s Bayesian Cognitive Modeling:

“I assume that the reader is familiar with the idea of extra-sensory perception, and the meaning of the four items of it, viz. telepathy, clairvoyance, precognition and psycho-kinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming.”

assessing james coleman

From Sharon McGrayne’s The Theory That Would Not Die, about Bayesian statistics (versus frequentism):

The chasm between the two schools of statistics crystallized for [Howard] Raiffa when Columbia professors discussed a sociology student named James Coleman. During his oral examination Coleman seemed “confused and fuzzy . . . clearly not of Ph.D. quality.” But his professors were adamant that he was otherwise dazzling. Using his new Bayesian perspective, Raiffa argued that the department’s prior opinion of the candidate’s qualities was so positive that a one-hour exam should not substantially alter their views. Pass him, Raiffa urged. Coleman became such an influential sociologist that he appeared on both the cover of Newsweek and page one of the New York Times.

Bonus for those interested in standardized tests: How did Raiffa end up in a position to evaluate James Coleman?  The same book tells his academic origin story.