direct and indirect effects of “citation padding”

Phil has had a couple of posts now about the practice of journal editors encouraging citations to a journal that they edit, and it sounds like there may be more to come.  I should say that I don’t recall ever having an editor say something as… direct as the statement Phil quotes, but I do remember being on projects where, on our own initiative, we’ve inserted references to a journal or the work of its editor with a “can’t hurt our chances!” rationale.

One might think the specific practice of editors encouraging citations to their journal for impact-factor purposes could be curbed simply by excluding journal-level self-citations from impact-factor counts.  But my suspicion is that when people insert citations with the idea of pleasing editors at a specific journal, they mostly don’t bother to remove those citations if the paper gets rejected from that journal anyway.  In other words, when journals encourage authors to cite other articles in the journal, there’s a direct and readily observed effect on impact factor via self-citations, but there is also a hidden, downstream effect via papers that end up published elsewhere.  Depending on the journal’s acceptance rate and how early in the process the references are added, the indirect effect could be substantial relative to the direct effect.
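A back-of-the-envelope sketch of that last point, with invented numbers (the submission count, acceptance rate, and the assumption that nobody removes padding after rejection are all illustrative, not data):

```python
# Hypothetical split of padded citations into direct and indirect effects.
# All numbers below are illustrative assumptions, not data.

def padded_citation_split(submissions, acceptance_rate, removal_rate=0.0):
    """Split padded citations to a journal into direct (self-citations in
    papers the journal accepts) and indirect (citations left in papers
    that are rejected and published elsewhere)."""
    accepted = submissions * acceptance_rate
    rejected_kept = submissions * (1 - acceptance_rate) * (1 - removal_rate)
    return accepted, rejected_kept

# Suppose 100 padded submissions and a 10% acceptance rate, with no one
# bothering to remove the padding after rejection:
direct, indirect = padded_citation_split(100, 0.10)
print(f"direct: {direct:.0f}, indirect: {indirect:.0f}")
```

Under those assumptions, the citations hiding in papers published elsewhere outnumber the visible self-citations nine to one, which is why excluding self-citations alone wouldn't fully neutralize the practice.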

On the bright (if somewhat perverse) side, the practice could actually be useful for anybody who wanted to use networks of citations across journals to make inferences about journal prestige.  If a publication follows a chain of Journal A -> Journal B -> Journal C -> Journal D in order to get published, the version in Journal D will carry the traces of efforts to please Journals A, B, and C, including citations to those journals, whereas if it had been accepted by Journal A, it wouldn’t carry traces of efforts to please B, C, and D, because it was never sent there.  Put another way, the order in which authors send articles out would be a good way of sussing out the hierarchy of journal prestige, but that order is private information; authors including gratuitous citations to those journals and then leaving them in is one way that private information becomes visible.
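A toy version of that inference, with an invented set of papers (the journal names and padding sets are made up; the sketch assumes rejected papers keep their padding, as suggested above):

```python
# Toy sketch: if rejected papers keep their padding, citations published in
# lower-tier journals point "up" the submission chain. Data is invented.

from collections import Counter

# Each record: (journal where the paper was finally published,
#               journals gratuitously cited along the way)
papers = [
    ("B", {"A"}),            # tried A first, rejected, landed in B
    ("C", {"A", "B"}),       # tried A then B, landed in C
    ("D", {"A", "B", "C"}),  # worked all the way down the chain
    ("A", set()),            # accepted on the first try: no traces
]

# Score each journal by how often it shows up as a padded citation in
# papers that ended up elsewhere; more traces ~ higher in the pecking order.
traces = Counter()
for published_in, padded in papers:
    for journal in padded:
        traces[journal] += 1

ranking = [journal for journal, _ in traces.most_common()]
print(ranking)  # ['A', 'B', 'C'] -- A is tried (and padded for) most often
```

In this toy world the trace counts recover the A > B > C ordering exactly; with real data the signal would be much noisier, for the reasons raised in the comments below.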

Author: jeremy

I am the Ethel and John Lindgren Professor of Sociology and a Faculty Fellow in the Institute for Policy Research at Northwestern University.

3 thoughts on “direct and indirect effects of “citation padding””

  1. I love the idea, but I don’t think you’ll find that effect, because lower status journals are more likely to do the padding, and even if high status ones do it, it would be hard to measure because people cite high status journals a lot anyway. Maybe.


  2. Good points, Phil. They got me thinking.

    Do you think the following set of assumptions would hold?

    1. Papers published in high status journals were first submitted to those high status journals without having been rejected by lower status journals first.
    2. Papers published in high status journals do not reflect citation padding to any future lower status journals that the authors might be planning to submit to upon rejection from the high status journal.
    3. The collected corpus of papers submitted to high status journals reflects a non-padded baseline of expected total numbers of citations as well as an expected dispersion of citations across the possible classes of journals (not to mention books, websites, journalistic articles) citable in that field.

    If those assumptions hold, it seems possible to develop a training set of citation practices per field based on how the citations of papers published in top journals are dispersed across top-tier, secondary, tertiary, and non-academic publications. With that collection as the training set, it would be possible to see whether test sets of papers submitted to non-top journals exhibit a different dispersion pattern.

    Please correct or add to my assumptions if they are faulty.
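    The dispersion comparison proposed above could be sketched roughly as follows (the tier labels, citation counts, and the crude divergence measure are all invented for illustration):

```python
# Rough sketch of a baseline-vs-test citation dispersion comparison.
# All tier labels and counts are invented for illustration.

from collections import Counter

def tier_shares(citation_tiers):
    """Fraction of a paper's citations falling in each tier."""
    counts = Counter(citation_tiers)
    total = sum(counts.values())
    return {tier: n / total for tier, n in counts.items()}

# Baseline: pooled citations from papers published in top journals.
baseline = tier_shares(
    ["top"] * 50 + ["secondary"] * 30 + ["tertiary"] * 15 + ["non-academic"] * 5
)

# A test paper whose reference list leans unusually hard on secondary
# journals (as padding for a planned second-choice submission might).
test_paper = tier_shares(
    ["top"] * 8 + ["secondary"] * 12 + ["tertiary"] * 3 + ["non-academic"] * 2
)

# Crude divergence measure: total absolute deviation from baseline shares.
divergence = sum(
    abs(test_paper.get(t, 0) - baseline.get(t, 0))
    for t in set(baseline) | set(test_paper)
)
print(f"divergence: {divergence:.2f}")
```

    A real analysis would want a proper test statistic and per-field baselines, but the comparison itself is this simple in outline.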


  3. Oh myyyy, citation padding by low prestige journals. Must be the only place it happens? I’m reminded of Campbell’s Law, which points out that

    “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”

    The fact of the matter is, citation counts are about as accurate a way of measuring “scientific impact” as the SAT is of measuring the abstraction “intelligence.” Both are primarily measures of conformity with pre-existing elite norms, and both are easily gamed by those already in positions of power and authority.

