devilish details: a reputational ranking of generalist and specialist sociology journals

The following is a guest post by Erin McDonnell and Dustin Stoltz.

Journal reputation or status is sometimes of practical interest to professional sociologists. How well regarded are some newer and online journals? How do second-tier generalist journals fare versus specialty journals? How do sociological reputations differ from available metrics such as impact factors from Journal Citation Reports? To capture generalized status reputations, we asked: “Think CVs for an open job: Where would be better for a grad student to publish a solo-authored article?” (see the poll). Today, we examine 23,128 head-to-head evaluations of 92 journals by 422 unique user-sessions.

For the TL;DR crowd: ASR and AJS are a clear top tier, followed by Social Forces and then a cluster of some generalist journals as well as top journals from some specialties. A dendrogram analysis based on similarities in win/loss patterns identifies five clusters: 1) ASR and AJS; 2) Social Forces, Social Problems, and Demography; 3) a cluster comprising the top Theory and (mostly quantitative) Methods journals, as well as the top specialist journals for Gender, Family, Organizations, Education, Networks, Race, Economic, Medicine, Culture, and Social Psychology; 4) a cluster dominated by second journals in the above specialties and top journals for other specialties (Religion, Urban, Mobilization, Politics, and Qualitative Methodologies); 5) a cluster of lesser-known journals, which brings together journals with low impact factors as well as high-impact-factor journals that are very influential in some circles but not widely known in sociology as a whole.

Before we go further, let’s preemptively address some potential misunderstandings, because the internet is, after all, the devil’s playground. Is this a complete and comprehensive view of all potential publication outlets for sociologists? No. We’re imperfect humans and forgot to include some (sorry, European Journal of Sociology), and some fields are under- or unrepresented because we didn’t seed them and users didn’t supply them until it was too late (Environmental Sociology was a later addition). Are we advocating that people review CVs by journal prestige instead of the caliber of the writing? No. This was a short framing device that felt like a decent way to approximate generalized gut reputational instincts about journals, in a way meant to feel familiar enough to be realistic. (FWIW, given the sheer number of applicants for any job these days, whether we like it or not journal prestige will often play some screening role, and ignoring that won’t make it go away.) Don’t we realize that grad students rarely publish in Annual Review because it invites established experts? Sure, we know that; a Twitter user indicated that Dan’s older post was useful in his tenure and promotion documents, so we added ARS despite its misfit with the grad-student framing of the question. Are we suggesting that people should decide where to send articles based only on reputation rather than fit? No. We would never say that someone writing a piece that makes a clear and important contribution to the sociology of education should try to get it published in Demography simply because it is higher ranked. Of course an article’s fit with the target journal is an important part of getting anything published. That said, authors sometimes face decisions where understanding how reputations will be received may be helpful: for example, in the wake of a rejection, when deciding whether to revise an article in one direction or another, it could help to have a rough estimate of the relative reputational trade-offs of, say, taking it to the next-lower-status generalist versus the best specialty journal it fits.

The Data

There is some interesting work suggesting that, in general, “pairwise wiki surveys” like All Our Ideas perform relatively well compared to traditional surveys. However, let us make this clear: this is NOT a randomized sample of all sociologists with a high response rate. The survey was circulated via social media by Erin McDonnell on Facebook and Twitter (the vast majority of responses came via Twitter, with Facebook links accounting for only 5.4% of responses). The tweet was tagged with the general sociology hashtag #soctwitter and subsequently shared by 23 other sociologists on Twitter (to date). Here we analyze the first nine days of data (1/15/20 to 1/24/20, though the survey remains open for now), which include 23,128 head-to-head ratings of 92 journals from 422 different user-sessions (hereafter, for convenience, “users”). Ratings are usually snap judgments: the vast majority of item response times are under two seconds.
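For readers who want to poke at data like this themselves, here is a minimal sketch (in Python, against a hypothetical CSV export with columns session, left, right, and choice; All Our Ideas’ actual export format may differ) of how raw head-to-head votes can be tallied into wins, losses, and nonvotes per journal:

```python
import pandas as pd

# Each row is one head-to-head prompt shown to one user-session.
# "choice" holds the winning journal, or is blank for a "can't decide"
# nonvote. (Hypothetical schema: session, left, right, choice.)
votes = pd.read_csv("journal_votes.csv")

records = []
for _, row in votes.iterrows():
    for journal in (row["left"], row["right"]):
        if pd.isna(row["choice"]):
            outcome = "nonvote"
        elif row["choice"] == journal:
            outcome = "win"
        else:
            outcome = "loss"
        records.append({"journal": journal, "outcome": outcome})

# One row per journal, with win/loss/nonvote counts as columns.
tally = (pd.DataFrame(records)
         .value_counts(["journal", "outcome"])
         .unstack(fill_value=0))
print(tally.sort_values("win", ascending=False).head(10))
```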

Each user provided an average of 54 ratings (including “nonvotes,” for example when they said they were unfamiliar with one or both journals). Seventy-five users provided 10 or fewer ratings, while 57 provided more than 100. Four users were high-end outliers, providing 500 or more ratings; however, there appear to be no clearly biased patterns in the responses of these high-end raters (the famed Fabio Effect?). Only half of those who shared the post on Twitter were people Erin already followed, suggesting at least some network diversity in distribution. Some of the results likewise suggest responses are not overly dominated by a narrow interpersonal network: for example, organizational and development journals, two areas in which Erin works closely, are among those that perform the poorest on this listing relative to other metrics of their quality, such as impact factor (see ASQ and World Development, for example).

General Observations

Are there tiers? The results show a clear top tier of ASR and AJS (click here for a higher-resolution image of the rank plot). No real surprises there. AJS lost 46 times (out of 499 ratings), and interestingly, only seven of those were to ASR. Conversely, ASR lost just 30 times, including twice to AJS (out of 532 total ratings). Results largely confirm anecdotal assertions of a second tier, here comprised of some generalist journals, the top two theory journals, the top two methods journals, and the top journals in *some* sub-disciplinary areas. Notably, sub-disciplinary areas whose best work tends to be featured more heavily in top generalist journals — particularly race/ethnicity, but also economic sociology and culture — accordingly seem to have their best specialist journal evaluated less strongly, which makes sense if the very best work that might otherwise be published there is instead landing in top generalist venues. The table below shows the journals ranked by their All Our Ideas final score, with the numerical win rate in parentheses next to the journal name. It also shows the “can’t decide” outcomes, a useful but imperfect measure of awareness that includes responses like “I don’t know enough about journal x” or “I don’t know enough about either journal” but also “I like them both.”
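For the technically curious: the All Our Ideas score is, roughly, the estimated chance that an item beats a randomly chosen opponent, times 100. The platform uses a Bayesian estimator; a crude sample analogue (continuing with the hypothetical votes table from the sketch above, not the platform’s actual algorithm) looks like this:

```python
import pandas as pd

votes = pd.read_csv("journal_votes.csv")  # hypothetical schema as above
decided = votes.dropna(subset=["choice"])  # drop "can't decide" nonvotes

# Count decided wins for each ordered (journal, opponent) pair, plus
# the number of decided meetings for each pair in either direction.
pair_wins, pair_totals = {}, {}
for _, row in decided.iterrows():
    winner = row["choice"]
    loser = row["right"] if winner == row["left"] else row["left"]
    pair_wins[(winner, loser)] = pair_wins.get((winner, loser), 0) + 1
    for pair in ((winner, loser), (loser, winner)):
        pair_totals[pair] = pair_totals.get(pair, 0) + 1

journals = sorted(set(votes["left"]) | set(votes["right"]))
scores = {}
for j in journals:
    # Win rate against each opponent j actually faced, averaged, x 100.
    rates = [pair_wins.get((j, k), 0) / pair_totals[(j, k)]
             for k in journals if (j, k) in pair_totals]
    scores[j] = 100 * sum(rates) / len(rates) if rates else float("nan")

print(pd.Series(scores).sort_values(ascending=False).head(10))
```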

[Table: journals ranked by All Our Ideas final score, with win rates in parentheses and “can’t decide” counts]

There are no real shocks in the second tier, but perhaps a few moments of interest. For example, there has been speculation about the relative position of Social Forces and Social Problems, as well as the distance between either of them and the Big Two. Here, while Social Forces comes in clearly below ASR/AJS, there is also clear daylight between SF and any journal below it. In direct head-to-heads, Social Forces lost 100% of comparisons with AJS/ASR and won 100% of comparisons with Social Problems. Meanwhile, Sociological Theory scores somewhat ahead of Theory & Society (77.6 vs. 73.9). Yet these two went head-to-head only seven times, with split results (Sociological Theory scores ahead because of its better overall win rate). On the whole, we caution against making too much of small rank or score differences: a difference of 30 points probably reflects a widely shared understanding of distinction, but a difference of two points could shift with just a few more data points.

If you are interested in seeing how journals fared against specific other journals, check out the win-loss matrix at the end of this post. Reading across a row, green squares indicate the row-head journal tends to win against its column competitor, while red squares indicate it tends to lose.

Also noteworthy is that a number of the newer and online-only journals appear to have a favorable reputation already, despite in some cases being too new to have an impact factor. Sociological Science comes in at #22 overall, with 351 wins vs. 195 losses and only 58 nonvotes. Socius also fares reasonably well, ranking #32 with 297 wins vs. 249 losses and 75 nonvotes. It is worth acknowledging that an online sample is probably more favorably inclined toward online journals; however, response patterns don’t suggest these results are driven by particular users: of the 168 different users who rated Sociological Science, 120 rated it only 1-2 times, while only four rated it ten times or more (and nothing in their patterns suggests anything untoward).

Interestingly, the relative status ranking among generalist journals is quite similar to the one in Dan Hirschman’s 2017 post on Generalist Rankings, suggesting that neither time nor the presence of head-to-heads with specialist journals has changed that internal status order much. There’s a lot of stability at the top and somewhat more noise at the bottom, as one would expect.

[Image: comparison of the current generalist-journal rankings with Dan Hirschman’s 2017 rankings]

Journal Clusters

Dustin created a nifty dendrogram to help visualize clusters of journals based on similarities in which other journals they win and lose against. This is conceptually different from the raw ranking hierarchy, though you’ll see some overlaps emerge. The cluster analysis compares similarities in patterns of wins and losses, grouping journals into fewer and fewer clusters based on whether they tend to win or lose against the same journals. Dendrograms visualize this progressive clustering as a tree structure. We then used a standard method to determine an optimal number of clusters, which was around 4-5. In the visualization we highlighted five clusters, with the number in parentheses next to each journal indicating its overall rank position by reputational score.
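We don’t reproduce Dustin’s exact pipeline here, but a minimal sketch of this kind of analysis (assuming, hypothetically, Ward linkage on each journal’s vector of win rates against every other journal, with average silhouette width as one standard method for choosing the number of clusters) might look like:

```python
import pandas as pd
from scipy.cluster.hierarchy import dendrogram, fcluster, linkage
from sklearn.metrics import silhouette_score
import matplotlib.pyplot as plt

# Hypothetical input: a journals-x-journals matrix where entry [i, j] is
# the share of decided head-to-heads that journal i won against journal j.
win_rate = pd.read_csv("win_rate_matrix.csv", index_col=0)
profiles = win_rate.fillna(0.5).to_numpy()  # 0.5 = never paired, no signal

Z = linkage(profiles, method="ward")  # agglomerative hierarchical clustering

# Inspect average silhouette width for candidate cluster counts.
for k in range(2, 9):
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(k, round(silhouette_score(profiles, labels), 3))

# Draw the dendrogram with journal names on the leaves.
plt.figure(figsize=(8, 16))
dendrogram(Z, labels=win_rate.index.to_list(), orientation="left")
plt.tight_layout()
plt.savefig("journal_dendrogram.png")
```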

[Image: dendrogram of journal clusters based on win/loss similarity, five clusters highlighted, with each journal’s overall reputational rank in parentheses]

First, at the top, AJS and ASR are in a class of their own. Next we have a cluster comprising Social Forces, Social Problems, and Demography. The dendrogram then identifies a group of 22 journals that approximates the composition of our rank-order second tier discussed above. Similar to the rank-order tier, this third cluster is comprised mostly of theory, methods, and other generalist journals, plus the top specialty journals for some areas (Gender, Family, Organizations, Education, Networks, Race, Economic, Medicine, Culture, Social Psychology). The fourth cluster is dominated by second journals in the above specialties and top journals for other specialties (Religion, Urban, Mobilization, Politics, and Qualitative Methodologies), including some whose best work tends to be published in top generalists. It also includes a smattering of other generalist journals (Socius, Sociological Quarterly, Sociological Perspectives). The final cluster is a bit harder to interpret, but broadly, we do not think it should be understood as a “bottom” tier. Instead these are lesser-known journals: the cluster brings together journals that have low impact factors as well as some that are high-impact within their area but evidently little known outside it (e.g., Criminology, World Development, International Migration Review, Journal of Policy Analysis and Management).

On Gut Reputation vs Impact Factors

There are some places where this reputational measure is notably misaligned with journal prestige as measured by citation indices. This could be interpreted to mean that our reputational measure is faulty; instead, we think it sheds some useful light on ways in which journal impact factors fail to capture sociologists’ status perceptions (if you are inclined to think the data are faulty, feel free to mosey on and skip this section). Indeed, some Twitter users asked: why not just use existing journal impact factors?

First, there is the well-known issue that any measure necessarily captures certain aspects one cares about (typically those readily quantified) while excluding others that one cares about but that are more laborious to capture. Then there’s also Goodhart’s Law: “Once a measure becomes a target, it ceases to be a good measure.” Even if intentional citation-hacking isn’t going on, widely divergent citation densities across fields can make some subfields appear much higher on citation indices than others, even when that does not reflect relative reputational appraisals.

If you look at the journals that are dramatically undervalued reputationally vis-a-vis their impact factor ranking, the patterns suggest that sociologists whose research speaks to interdisciplinary fields have work to do in communicating the value of our placements to sociology colleagues. Eyeballing suggests that sociologists are doing boundary work, sometimes quite strongly penalizing journals viewed as associated with another discipline or not clearly dominated by sociologists.

[Image: reputational ranking compared with impact factor ranking]

Interdisciplinary fields, or sociological research areas also strongly associated with other disciplines — e.g., politics, organizations, consumption, and development — seem to be some of the biggest victims here. Administrative Science Quarterly (ASQ) and Academy of Management Journal (AMJ) are the only two journals on the list with impact factors higher than ASR’s (8.024 and 7.191, respectively), yet reputationally they rank #14 and #28. This is despite the fact that well-known organizational sociologists have regularly published in ASQ throughout its 50-year history. The Journal of Consumer Research, another venue in which sociologists of consumption often publish, fares even worse: it is the 6th-ranked journal by impact factor (4.7, slightly higher than AJS) but reputationally ranked 86th…just seven spots from the very bottom. World Development echoes a similar story: well regarded as arguably the best interdisciplinary journal publishing work on development, it has the 8th-highest impact factor (3.905) but is ranked 77th reputationally. As we might expect, these discrepancies seem to be driven by a lack of awareness: the number of times a journal was in a “can’t decide” vote outcome is a significant predictor of how far its reputational rank falls below its impact factor ranking.
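That last claim is a simple regression; a sketch (with a hypothetical per-journal summary file and column names, not our actual analysis script) of what we have in mind:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-journal summary: reputational rank, impact factor
# rank, and count of "can't decide" outcomes involving the journal.
df = pd.read_csv("journal_summary.csv")  # rep_rank, if_rank, cant_decide

# Positive gap = journal ranks worse reputationally than by impact factor.
df["rank_gap"] = df["rep_rank"] - df["if_rank"]

model = sm.OLS(df["rank_gap"], sm.add_constant(df["cant_decide"])).fit()
print(model.summary())
```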

On the flip side, some journals are reputationally ranked well above where their impact factor would place them. Mathematically, it makes sense that this group is led by a pack of journals that do not have an impact factor listed, typically because they are too new, including open-access online journals such as Sociological Science (2014) and Socius (2015), the ASA section journals Sociology of Race & Ethnicity and Sociology of Development (both first published in 2015), and the Southern Sociological Society’s Social Currents (2014). Among journals that do have an impact factor, some still had reputational rankings that vastly outperformed their impact factor ranking, with Sociological Methodology (+39 rank spots), Poetics (+35), and City & Community (+35) leading the pack (see Table).

My first interpretation of which journals get this positive reputational bump relative to impact factors is that they tend to be generalist or generalist-adjacent journals, as well as top specialty journals for specialties that have relatively large ASA memberships but do not (yet) have a specialty journal clearly in the second tier. That group of specialties on our list includes Race (Ethnic & Racial Studies), Urban (City & Community), Political or Social Movements (Mobilization), and Comparative Historical (Social Science History). One interpretation is that well-regarded theory and methods journals may be viewed not simply as specialty journals but also as generalist-adjacent journals, because they frequently contain content that is both empirical and theoretical. The head-to-head format of the survey could give generalist and generalist-adjacent journals a boost by resolving the apples-to-oranges cognitive challenge that can arise when specialty journals are pitted against each other. That boost may be a survey-construction artifact, but it may also reflect real tendencies in how sociologists use journal placement to infer the quality of work, exhibiting a relatively strong generalist bias except in cases of clearly known specialty-journal excellence (as shown by, say, Demography or Gender & Society).

What Should I Do With This Information?

I’m hoping that if you are someone who doesn’t care at all about this, or who feels some moral outrage at the idea of status rankings, you have gone and had yourself a nice cup of coffee. Obviously there is no need to fuss yourself with this if it isn’t of interest to you. And at the end of the day, only you can say what is useful to you. But here I’ll sketch a few things that I find useful.

First, someone on Twitter surprised me by thanking me for updating Dan Hirschman’s reputational ranking of generalist journals, noting that he had found it helpful to refer to in his tenure and promotion materials. Given that administrators are often looking at huge impact factors from some fields, and that the reputation of a journal may be at best imperfectly captured by impact factors, I’d be happy if anyone found this a helpful way to convey aspects of sociology journal reputation that are not well captured by citation indices.

As someone whose work is interdisciplinary, I (Erin) think there are also some important clarifying lessons. There are hard choices to be made about publishing in high-impact journals (high impact factors to impress deans, but also genuinely well regarded by their interdisciplinary communities of scholars) that are sometimes very poorly known or regarded within sociology. I’d be lying if I said the rank results of outlets like World Development and Journal of Consumer Research didn’t give me pause about whether it is worth the effort to publish in those venues compared to going down to a second- or third-tier generalist. Perhaps more programmatically, I think the results also signal some important work for some sociologists to do in spreading the word about journals that our subdisciplinary communities publish in and think highly of, to increase awareness among our colleagues.

This field-level view may also raise some interesting provocations for ASA sections that run, or are contemplating, a section journal, especially about potential quality-quantity tradeoffs in an already crowded field. Alternatively, some fields seem particularly under-served on this list: economic sociology is the 10th most populous ASA section but seems to have Socio-Economic Review (the SASE journal) as really its only specialty journal option.

Erin McDonnell is Assistant Professor of Sociology at the University of Notre Dame. Dustin Stoltz is a PhD student in sociology at the University of Notre Dame and next year will be Assistant Professor of Sociology at Lehigh University.

[Image: win-loss matrix for all 92 journals; green squares indicate the row journal tends to win against its column competitor, red squares that it tends to lose]


