2019 junior theorist conference call for papers

The 13th Junior Theorists Symposium (JTS) is now open to new submissions. The symposium will be held in New York, New York on August 9th, 2019. The JTS is a one-day conference featuring the work of emerging sociologists engaged in theoretical work, broadly defined. Sponsored in part by the Theory Section of the ASA, the conference has provided a platform for the work of early-career sociologists since 2005. We especially welcome submissions that broaden the practice of theory beyond its traditional themes, topics, and disciplinary function.

It is our honor to announce that Isaac Reed (University of Virginia), Amin Ghaziani (University of British Columbia), and Adia Harvey Wingfield (Washington University in St. Louis) will serve as discussants for this year’s symposium. In addition, we are pleased to announce an after-panel entitled “Teaching Theory: Debates, Tensions, and Future Directions,” featuring Robin Wagner-Pacifici (The New School), Stefan Timmermans (University of California, Los Angeles), Shamus Khan (Columbia University), and Fabio Rojas (Indiana University, Bloomington). The symposium will also feature a talk by 2018 Junior Theorists Award winner Erin McDonnell (University of Notre Dame).

We invite all ABD graduate students, postdocs, and assistant professors who received their PhDs in 2015 or later to submit a précis of up to three pages (800–1,000 words). The précis should present the paper’s key theoretical contribution and a general outline of the argument. Successful précis from recent years’ symposia can be viewed here. Please note that the précis must be for a paper that is not under review or forthcoming at a journal.

As in previous years, in order to encourage a wide range of submissions, we do not have a pre-specified theme for the conference. Instead, papers will be grouped into sessions based on emergent themes and discussants’ areas of interest and expertise.

Please submit your précis via this Google form. Fauzia Husain (University of Virginia) and Madeleine Pape (University of Wisconsin-Madison) will review the submissions. You can contact them at juniortheorists@gmail.com with any questions. The deadline is February 11, 2019, at 11:59 PM EST. By mid-March we will extend up to 12 invitations to present at JTS 2019. Please plan to share a full paper by July 21, 2019. Presenters will be asked to attend the entire symposium and should plan accordingly.

Finally, for friends and supporters of JTS, we ask if you might consider donating either on-site, or via Venmo (handle @JTS2019, email address juniortheorists@gmail.com). If you are submitting a proposal to JTS 2019, we kindly ask that should you wish to donate, you only do so after the final schedule has been announced.


about taking criticism

A PDF version of this post is available at SocArXiv

How do you respond when someone criticizes you? What if you think the criticism is unfair or inappropriate? What if you think the critic has a good point and it makes you feel really bad about yourself? This essay is about constructive ways of responding to criticism that your style, as a person of power or privilege, may be hurting others in your teaching or advising. Along the way it addresses the broader problems of taking criticism in general and of cultural differences in interaction styles. The punchline is about trying to be who you are (no personality transplants) in a way that respects both yourself and others and helps make your environment feel inclusive for more marginalized people.

Many professors think that it is important for students to be able to take academic criticism as a normal part of learning to be an academic, but are nevertheless outraged at anyone expecting them to take criticism about the way they give criticism to others, if you follow my point. Many people view themselves as pro-feminist pro-minority pro-queer liberal social justice advocates and are deeply hurt and offended at being told that their personal style or comments are viewed by others as domineering or racist or sexist or homophobic or classist or patronizing or demeaning. Continue reading “about taking criticism”

about spousal hires

I’ve talked to a lot of people over the years about how to handle the “two body” problem where your spouse needs a job, too, and your goal is to live in the same house as your spouse.

I can explain what I know about the “rules of the game” regarding spousal hires, and also give my advice about the personal work you need to do to be ready for this. The spousal hiring process usually creates severe stresses on a relationship, and I believe it is important to know your priorities or you risk being chewed up. The post is long and has three sections: (1) the timing of the discussions; (2) institutional rules, which vary from not helping spouses at all at one extreme to providing jobs for spouses at the other, with most falling somewhere in the middle; and (3) the need for spouses to talk honestly with each other (but not necessarily with outsiders) about their priorities. Continue reading “about spousal hires”

When Republicans Opposed the Free Speech of David Duke

less trust, moore verification: attention checks reveal errors in alabama poll data

Guest post by Nathan Seltzer

In the days following the publication of a Washington Post article that detailed allegations of sexual abuse against Roy Moore, Emerson College Polling released an election poll of Alabama voters that showed Moore maintaining a 10-point lead over his opponent Doug Jones, 55%/45%. This poll received sustained national press and influenced perceptions of the Alabama senate race since it was one of the first polls to be released after the Roy Moore allegations. The Emerson College Poll was conducted using survey data collected over the internet and by landline phone.

In my working paper, “Less Trust, Moore Verification: Determining the Accuracy of Third-Party Data through an Innovative Use of Attention Checks,” I analyze raw data from this poll and find irregularities in the internet sample suggesting that the respondents may not have been properly sampled by the data vendor that administered the survey, Opinion Access Corp., LLC.

As researchers increasingly rely on internet data vendors to acquire respondents for polls and surveys, I argue for the necessity of proactively verifying the accuracy of third-party data. In the paper, I detail how researchers can use survey “attention checks” to determine whether data vendors have provided samples that match their requested sampling frame. In the example below, I repurpose two pre-existing questionnaire items from the November 13 Emerson Poll to verify the accuracy of the sample provided by Opinion Access Corp.

Verifying Samples through A Priori Expectations of Variable Distributions

To verify whether the internet sample comprised valid Alabama respondents, I examined the joint frequency distribution of two overlapping geographic variables in the dataset: county of residence and US congressional district.

Alabama counties are nested within congressional districts, although several counties overlap with two or three congressional districts (map here). As a result, we should expect congressional districts to be non-randomly distributed within counties. The a priori expectation is that most counties should have respondents in only one congressional district. Additionally, we should expect respondents to correctly match their county and congressional district – there should be no ambiguity, with the exception of minimal respondent error.

In the figure below (Figure 2 in the paper), I graph the joint frequency distribution of respondents by their counties and congressional districts for both the internet sample and the IVR phone sample. The rows of the graph correspond to county of residence while the columns correspond to the respondents’ specified congressional districts. The dark blue boxes represent clusters of one or more respondents, while the light grey boxes represent no respondents. Importantly, the red x-marks indicate valid responses that correctly match counties to congressional districts; all other cells in the heat map represent illogical and invalid county-district pairs.


Heat Map Depicting Joint Distribution of Counties of Residence and Congressional Districts for Respondents in the Internet and IVR Samples.

Notes: Correct Match refers to valid/logical matches for counties and congressional districts. All other cells represent invalid/illogical county-district pairs. Blue cells refer to whether one or more respondents indicated that they lived in the corresponding county and congressional district.

While the IVR phone sample matches our a priori expectations for how congressional districts should be distributed within counties, the internet sample does not. In fact, 117 out of the 324 internet respondents (36.1%) were unable to accurately match their county of residence to their US congressional district.

In Autauga county, for instance, which is in central Alabama and District #2, none of the respondents from the internet sample selected District #2. Instead, they indicated that their congressional district was either District #1, District #3, District #4, or District #7, all of which are incorrect.
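The validity check described above can be sketched in a few lines of code. The county-to-district mapping and the respondent records below are hypothetical, illustrative stand-ins, not the actual Emerson/Opinion Access data; a real check would use the full Alabama county-district crosswalk.

```python
# Hypothetical sketch of the county/district attention check.
# VALID_DISTRICTS maps each county to the set of congressional
# districts it actually overlaps (illustrative entries only).
VALID_DISTRICTS = {
    "Autauga": {2},
    "Jefferson": {6, 7},   # a county spanning two districts lists both
    "Mobile": {1},
}

# Illustrative respondent records, not real survey data.
respondents = [
    {"county": "Autauga", "district": 2},    # valid pair
    {"county": "Autauga", "district": 7},    # invalid pair
    {"county": "Jefferson", "district": 6},  # valid pair
]

def invalid_pairs(rows, valid=VALID_DISTRICTS):
    """Return respondents whose county-district pair is illogical."""
    return [r for r in rows if r["district"] not in valid.get(r["county"], set())]

bad = invalid_pairs(respondents)
error_rate = len(bad) / len(respondents)
```

With the toy data above, one of three respondents reports an impossible county-district pair; applied to a real sample, an error rate anywhere near the 36.1% reported in the paper would flag the sample as suspect.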

It is unclear why respondents in the internet sample failed to correctly match their congressional districts to their counties of residence. In the online questionnaire, respondents were provided a map that transposed congressional districts over county boundaries, and were then asked to indicate their congressional district. This should have been a simple task for respondents who knew where they lived within their state of residency. To be sure, the divergence between the joint distributions of the internet and IVR phone samples might have a practical explanation that is not easily inferred from the publicly released survey methodology. But when the internet error rate surpasses a third of all respondents, such an explanation seems implausible.

Less Trust, Moore Verification

Third-party internet panel vendors provide a cost-effective and time-efficient option for conducting survey research. However, data vendors often have aims and motives that do not align with academic researchers. By default, researchers should be skeptical of the accuracy of data provided by third parties. Ultimately, it is the researcher’s responsibility to determine the fidelity of the data they use in their analysis.

Although the aim of the paper is not to predict the outcome of an electoral contest, the removal of this poll from aggregate polling averages might indicate a tighter Alabama senate race than previously understood. Emerson College Polling released an additional poll that surveyed support for Roy Moore and Doug Jones in the Alabama senate race on November 28 that similarly relied on respondents acquired through Opinion Access Corp. If the same irregularities observed in the November 13 poll are present in the more recent poll, then political observers should interpret the results with the understanding that a substantial number of respondents interviewed might be invalidly included.

Nathan Seltzer is a PhD student in Sociology at the University of Wisconsin-Madison and a trainee at the Center for Demography and Ecology.

are adjuncts asked to write too many reference letters?

A Twitter exchange in response to my post saying that mediocre students deserve reference letters raised the problem of adjuncts’ reference-writing woes. Some adjuncts apparently get asked to write many more letters of reference than many full professors. Some of the people being asked to write a lot of letters are contingent faculty who are already overworked for poverty wages, and it seems particularly unjust for them to be expected to shoulder this burden. My Twitter exchange was with an adjunct who teaches in five different departments and holds a postdoc besides, so I’m going to assume that this person’s wage per course is low. There are, of course, other adjuncts in regular non-contingent positions with reasonably good wages whose situation is somewhat different.

Writing a reference letter for an undergraduate takes at least 3 hours. Continue reading “are adjuncts asked to write too many reference letters?”

do B-average undergrads deserve letters of reference?

Once again there are discussions about writing letters of reference in my social media. Some people seem to believe that getting a letter of reference is a privilege that only the very best students deserve, and that instructors ought to put a cap on how many students they will write letters for. Some of the arguments are based on managing instructors’ workloads. Coming from the pro-student side, there are also people who argue that letters of reference should always be excellent letters that can really help a student’s career, which would seem to imply that letter-writers should decline to write at all if their letter would be merely tepid. (See below for samples.) This latter discourse also seems to imply that all students are excellent, or at least deserve to be written about as if they are excellent. So it is a real question: Do undergraduates who have failed to form close relations with faculty deserve letters of reference? Do mediocre undergraduates deserve letters of reference? My answer to both is yes. Continue reading “do B-average undergrads deserve letters of reference?”