performance management, threshold effects, and asa sections

Every year, the end of September brings a peculiar class of emails from American Sociological Association section chairs and membership committees. ASA sections (e.g. “Economic Sociology,” “Sex and Gender,” etc.) organize much of the activity at the annual meetings. Each section is awarded a certain number of sessions based on the size of its membership on September 30th. If you have 399 members, you get 2 sessions; if you have 400 members, you get 3; and so on. As you would expect, sections routinely scramble in September to push past the next threshold. This scrambling takes the form of offers to subsidize graduate student members (who pay much less in dues but “count” the same toward the session thresholds), book raffles, and even drawings to win coffee with senior scholars. After receiving another such email, I got curious about the effectiveness of these strategies. ASA conveniently posts membership data back to 2009 on its website, so it’s easy to plop that data into R and produce a quick histogram of year-end membership counts for 2009-2013.*

[Figure: histogram of ASA section year-end membership counts, 2009-2013]
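For the curious, the R involved is only a few lines. A minimal sketch, assuming the posted counts have been saved as a CSV with one row per section-year and a column named members (the file name and column name here are hypothetical):

    # hypothetical layout: one row per section-year, year-end count in `members`
    asa <- read.csv("asa_membership_2009_2013.csv")

    # histogram of year-end counts, with the session cutoffs marked
    hist(asa$members,
         breaks = seq(0, max(asa$members) + 25, by = 25),
         main = "ASA section year-end membership, 2009-2013",
         xlab = "Members on September 30")
    abline(v = c(300, 400, 600, 800), lty = 2)  # session thresholds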

As expected, we see sharp spikes just above the major cutoffs: 300, 400, 600, and 800 members. We see similar patterns in publicly traded firms’ earnings data relative to analyst forecasts, and in the sizes of courses offered by universities trying to game their U.S. News & World Report rankings (see Espeland and Sauder’s work). So it seems the emails are working – at least for the sections trying to get their numbers just above a threshold. Whether this particular system is collectively rational I will leave for you all to judge.**

* Thanks to the @ASANews twitter account for the links!
** One clunky but effective solution would be to transition from a pure threshold system to one that awards the final session probabilistically, based on how far past the previous threshold a section has gone – with each member worth about half a percent of a session.
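In code, the second footnote’s fix might look something like this toy sketch (I’m simplifying the actual session schedule here: assume one extra session per threshold crossed, with thresholds at 300, 400, 600, and 800):

    # toy version: a section keeps every session it has fully earned, and
    # gets one more with probability equal to the fraction of the distance
    # it has covered toward the next threshold
    award_sessions <- function(members, thresholds = c(300, 400, 600, 800),
                               base = 1) {
      guaranteed <- base + sum(members >= thresholds)
      nxt <- thresholds[thresholds > members][1]
      if (is.na(nxt)) return(guaranteed)  # already past the top threshold
      prev <- max(c(0, thresholds[thresholds <= members]))
      p_extra <- (members - prev) / (nxt - prev)  # e.g. 1/200 = 0.5% per member
      guaranteed + rbinom(1, 1, p_extra)
    }

    set.seed(1)
    award_sessions(500)  # 3 sessions for sure, plus a coin flip for a 4th

Under this scheme there is nothing special about member 400 versus member 399, so the September scramble loses most of its point.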

kurzman, missing martyrs

I’m teaching my colleague Charlie Kurzman’s book The Missing Martyrs for the second time this semester in my Sociology 101 course. It’s a great book, and the students appreciate both its counterintuitive (to them) claims and its accessibility. (It doesn’t hurt that the book opens with a recounting of the all-but-forgotten botched attack on UNC’s campus in 2006.) Continue reading “kurzman, missing martyrs”

his honor wants more truck drivers

Our governor, bless his heart, has come out with his latest education-is-overrated statement:

“We’ve frankly got enough psychologists and sociologists and political science majors and journalists. With all due respect to journalism, we’ve got enough. We have way too many,” McCrory said to laughter from the audience.

He said we have too many lawyers too, adding that some mechanics are making more than lawyers.

“And journalists, did I say journalists?” he said for emphasis.

My favorite neocon friend/mentor/correspondent wrote me to ask:

What say you to your Governor about this? In fact, he is always partly right. In fact, your Univeristy [sic] Entitled Ones are always more wrong than right.

Here’s my answer:

Continue reading “his honor wants more truck drivers”

the jellybean problem

I’m not as big a fan of xkcd as many geekly friends are, but, in my mind, this cartoon remains the most incisive depiction of the basic problem of low-sigma null hypothesis significance testing in practice.

(I was reminded of it by Matt’s comment yesterday about how he uses Twenty Questions as an example while teaching. The jellybean comic isn’t the use of twenty questions Matt had in mind, but it captures the idea: you get to test twenty hypotheses, the universe will probably lie to you once, and as long as you get one “YES” you can publish it as if it were the only question you ever asked.)
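To put a number on the universe’s lying, here’s a quick simulation sketch (nothing here is from the comic itself, just the standard multiple-comparisons arithmetic, in R):

    set.seed(42)
    n_sims   <- 10000
    n_colors <- 20    # twenty jellybean colors, none truly linked to acne
    alpha    <- 0.05

    # under the null, p-values are uniform on [0, 1], so a "significant"
    # result is just a uniform draw landing below alpha
    false_alarm <- replicate(n_sims, any(runif(n_colors) < alpha))
    mean(false_alarm)          # simulated rate of >= 1 hit, roughly 0.64
    1 - (1 - alpha)^n_colors   # exact answer: 1 - 0.95^20, about 0.64

So about two times in three, at least one jellybean color will “work,” and that’s the one that makes the headline.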

replicating the future

Andrew Gelman has a post on a meta-analysis of Daryl Bem’s research on precognition (yes, precognition):

The ESP context makes this all look like a big joke, but the general problem of researchers creating findings out of nothing, that seems to be a big issue in social psychology and other research areas involving noisy measurements. … I have a feeling that the authors of this paper think that if you have a p-value or Bayes factor of 10^-9 then your evidence is pretty definitive, even if some nitpickers can argue on the edges about this or that. But it doesn’t work that way. The garden of forking paths is multiplicative, and with enough options it’s not so hard to multiply up to factors of 10^-9 or whatever. And it’s not like you have to be trying to cheat; you just keep making reasonable choices given the data you see, and you can get there, no problem. Selecting ten-year-old papers and calling them “exact replications” is one way to do it.

I think the parapsychology research is actually extremely useful, especially if one is willing to take as incorrigible the proposition that parapsychological phenomena aren’t real. Because then parapsychology serves as a kind of control group for science practice, and what’s striking about the Bem research is how much it looks like ordinary psychological science – even psychological science that goes above and beyond the norm – and yet the findings are what they are.
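Gelman’s multiplicative point is easy to see in simulation. Here’s a minimal sketch in R: pure-noise data, five “reasonable” ways to test the same idea, and the best-looking p-value kept. (The specific forks below are my own illustrative inventions, not anything from the Bem literature.)

    set.seed(1)
    n_sims <- 2000
    n <- 100

    min_p <- replicate(n_sims, {
      x <- rnorm(n)                          # pure noise: no true effect
      y <- rnorm(n)
      g <- rep(c("a", "b"), length.out = n)  # an arbitrary subgroup split
      keep <- abs(x) < 2                     # a "reasonable" outlier trim
      ps <- c(
        cor.test(x, y)$p.value,                       # full sample
        cor.test(x[g == "a"], y[g == "a"])$p.value,   # subgroup a only
        cor.test(x[g == "b"], y[g == "b"])$p.value,   # subgroup b only
        cor.test(x, y, method = "spearman")$p.value,  # rank-based version
        cor.test(x[keep], y[keep])$p.value            # outliers trimmed
      )
      min(ps)                                # report the best-looking path
    })

    mean(min_p < 0.05)  # well above 0.05, and that's with only five forks

Five forks already inflate the false-positive rate well past its nominal level; with dozens of forking decisions, multiplying your way down to 10^-9 stops looking so miraculous.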