people deserve respect; viewpoints don’t

I have not been following the story of Steven Salaita and the University of Illinois closely, but the details coming out are troubling, to say the least. The basic facts are clear and, I think, undisputed: Salaita was offered a job by the American Indian Studies program at Illinois, which he accepted. He resigned from his existing position and was preparing to move to Illinois when he was told that he would not in fact be hired, because the Chancellor refused to send his appointment to the board of trustees for approval. The Chancellor’s refusal seemed connected to Salaita’s statements about Israel-Palestine on Twitter and to his criticisms of the phrase “support our troops” published in Salon.*

Today, more details have come out in the form of emails between the Chancellor and various parties, including, perhaps most disturbingly, the fundraising wing of the University.

Continue reading “people deserve respect; viewpoints don’t”

sociologists’ statement on Ferguson

Last week, a group of 10 sociologists gathered at ASA to discuss the terrible situation in Ferguson.* Following that meeting, the group wrote up draft text for a statement. Here’s how they diagnose some of the larger problems:

Law enforcement’s hyper-surveillance of black and brown youth has created a climate of suspicion of people of color among police departments and within communities. The disrespect and targeting of black men and women by police departments across the nation creates an antagonistic relationship that undermines community trust and inhibits effective policing. Instead of feeling protected by police, many African Americans are intimidated and live in daily fear that their children will face abuse, arrest and death at the hands of police officers who may be acting on implicit biases or institutional policies based on stereotypes and assumptions of black criminality. Similarly, the police tactics used to intimidate protesters exercising their rights to peaceful assembly in Ferguson are rooted in the history of repression of African American protest movements and attitudes about blacks that often drive contemporary police practices.

If you are interested in signing the statement, you can do so here.

* I was not at the meeting, and thus cannot provide any details beyond what’s in these documents. Links to the petition were circulated by Alison Gerber, who can perhaps answer queries.

everything you wanted to know about bad citation practices

Spinach, it turns out, is not an especially good source of iron. As the story goes, people believe it’s a good source of iron because of a misplaced decimal point in a 1930s publication that reported the iron content of spinach at ten times its actual level. But this story is itself apocryphal, as Ole Bjørn Rekdal wonderfully narrates in a cheeky and insightful piece in the most recent Social Studies of Science, “Academic Urban Legends.”

Rekdal traces the origins of the spinach-decimal-point myth and uses the occasion to catalog bad citation practices, including citing secondary sources for a point made by an original source without verifying the original, citing an original source instead of a secondary source while relying on the secondary source’s interpretation, and more. Rekdal also traces the urban legend that most academic papers are never cited back to a 1980s study that actually found no such thing. I highly recommend the entire short piece; it’s funny and surprising throughout. For example, Popeye never claimed that spinach made you stronger because it had a lot of iron; Popeye’s creator apparently had vitamin A in mind instead!

Continue reading “everything you wanted to know about bad citation practices”

okcupid is the new facebook? more on the politics of algorithmic manipulation

OkCupid’s excellent blog just posted the results of a set of experiments they conducted on their own users. The post is framed as an explicit defense of similar practices at Facebook:

We noticed recently that people didn’t like it when Facebook “experimented” with their news feed. Even the FTC is getting involved. But guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.

In this post, I want to engage with the above argument in the context of OKC’s own manipulation.
Continue reading “okcupid is the new facebook? more on the politics of algorithmic manipulation”

junior theorists symposium 2014

The final schedule for the 2014 Junior Theorists Symposium has just been released. If you’re going to be in the Bay Area the day before ASA (Friday, August 15) and have not already committed to one of the other pre-conferences, stop by 60 Evans Hall at the University of California, Berkeley to see some amazing junior theory in action! If you have any questions, or would like to RSVP, just send an email to Jordanna Matlon and me at juniortheorists@gmail.com.

all persons are fictional

In the wake of the Hobby Lobby decision, there have been renewed discussions of corporate personhood. The argument is relatively simple: the 19th-century Supreme Court made a mistake when it created the legal fiction that corporations are persons. I don’t want to get into that argument here. Instead, I want to make a slightly different argument: all persons are fictions.

Continue reading “all persons are fictional”

experimental vs. statistical replication

In the context of all of the debates about replication going on across the blogs, it might be useful to introduce a distinction: experimental vs. statistical replication.* Experimental replication is the more obvious kind: can we run a new experiment using the same methods and produce a substantially similar result? Statistical replication, on the other hand, asks, can we take the exact same data, run the same or similar statistical models, and reproduce the reported results? In other words, experimental replication is about generalizability, while statistical replication is about data manipulation and model specification.
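
To make the contrast concrete, here’s a minimal sketch (in Python) of what statistical replication might look like in practice; the data file, variable names, and “reported” coefficient below are made-up placeholders, not from any actual study.

```python
# Statistical replication: re-run the authors' model on the authors' own data
# and check whether the reported estimate comes back.
# The file name, variables, and "reported" value are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("original_study_data.csv")   # the shared replication dataset
fit = smf.ols("outcome ~ treatment + age", data=df).fit()

reported = 0.42                               # coefficient reported in the paper
reproduced = fit.params["treatment"]
print(f"reported: {reported:.2f}  reproduced: {reproduced:.2f}")

# Experimental replication would instead collect *new* data with the same
# design and ask whether a substantially similar estimate shows up.
```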

On the one hand, sociology, economics, and political science all have ongoing issues with statistical replication. The big Reinhart and Rogoff controversy was the result of an attempt to replicate a statistical finding, an attempt that revealed unreported shenanigans in how cases were weighted and that some cases had simply been dropped through error. Gary King’s work on improving replication in political science aims at making this kind of replication easier, and even turning it into a standard part of the graduate curriculum. Similarly, I believe the UMass paper that failed to replicate Reinhart and Rogoff emerged out of an econometrics class assignment (e.g.) that required students to statistically replicate a published finding.

On the other hand, psychology seems to have a big problem with experimental replication. Here the concerns are less about model specification (as the models are often simple, bivariate relationships) or data coding, and more about implausibly large effects and “the file drawer problem,” in which published results are biased toward significance (which in turn makes replications much more likely to produce null findings).
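
To see why the file drawer problem makes failed replications so likely, here’s a toy simulation (my own illustrative numbers, not drawn from any of the studies discussed): a modest true effect studied with small samples, where only significant results get “published.”

```python
# Toy simulation of the file drawer problem: only significant studies are
# "published," which inflates published effect sizes and means that exact
# replications of published studies often come up non-significant.
# All numbers here are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, n, runs = 0.2, 30, 5000   # modest effect, small samples

published_effects, replication_sig = [], []
for _ in range(runs):
    treat = rng.normal(true_effect, 1, n)
    control = rng.normal(0, 1, n)
    _, p = stats.ttest_ind(treat, control)
    if p < 0.05:                        # the file drawer: only significant results appear
        published_effects.append(treat.mean() - control.mean())
        # an exact replication: same design, same sample size, new data
        _, p2 = stats.ttest_ind(rng.normal(true_effect, 1, n), rng.normal(0, 1, n))
        replication_sig.append(p2 < 0.05)

print(f"true effect: {true_effect}")
print(f"mean published effect: {np.mean(published_effects):.2f}")             # inflated
print(f"share of replications significant: {np.mean(replication_sig):.2f}")   # low
```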

Both of these kinds of replication are clearly important, but they present somewhat different issues. For example, Mitchell’s concern that replication will be incompetently performed and thus produce null findings when real effects exist makes less sense in the context of statistical replication, where the choices made by the replicator can be reported transparently and the data are shared by all researchers. So, as an attempt at an intervention, I propose we try to make clear when we’re talking about experimental replication vs. statistical replication, or if we really mean both. Perhaps we might even call the second kind of replication something else, like “statistical reproduction,”** in order to highlight that the attempt to reproduce the findings is not based on new data.

What do you all think?

* H/T Sasha Killewald for a conversation about different kinds of replication that sparked this post.
** Think “artistic reproduction” – can I repaint the same painting? Can I re-run the same models and data and produce the same results?