In the 1960s, Stanley Milgram conducted a study on obedience to authority that is now infamous among social scientists. The study was relatively straightforward. Participants were asked to administer shocks to another person who had performed poorly on a test, and were told that doing so could help the poor performer learn to do better. If a participant resisted administering the shocks, a member of the research team would insist that the participant continue for the good of the research. The shocks increased in intensity over the course of the study, reaching a level that could be lethal. In reality, no one received these shocks; a paid actor pretended to be hurt, leading the participant to believe they had caused real harm to another real person. To the surprise of the researchers, over half of participants administered the final “lethal” shock. The findings from this study are commonly used to explain how genocides are perpetrated. Milgram and his team argued that ordinary people are willing to commit incomprehensible acts of violence so long as someone in authority assures them it is the right thing to do.
I first encountered the Milgram study as an undergrad in an introductory psychology class. By the time I graduated, I had learned about the study in at least three other classes. Each time, the discussion was essentially the same. The professor would insist that the findings of the study are important, but that the study itself was unethical due to the harm it caused participants. That harm was described as the emotional trauma of walking around with the knowledge that you could, and would, murder another person if someone asked you to do so. There are other ethical issues as well, including the deception used by the research team and how difficult it was for participants to withdraw their consent to be in the study, but these were also tied back to that main concern: the weight on the conscience of a participant who administered that “lethal” shock.
As a professor, I was prepared to have the same discussion with my students in Science, Power and Diversity when we covered research ethics. But when it came time to do so, I found I had a different perspective on the Milgram study, one that comes from my own work with perpetrators of sexual violence and from how hard it is to research them.
The seductive power of sensual charm survives only where the forces of denial are strongest. If asceticism once reacted against the sensuous aesthetic, asceticism has today become the sign of advanced art. All “light” and pleasant art has become illusory and false. What makes its appearance esthetically in the pleasure categories can no longer give pleasure. The musical consciousness of the masses today is “displeasure in pleasure” — the unconscious recognition of “false happiness.”
–Adorno, “On the Fetish-Character in Music and the Regression of Listening,” 1938
Jeff Guhin innocently posted to Facebook that “doing a lecture on Habermas is ridiculous.” He may well be right, for many different kinds of reasons. But in the (lengthy!) conversation that followed, two critiques were raised that I think deserve separate treatment. They are:
That much theory, including Habermas and, all the more so, his Frankfurt School predecessors, is too difficult to read to be worth the effort; and
That reading theorists like Habermas is really mostly about the history of social thought and has no payoff for empirical or analytical sociology.
I am going to take Dan’s invitation to consider one aspect of the polls that I don’t see getting a lot of attention right now, but that I think could be important: undecided voters could explain much of the polling error being discussed.
In other words, I don’t think that the polls were that wrong. I know that this view puts me in the minority, even among people who think about these things for a living. What we have, I think, is a failure to really consider how we should interpret polls given two very unpopular candidates and a possible “Shy Tory” effect where Trump supporters reported being undecided to pollsters.
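A quick back-of-the-envelope sketch shows the mechanism. The numbers below are purely illustrative, not actual 2016 polling data: if a poll shows a 3-point margin with a large undecided share, the final margin can swing by several points depending on how those undecideds break, without the poll having been wrong about decided voters at all.

```python
def final_margin(poll_a, poll_b, undecided_share_to_a):
    """Allocate the undecided share between two candidates and
    return candidate A's final margin, in percentage points."""
    undecided = 100 - poll_a - poll_b
    a = poll_a + undecided * undecided_share_to_a
    b = poll_b + undecided * (1 - undecided_share_to_a)
    return a - b

# Hypothetical poll: A 46, B 43, 11% undecided.
# If undecideds split evenly, the poll's 3-point margin holds:
print(final_margin(46, 43, 0.5))              # 3.0

# If undecideds break 2-to-1 for B (e.g., "shy" supporters
# deciding late), the margin flips without any polling error:
print(round(final_margin(46, 43, 1 / 3), 1))  # -0.7
```

The point is not that these particular numbers are right, but that a modest undecided pool is arithmetically large enough to absorb most of the "error" being attributed to the polls.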
O’Neil looks out at the land of big data and its various uses in algorithms and sees problems everywhere. Quantitative and statistical principles are badly abused in the service of “finding value” in systems, whether through firing bad teachers, targeting predatory loans, reducing the risk of employee turnover with models that incorporate past mental health issues, or designing better ads to sniff out for-profit university matriculants. Wherever we look, she shows, we can find mathematical models used to eke out gains for their creators. Those gains destroy the lives of people affected by algorithms they sometimes don’t even know exist.
Unlike treatises that declare algorithms universally bad or always good, O’Neil asks three questions to determine whether we should classify a model as a “weapon of math destruction”:
Is the model opaque?
Is it unfair? Does it damage or destroy lives?
Can it scale?
These questions eliminate the math entirely. By doing so, O’Neil makes it possible to study WMDs by their characteristics, not their content: one need not know anything about the internal workings of a model to attempt to answer these three empirical questions. More than any other contribution the book makes, this opacity-damage-scalability schema, which lets us identify WMDs as social facts, is what makes it valuable.
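To make the point concrete, O’Neil’s three-question test can be sketched as a simple predicate over an auditor’s answers. The field names below are my own shorthand, not O’Neil’s; note that nothing in the check requires looking inside the model’s math.

```python
from dataclasses import dataclass

@dataclass
class ModelAudit:
    opaque: bool     # Is the model hidden from the people it scores?
    damaging: bool   # Does it damage or destroy lives?
    scalable: bool   # Can it grow to affect large populations?

def is_wmd(audit: ModelAudit) -> bool:
    """A model qualifies as a 'weapon of math destruction' when all
    three conditions hold; none require inspecting the model itself."""
    return audit.opaque and audit.damaging and audit.scalable

print(is_wmd(ModelAudit(opaque=True, damaging=True, scalable=True)))   # True
print(is_wmd(ModelAudit(opaque=True, damaging=False, scalable=True)))  # False
```

The design choice worth noticing is that the inputs are external, observable characteristics, which is exactly what makes the schema usable by social scientists who never see the proprietary code.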
Last weekend, Slate announced the use of social scientific tools similar to those used by campaigns themselves to anticipate results over the course of the day. Slate rejects, in editor-in-chief Julia Turner’s words, the “paternalistic” stance of the traditional media embargo on publishing results during Election Day.
Slate is making a bold move by ignoring the embargo, but in doing so they also appear to be ignoring the flaws of data science and a sacrosanct principle of both social science and journalism: skepticism.
Timothy Carney wrote an article earlier this week decrying what he calls the “rampant abuse of data” by pollsters and the press this election season. He faults North Carolina’s hometown polling company, Public Policy Polling (PPP), among others, for asking “dumb polling questions” such as the popularity of the erstwhile Cincinnati Zoo gorilla Harambe; support for the Emancipation Proclamation; and support for bombing Agrabah, the fictional country in which the Disney film Aladdin is set.
While I agree with Carney that many of the interpretations of these questions are very problematic (and I should note that I have used PPP many times to field polls for my own research), I think he’s wrong that these are dumb questions and that the answers therefore do not constitute “data.” Quite the opposite: asking vague and difficult-to-answer questions is an important technique for assaying culture and, thereby, revealing contours of public opinion that cannot be observed using conventional polling.
A couple of weeks ago I got in a friendly back-and-forth on Twitter with my friend and colleague Daniel Kreiss. Daniel was annoyed by this article, which purports to reveal why Mitt Romney chose Paul Ryan to be his running mate by deploying median-voter theory. Daniel’s frustration was this:
I love these studies – complicated models, and no one thought to ask former staffers what went into the decision. https://t.co/t99mfXhyUl