the ethics of that Facebook study

I haven’t followed the Facebook study kerfuffle in any detail, nor have I looked at the study itself. But the ethics of the study have really bothered folks. I do think Facebook is incredibly creepy for the information and power it possesses, so I can see why folks’ creepometers would be super-sensitive to Facebook experiments.* Still, I don’t get the freakout. Or, at least, there are existing research designs behavioral scientists use that I’ve already decided I’m okay with, so it’s hard for me to understand the outrage about the Facebook experiment. Three examples:

1. Experiments that use outright deception. Subjects are straight-up lied to, including elaborate scenarios in which people they believe to be fellow subjects are really researchers, etc. They’ve given only non-specific informed consent, and they are often manipulated in ways intended to elicit stronger emotional states than whatever mild melancholia/happiness Facebook was going for. I’ve never understood why debriefing afterwards is supposed to be such a great ethical cleanser for doing this, except that it generally underscores the point that _usually_ people aren’t too upset.

2. Audit studies. Major audit studies deceive subjects, do not use informed consent, do not debrief afterward, and waste people’s time by having them do call-backs to bogus job/housing applicants.

3. Affective neuroscience studies. Have you seen some of the photos folks are shown in those studies? OK, so maybe I’m just showing off my super-squeamish side here, but even though people sign consent forms, it is far from clear they know that is what they’ve signed on for.

* That said, if you want to see some cute Australian animal photos taken by my beloved, etc., feel free to FB friend me.

Author: jeremy

I am the Ethel and John Lindgren Professor of Sociology and a Faculty Fellow in the Institute for Policy Research at Northwestern University.

18 thoughts on “the ethics of that Facebook study”

    1. Well, this, for example.

      (Although not sure why I’m linking to my old blog when I intentionally replaced it with a crap template to make it as low-profile as possible.)


  1. Just saw this 30 seconds after posting this comment over on orgtheory, so I’m going to copy/paste the same comment here:

    I think the reason many of us were insufficiently outraged by the NSA revelations is that we know we are being spied on and manipulated constantly by large private companies, and we feel powerless to do anything about it. With this scandal, I’m afraid my outrage is focused less on what FB did than on:

    a) the fact that no legitimate academic researcher would ever get a project like this through an IRB;

    b) the fact that my IRB, at least, says directly that academics OUGHT to be held to higher standards than private businesses, and that a research project’s proposing to use personal information readily available on the Internet is not sufficient justification for an exemption if the IRB can think up some scenario in which a person might be hurt by that information, including saying “well, it may be public, but they should not have made it public”;

    c) the fact that my IRB, at least, requires all FB research to abide by FB’s “privacy statement,” which basically says that FB owns everything on its pages and anyone else needs to get individualized consent to do anything with the content;

    d) the fact that non-academic researchers who are not subject to the increasingly intrusive IRB regulations are now publishing in academic journals.

    I’m not saying my outrage should be so narrowly focused, but it is.


    1. Way past my bedtime, but:

      my IRB at least says directly that academics OUGHT to be held to higher standards than private businesses

      In general — that is, not about Facebook — do you agree with this position? There’s a blog post waiting to happen.


      1. Responding both to you and to Gabriel’s comment below, I don’t think academic research should be held to higher standards than private business. If it is wrong to hurt someone, it is wrong. I agree with Gabriel’s point and the larger ethical issue that medical research is often (or was often) in danger of abusing the trust people have in their physicians when it blurred the line between treatment and research, but most social science does not involve that issue.

        The federal guidelines specifically exempt most social science from oversight (e.g., ALL survey research that does not identify people, AND IN ADDITION survey or other research that does identify people but poses no meaningful risk of harm). In practice, though, my campus (and many others) insists that proposals undergo the same level of review to be declared exempt as not to be, and although most exempt research does eventually get declared exempt after this process, a significant fraction of projects that meet the federal guidelines for exemption end up getting modified or blocked by the campus IRB. My campus has been particularly aggressive in trying to block FB research.

        We have successfully defended the position that material you can see online without a password or login is a “text” and therefore not a human subject at all (and therefore not in need of IRB clearance or review), but there have been IRB staffers who have tried to argue that even publicly available textual materials like newspapers might require IRB review if the research identifies people.


      2. On one level this is fascinating in how it shows something that starts as coercive isomorphism but goes well beyond the scope of the actual isomorphic demand, as local internal constituencies overreach beyond their mandate. (This is precedented in the literature: Dobbin and Sutton, for example, argue that this was the origin of diversity management, which, regardless of its merits, was originally about meeting federal guidelines but eventually went well beyond ensuring compliance, especially during the lax enforcement of the Reagan EEOC.)

        On another level it’s just horrible for you and your colleagues/students and you have my sincere condolences. Unfortunately it doesn’t seem isolated as I’ve heard similar horror stories about historians. That IRB staffers overreach to the point of trying to ensure anonymity for people already in the public record suggests to me that they themselves might make an appropriate subject pool for experiments involving electric shocks, untreated diseases, etc.


  2. A few comments… There appear to be methodological questions about the study. See, for example,

    Also, there are, to me, clear ethics violations here. For example, subjects were not properly debriefed on the experiment in a timely fashion afterwards and the experiment was not properly reviewed by an appropriate human subjects body (though I have seen claims to the contrary…I am not sure that Facebook’s institutional review board would have been making decisions in the best interest of subjects). Knowing what I have read to date, it appears that the researchers and the PNAS editor were not operating with a view to what I believe to be minimal ethical or scientific standards for conducting and publishing research of this type.

    So, yes, I think there is reason for serious concern, and further investigation into both the research and the process by which the piece was published. But, maybe that’s just me.


    1. Even farther past my bedtime, but:

      I get the argument that there are procedures we have about ethics and that this research appears at variance with those procedures. But that seems different to me from the actual ethics. I mean, what would be the actual ethical accomplishment of debriefing in this case? And filing for human subjects approval is more an ethical means than an ethical end.


      1. Exactly. While some objections are on the merits, the objections in terms of informed consent, debriefing, etc. are essentially of the character that the proper incantations were not recited to appease the human subjects gods. Those kinds of things make a lot of sense for medical trials, a pretty good amount of sense for social science research on sensitive topics or vulnerable populations, and basically no sense whatsoever for unobtrusive A/B testing of a relatively minor manipulation with predictably minimal effect sizes. And yet if you don’t check off the boxes there is much wailing and gnashing of teeth.

        Or as Meyer and Rowan might put it, “IRB as myth and ritual.”


  3. I was thinking of “research design” as something more along the lines of a social psych experiment from the 60s(?) that I dimly remember. The subjects, straight males, were led to believe that their responses indicated that they were homosexual. I think of Goode’s behavior as less “research design” and more “stuff happens.”


  4. This is just an aside, but between the Hurricanes/himmicanes study and now the Facebook study, Susan Fiske is having a really bad month. Not as bad as Jim Wright had during the Regnerus affair, but still bad. Talk about a thankless job.

    Jay: you mean like Willer’s 2013 masculine overcompensation study? IIRC, his experimental manipulation didn’t involve getting college freshmen and sophomores to think they were gay, but it did involve tricking young men and women into thinking they were more or less masculine.


    1. I don’t trust my memory on this, but I think it involved some rigged apparatus that gave physiological feedback (maybe like a GSR). As I’ve said elsewhere, one of the major intellectual forebears of experimental social psychology seems to have been “Candid Camera,” and as in that show, the designer’s real interest seems to have been in the creation of a clever scenario – like a phony sexuality measure or a talking mailbox. The reactions of the subjects were secondary.

      The term “experimental manipulation” is apt. The researcher’s goal is to manipulate subjects, not just variables. I think that people resent the FB study because it tried to manipulate people’s emotions.


  5. Jeremy — wouldn’t this fit in your category of statistically significant findings due to massive sample size? I don’t know anything about this world of research, but a 0.04% change in posts seems too negligible to influence the things that we care about in the world.

    I can understand why Facebook cares — and the authors make a claim regarding why this is important in the conclusion — but I’m not sure why social scientists would care that much.

    If, on the other hand, they could show that posts with female hurricane names led to fewer apprehensive posts than male hurricane names, THEN we would be onto something big!


    1. I MTweeted something last night about how the effect size in the Facebook study was literally equal to adding a hair to the average man’s height. But then somebody pointed out to me that the FB study was only trying to induce a statistically detectable effect, not actually a large one. It was a good point: I mean, who knows what would happen if Facebook really tried hard to make certain random users happy or unhappy by manipulating their Facebook feeds.


    2. This study was of the psychological variety, not the sociological, and in general psychologists care very little about effect sizes. It’s true – go ask a friendly psychologist. So a study with a negligible effect size, like this one, wouldn’t be out of the norms for the discipline.
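    The massive-sample-size point in this exchange can be sketched with a back-of-the-envelope calculation. All the numbers below are hypothetical (roughly 4% of posts classified as “emotional,” a 0.04 percentage-point shift between arms, and millions of posts per arm), chosen only to be in the ballpark of the magnitudes discussed, not figures from the actual study:

    ```python
    import math

    # Back-of-the-envelope illustration of "significant because N is huge."
    # All numbers are made up for illustration, not taken from the study.

    def two_proportion_z(p1, n1, p2, n2):
        """z statistic for a two-sample test of proportions."""
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    n = 3_000_000                                # posts per arm (assumed)
    z = two_proportion_z(0.0415, n, 0.0411, n)   # a 0.04-point shift in the rate
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal p-value

    print(f"z = {z:.2f}, p = {p_value:.3f}")     # clears p < .05 despite a
                                                 # practically negligible effect
    ```

    The flip side of the same point: with 350,000 observations per arm instead of 3 million, the identical shift gives z well under 1 and is nowhere near significant, so detectability here is almost entirely a function of N rather than of the size of the effect.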

