Like much of the sociology blogosphere, I’ve been following the debate over the recent Facebook emotion study pretty closely. (For a quick introduction to the controversy, check out Beth Berman’s post over at Orgtheory.) While I agree that the study is an important marker of what’s coming (and what’s already here), and thus worth our time to debate, I think the overall discussion could be improved by refocusing the debate in two major ways.
1. Facebook, not (just) the Facebook study.
For the most part, the debate has stayed at the level of the particular study, the (creepily) titled “Experimental evidence of massive-scale emotional contagion through social networks.” It’s William Gibson’s world, we’re just living in it (and perhaps heading toward Philip K. Dick’s). But the issues surrounding FB’s experimental manipulations are bigger than this study, much bigger. In fact, one common defense of the study is of the form, “FB does this all the time.” Zeynep Tufekci deals with this argument and usefully reorients us to the question of power, which gets pushed into the background when the issue is framed as the ethics of a particular study rather than the emergence of new forms of widespread experimental manipulation:
To me, this resignation to online corporate power is a troubling attitude because these large corporations (and governments and political campaigns) now have new tools and stealth methods to quietly model our personality, our vulnerabilities, identify our networks, and effectively nudge and shape our ideas, desires and dreams. These tools are new, this power is new and evolving. It’s exactly the time to speak up!
That is one of the biggest shifts in power between people and big institutions, perhaps the biggest one yet of 21st century. This shift, in my view, is just as important as the fact that we, the people, can now speak to one another directly and horizontally.
I’m not focusing on this one study, or its publication, because even if Facebook never publishes another such study, the only real impact will be the disappointed researchers Facebook employs who have access to proprietary and valuable databases and would like to publish in Nature, Science and PNAS while still working for Facebook. Facebook itself will continue to conduct such experiments daily and hourly, in fact that was why the associated Institutional Review Board (IRB) which oversees ethical considerations of research approved the research: Facebook does this every day.
Read the whole thing. And then remember that this debate was kicked off by the publication of a particular study, but that doesn’t mean we have to confine our debate to this particular study. For example, I would love to see a discussion of the metaphor of ‘news’ in the FB “news feed.” In traditional news organizations – at least in the recent US – the editorial and advertising divisions were organizationally separated. While the walls between these divisions were never absolute, there was at least a pretense of independence. People are rightfully worried about the collapse of such walls with the decline of the newspaper business; I would argue people should be equally interested in building new walls, or their analogs, in online spaces. If FB is going to purport to provide me ‘news’ – even if it’s just news about my family and friends – I want FB to make some attempt to guarantee that such news is not intentionally manipulated to present me a biased view (of a product, an event, or even the aggregate emotional state of my network). I doubt FB is interested in issuing such a guarantee, and for the moment it doesn’t have to. It will instead happily rely on its size, the market power derived from network effects, and the generally low level of understanding of the background algorithms that serve up the ‘news’ feed. For many people, the debate over this study was the first time they learned that FB does anything other than simply show them the most recent (or perhaps most popular) of their friends’ updates. Let’s not bog that debate down in concerns over the details of this study.
2. Ethics, not the IRB.
Another trap the debate over the study has fallen into is the conflation of research ethics and IRB approval. I probably don’t need to convince the readers of this blog, or really any practicing social scientist, that IRBs have relatively little to do with ethics and rather a lot to do with heading off lawsuits and complying with regulatory mandates (for recent histories of the IRB, see Schrag 2009 and Stark 2011). As such, the debate over whether or not the FB study got IRB approval, should have gotten IRB approval, could have gotten IRB approval, and so on, seems to me a bit misplaced – and, as some seeming defenders of FB have noted, seems aimed at expanding the scope of IRBs. But as I’ve just tried to argue, the real issue here is not just this particular study but rather the broader ethics of the sorts of manipulations FB is making. We are, and should be, debating the question of consent, for example – but not in the narrow confines of whether this passes IRB muster. What does consent mean in this context? What’s sufficient?
Relatedly, Jeremy recently posted a list of study designs that bothered him less than the FB study. But as we later discussed on Twitter (not FB, of course!), ethical considerations for experimental research are about balancing costs and benefits, not just assessing the acceptability of a research practice tout court. Audit studies and outright deceptive psychological experiments may well impose a bigger cost or risk, but they also (potentially) yield clear, socially desirable knowledge (about, say, the persistence of gender and racial discrimination in hiring). The ethics of the FB study, if we are going to discuss it as a study in isolation, should be evaluated against its potential payoff – payoff here meaning progress toward some socially desirable goal, not FB’s bottom line.
Similarly, we should evaluate this design against potential alternatives. These alternatives might include Kate Crawford’s proposal for creating opt-in panels for A/B-style research on social networks, but also the simple suggestion that FB notify individuals after it has run such an experiment on them. Of course, FB is unlikely to want to notify people that it’s been manipulating their emotions (or anything else about their feed). But that’s part of the point. FB’s desire not to inform its users about its manipulations is rooted in an understanding that users misrecognize FB’s control over what they see, and in a desire not to draw attention to its constant manipulations. That may make sense for FB’s bottom line, but it’s not an ethical justification for research. As Zeynep Tufekci notes, deciding that this sort of research is unethical and shouldn’t be published in the journals isn’t going to stop FB from doing it. But that’s no reason to simply throw up our hands and give up:
The alternative is not that they don’t publish and we forget about it. The alternative is that they publish (or not) but we, as research community, continue to talk about the substantive issue.
As academics and the research community, we should be the last people who greet these developments with a shrug because we are few of the remaining communities who both have the expertise to understand the models and the research, as well as an independent standing informed by history and knowledge.
Tl;dr: the problem with the debate over the Facebook study is that we’re debating whether or not a particular study should have been published, and not examining the broader politics of knowledge and manipulation in our brave new social networked world.