two problems with the facebook study debate

Like much of the sociology blogosphere, I’ve been following the debate over the recent Facebook emotion study pretty closely. (For a quick introduction to the controversy, check out Beth Berman’s post over at Orgtheory.) While I agree that the study is an important marker of what’s coming (and what’s already here), and thus worth our time to debate, I think the overall discussion could be improved by refocusing the debate in two major ways.

1. Facebook, not (just) the Facebook study.

For the most part, the debate has stayed at the level of the particular study, the (creepily) named “Experimental evidence of massive-scale emotional contagion through social networks.” It’s William Gibson’s world, we’re just living in it (and perhaps heading toward Philip K. Dick’s). But the issues surrounding FB’s experimental manipulations are bigger than this study, much bigger. In fact, one common defense of the study is of the form, “FB does this all the time.” Zeynep Tufekci deals with this argument and usefully reorients us to the question of power – a question that gets backgrounded when we frame the issue as the ethics of a particular study rather than as the emergence of new forms of widespread experimental manipulation:

To me, this resignation to online corporate power is a troubling attitude because these large corporations (and governments and political campaigns) now have new tools and stealth methods to quietly model our personality, our vulnerabilities, identify our networks, and effectively nudge and shape our ideas, desires and dreams. These tools are new, this power is new and evolving. It’s exactly the time to speak up!

That is one of the biggest shifts in power between people and big institutions, perhaps the biggest one yet of 21st century. This shift, in my view, is just as important as the fact that we, the people, can now speak to one another directly and horizontally.

I’m not focusing on this one study, or its publication, because even if Facebook never publishes another such study, the only real impact will be the disappointed researchers Facebook employs who have access to proprietary and valuable databases and would like to publish in Nature, Science and PNAS while still working for Facebook. Facebook itself will continue to conduct such experiments daily and hourly, in fact that was why the associated Institutional Review Board (IRB) which oversees ethical considerations of research approved the research: Facebook does this every day.

Read the whole thing. And then remember that this debate was kicked off by the publication of a particular study, but that doesn’t mean we have to confine our debate to this particular study. For example, I would love to see a discussion of the metaphor of ‘news’ in the FB “news feed.” In traditional news organizations – at least in the recent US – the editorial and advertising divisions were organizationally separated. While the walls between these divisions were never absolute, there was at least a pretense of independence. People are rightly worried about the collapse of such walls with the decline of the newspaper business; I would argue people should be equally interested in building new walls, or their analogs, in online spaces. If FB is going to purport to provide me ‘news’ – even if it’s just news about my family and friends – I want FB to make some attempt to guarantee that such news is not intentionally manipulated to present me a biased view (of a product, an event, or even the aggregate emotional state of my network). I doubt FB is interested in issuing such a guarantee, and for the moment they don’t have to. They will instead happily rely on size, market power derived from network effects, and the generally low level of understanding of the background algorithms that serve up the ‘news’ feed. For many people, the debate over this study was the first time they learned that FB does anything other than simply show them the most recent (or perhaps most popular) of their friends’ updates. Let’s not bog that debate down in concerns over the details of this study.
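
For readers for whom this was news, a toy illustration may help. The following sketch is purely hypothetical – the post attributes, weights, and scoring rule are my inventions for illustration, not Facebook’s actual (proprietary) algorithm – but it shows how a single invisible parameter separates a chronological feed from an editorially ranked one:

```python
import time
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float   # seconds since epoch
    engagement: float  # e.g., likes + comments so far (hypothetical)
    sentiment: float   # -1.0 (negative) to +1.0 (positive) (hypothetical)

def chronological_feed(posts):
    """What many users assume they see: friends' posts, newest first."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def ranked_feed(posts, w_recency=1.0, w_engagement=5.0, w_sentiment=0.0):
    """What a ranked feed does instead: score each post by opaque weights
    and sort by score. Quietly raising or lowering w_sentiment is, in
    miniature, the kind of manipulation at issue in the emotion study."""
    now = time.time()

    def score(p):
        hours_old = (now - p.timestamp) / 3600.0
        return (-w_recency * hours_old
                + w_engagement * p.engagement
                + w_sentiment * p.sentiment)

    return sorted(posts, key=score, reverse=True)
```

The point is not that these are FB’s weights – they aren’t public – but that a user reading the output of ranked_feed has no way to know which knobs were turned, or why.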

2. Ethics, not the IRB.

Another trap the debate over the study has fallen into is the conflation of research ethics with IRB approval. I probably don’t need to convince the readers of this blog, or really any practicing social scientist, that IRBs have relatively little to do with ethics and rather a lot to do with heading off lawsuits and complying with regulatory mandates (for recent histories of the IRB, see Schrag 2009 and Stark 2011). As such, the debate over whether or not the FB study got IRB approval, should have gotten IRB approval, could have gotten IRB approval, and so on, seems to me a bit misplaced – and, as some seeming defenders of FB have noted, seems aimed at expanding the scope of IRBs. But as I’ve just tried to argue, the real issue here is not just this particular study but rather the broader ethics of the sorts of manipulations FB is making. We are, and should be, debating the question of consent, for example – but not within the narrow confines of ‘whether this passes IRB muster.’ What does consent mean in this context? What’s sufficient?

Relatedly, Jeremy recently posted a list of study designs that bothered him less than the FB study. But as we later discussed on Twitter (not FB, of course!), ethical considerations for experimental research are about balancing costs and benefits, not just assessing the acceptability of a research practice tout court. Audit studies and outright deceptive psychological experiments may well impose bigger costs and risks, but they also (potentially) yield clear, socially desirable knowledge (about, say, the persistence of gender and racial discrimination in hiring). The ethics of the FB study, if we are going to discuss it as a study in isolation, should be evaluated against its potential payoff – payoff here meaning progress toward some socially desirable goal, not FB’s bottom line.

Similarly, we should evaluate this design against potential alternatives. These alternatives might include Kate Crawford’s proposal for creating opt-in panels for A/B-style research on social networks, but also the simple suggestion that FB notify individuals after it has run such a test on them (both are sketched in code after the quote below). Of course, FB is unlikely to want to notify people that it’s been manipulating their emotions (or anything else about their feed). But that’s part of the point. FB’s desire not to inform its users about its manipulations is rooted in an understanding that users misrecognize FB’s control over what they see, and in a desire not to draw attention to its constant manipulations. That may make sense for FB’s bottom line, but it’s not an ethical justification for research. As Zeynep Tufekci notes, deciding that this sort of research is unethical and shouldn’t be published in the journals isn’t going to stop FB from doing it. But that’s no reason to simply throw up our hands and give up:

The alternative is not that they don’t publish and we forget about it. The alternative is that they publish (or not) but we, as research community, continue to talk about the substantive issue.

As academics and the research community, we should be the last people who greet these developments with a shrug because we are few of the remaining communities who both have the expertise to understand the models and the research, as well as an independent standing informed by history and knowledge.
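
To make the opt-in and notify-after alternatives concrete, here is a minimal sketch. Everything in it is hypothetical – the function names, the consent registry, and the notification mechanism are my inventions for illustration, not anything FB has built or Crawford has specified:

```python
import random

def assign_condition(user_id, consent_registry):
    """Crawford-style opt-in: only users who have affirmatively joined a
    research panel are eligible for experimental manipulation. Everyone
    else simply gets the default (control) experience."""
    if user_id not in consent_registry:
        return "control"
    return random.choice(["control", "treatment"])

def debrief(participants, experiment_name, notify):
    """The notify-after alternative: once the experiment ends, tell each
    participant what was varied and how to follow up or opt out."""
    for user_id in participants:
        notify(user_id,
               f"You took part in the study '{experiment_name}'. "
               "Here is what we varied, and how to review your data or "
               "opt out of future panels.")
```

Neither step is technically hard; the sketch is meant to underline that the obstacles are matters of incentives, not engineering.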

Tl;dr: the problem with the debate over the Facebook study is that we’re debating whether or not a particular study should have been published, and not examining the broader politics of knowledge and manipulation in our brave new social networked world.

Author: Dan Hirschman

I am a sociologist interested in the use of numbers in organizations, markets, and policy. For more info, see here.

16 thoughts on “two problems with the facebook study debate”

  1. I hate Facebook so much that it is difficult for me to participate in this legitimate debate. I am glad that people are finally upset with Facebook over *something*. It is clear to any user that the so-called News Feed is not simply a time-ordered display of the posts of one’s friends, which is annoying in the first place. That they finally publicly documented one of the (many, I assume) manipulations of the algorithm they use to produce that feed might also be seen as a major triumph of transparency. If this is the moment when it occurs to us that perhaps Facebook can produce harm without any of us knowing about it, I am glad. I agree with Zeynep and you that we should keep attention on this, but I think we can be more imaginative about all the harm that a single corporation with all of our personal information might be able to do.

    1. There’s lots of things I don’t like about fb, but the idea that my feed would just be a chronological list of what all my fb friends (or at least those whose posts I haven’t bothered to block) have posted is not remotely enticing. I want them to do a better job of guessing which posts will interest me, not a worse job.

      1. For a bit of (recent) historical perspective, the news feed used to be precisely a chronological list of updates. The transition was bitterly fought by some, and for quite some time there was an option to display posts chronologically (which is now gone). But I understand that perspective.

      2. I think most people are like gradstudentbyday. The problem is that we all have different definitions of what is “newsworthy,” so we want Facebook to highlight different things in our feeds. Balancing the personal and the professional is really tough. Of course, Facebook could make it a lot easier for us to decide for ourselves. But that would mean giving up most of the power that Facebook has.

        Even if Facebook’s ability to directly influence people by changing our opinions on various issues is limited (we aren’t mindless drones), Facebook’s news feed has tremendous power to potentially influence which topics we think are important. See my much longer post along these lines: http://scienceofnews.wordpress.com/2014/07/04/overstating-and-understating-the-influence-of-facebook/

  2. In the background here is the idea that large corporations benefit from harming their consumers. A thoughtful conversation about the issue would at least grant that it’s possible Facebook isn’t run by nefarious people with no ethics – and that, even if it were, lying to and manipulating people isn’t necessarily profitable.

    1. Don’t you think they need to manipulate us in ways that are not really in our interest to get us to keep clicking? At least, if I were them, I would seriously explore how to make the site as addictive as possible, which is not necessarily the same as fun or rewarding.

      1. I think people were already pretty addicted to sharing their lives with each other, and I think it is in people’s interests to be served with free tools to express themselves in increasingly dynamic ways to one another through social media.

        The debate over whether social media is turning us into unthinking, manipulated cattle is taking place ON SOCIAL MEDIA. That the debate is happening here gives the lie to the idea that debates like it will soon disappear in favor of Coke ads programmed to fire at harmonic resonance with my hypothalamus and force me to give up all my political agency.

  3. Dan, I’m glad to see you take this conversation in this direction. I think we need to move from the question of “Does the study violate IRB or human subject protections?” toward the question of “What should human subject protections in these circumstances be?” I disagree, though, that questions about whether this study actually violated IRB protocol are a red herring. Figuring out that question is where the everyday, ethical discussion around these issues is happening right now for academics. But – and this is why I like your turn – I don’t think we can really answer these questions fully until we re-evaluate just the relationship you point to: the nature of the control/influence over individual experience (whether corporate or governmental) and the ethics we should erect within it.

  4. I would find it helpful to distinguish between studies of social media and “things on the Internet” that attempt to manipulate behavior/opinions of others and those that observe and seek to find patterns in what is out there. My gripe is that the latter [observing and categorizing what is there] is confounded with the former on protection of privacy grounds.

    “If FB is going to purport to provide me ‘news’ – even if it’s just news about my family and friends – I want FB to make some attempt to guarantee that such news is not intentionally manipulated to present me a biased view (of a product, an event, or even the aggregate emotional state of my network). I doubt FB is interested in issuing such a guarantee, and for the moment they don’t have to.”

    The whole point of any commercial enterprise in general, and advertising specifically, is to manipulate opinions – to make them more favorable toward buying the product, and/or toward consuming the media product so that advertisers will pay for space in it. There has been a lot of research on what does and does not get into newspapers. Trust me, newspapers are not and never have been unbiased video cams recording everything that passes. Seems to me that academic studies of what FB and other corporations do with all the information they collect are among the important things to study. Not WHETHER they “bias” the feed you see, but HOW their algorithms do the selections.

    One could design experiments ON FB, etc., where one creates a bunch of fictitious beings who behave in controlled ways, to assess the impact of variations in users’ behavior or purported characteristics on how the news feed works. Would our IRBs consider this an unfair invasion of FB’s privacy? My guess is that the SCOTUS would.

    1. “There has been a lot of research on what does and does not get into newspapers. Trust me, newspapers are not and never have been unbiased video cams recording everything that passes. Seems to me that academic studies of what FB and other corporations do with all the information they collect are among the important things to study. Not WHETHER they “bias” the feed you see, but HOW their algorithms do the selections.”

      I agree with this completely. I may have sounded rosier about traditional media than I intended – my point was not that newspapers were unbiased (“if it bleeds, it leads”; “all the news that fits”, etc.), but that there were some institutional arrangements and professional standards that had been established to try to deal with the tensions between commercial pressures and some notion of objective reporting (impossible as that ideal is). A conversation I would love to see is what similar sorts of institutional arrangements would make sense for newer forms of media. Something a bit like this is the recent implementation of the right to be forgotten. (Not that I think this particular rule is going to be a boon; I’m utterly uncertain, but it seems like the right sort of discussion at least.)

      In a similar vein, Jenny Davis over at Cyborgology has an excellent post about the “structural affordances” of FB and how they push for certain kinds of content to be shared (somewhat independent of the algorithms that govern what content gets displayed to a given user): “the structural affordances of Facebook are such that users are far more likely to post positive content anyway. For instance, there is no dislike button, and emoticons are the primary means of visually expressing emotion. Concretely, when someone posts something sad, there is no canned way to respond, nor an adequate visual representation. Nobody wants to “Like” the death of someone’s grandmother, and a Frownie-Face emoticon seems decidedly out of place.”

      Recommended.

  5. olderwoman’s right on, and this is where the tension in the debate converges: to what degree is it cool for “objective scientists” to manipulate/persuade people, and to what degree is it cool for news/communications businesspeople to manipulate/persuade people? Neither we nor Mark Zuckerberg nor anyone else can ever get away with *not* selecting, filtering, manipulating, persuading.

    I have thus resolved that we should not pretend we can: we should just be transparent about our own interests and what we’d like to persuade people of. People, shockingly, filter and digest and deliberate at least to *some* degree on the content they engage with. Sociologists worry about the differential power of the person doing the persuading. Fair play.

    But if Big Powerful Corporations have the power to persuade people unfairly, then it follows that Big Powerful Academics At The World’s Leading Universities do as well. So the argument reduces to who one thinks knows people’s interests better. Why don’t we let people decide for themselves?

    Who cares if Amazon and Facebook learn about you in order to stop annoying you with ads that don’t appeal to you? The only thing to be worried about here is the government getting ahold of our information. That’s why we have a Fourth Amendment – which is being trampled while people focus their ire on private companies that want you to download mp3s.

  6. I guess part of my lack of outrage comes from having done some work in academic marketing research. To name just a few analogous examples of manipulation in studies published in academic journals: grocery stores have tracked customer movement through stores via RFID chips in shopping carts, tested whether in-store coupons are an effective means of price discrimination, and tested whether certain smells lead you to purchase more. Each of these studies was designed to improve sales. The last two explicitly mention the manipulation of people’s emotions as a reason that in-store coupons and store smells increase purchases (and the latter was published in 1996).

    What is new is the degree to which Facebook is pervasive in our lives, and that might call for some Peter Parker scrutiny: with great power comes great responsibility. But I think the knowledge that Facebook is not a public realm and is motivated by profits is now pervasive enough, especially among adults, that we should acknowledge what we concede in order to use this vastly popular and useful product. Absent large-scale efforts to effectively organize to change Facebook’s business practices, I don’t find the expressions of outrage either well-placed or very meaningful.

    And the fact still stands that they did all of this for the benefit of an effectively null result.
