Today, three researchers at Facebook released a new study in Science titled “Exposure to ideologically diverse news and opinion on Facebook.” The authors summarize their own findings in a companion blog post:
We found that people have friends who claim an opposing political ideology, and that the content in peoples’ News Feeds reflect those diverse views. While News Feed surfaces content that is slightly more aligned with an individual’s own ideology (based on that person’s actions on Facebook), who they friend and what content they click on are more consequential than the News Feed ranking in terms of how much diverse content they encounter.
As several commentators have noted, this framing is a little weird.
Christian Sandvig argues that the results are written up carefully to exculpate Facebook from exactly the charge that the research supports: that Facebook's algorithm polarizes what news users see. That users choose to preferentially click on ideologically aligned stories is beside the point. The study shows that Facebook's algorithm "removes hard news from diverse sources that you are less likely to agree with but it does not remove the hard news that you are likely to agree with."
Zeynep Tufekci and Eszter Hargittai offer related criticism, focusing on significant methodological issues that are easy to miss in the Science piece. Tufekci, for example, argues that the most important finding is buried in an appendix: a confirmation that placement in the News Feed has a huge effect on the likelihood that users will click on a story, meaning that Facebook has at its disposal the power to drive clicks toward or away from particular stories. This finding isn't unprecedented, but it's important to see it confirmed by Facebook's own in-house researchers.
All three pieces note that the sample used in the study consists of just two thirds of half of 4% of users – those who use Facebook more than 4 days per week, and whose ideology could be coded from self-reported ideological variables in their Facebook profiles. Given the limitations of the sample, Hargittai argues that the authors' claim to "conclusively establish that on average in the context of Facebook, individual choices (2, 13, 15, 17) more than algorithms (3, 9) limit exposure to attitude-challenging content" is unsupportable.
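For a sense of scale, the fractions quoted above multiply out to a small slice of Facebook's user base. A quick back-of-envelope calculation (using only the numbers in the sentence above, not any breakdown from the study itself):

```python
# Back-of-envelope: what share of Facebook users ended up in the study sample,
# multiplying the fractions quoted by the commentators.
self_reported = 0.04  # 4% of users
half = 1 / 2          # half of those
two_thirds = 2 / 3    # two thirds of that half

sample_share = two_thirds * half * self_reported
print(f"{sample_share:.1%} of users")  # about 1.3%
```

In other words, roughly 1.3% of users, and a self-selected 1.3% at that, which is the core of Hargittai's objection to the word "conclusively."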
But again, as Sandvig notes, this comparison is weird in the first place:
The tobacco industry might once have funded a study that says that smoking is less dangerous than coal mining, but here we have a study about coal miners smoking. Probably while they are in the coal mine. What I mean to say is that there is no scenario in which “user choices” vs. “the algorithm” can be traded off, because they happen together (Fig. 3 [top]). Users select from what the algorithm already filtered for them. It is a sequence.
Read the study, read the commentaries, and let us know what you think in the comments!
EDIT: Nathan Jurgenson has useful reflections as well, focusing on the impossibility of a neutral filtering algorithm:
Facebook orders and ranks news information, which is doing the work of journalism, but they refuse to acknowledge they are doing the work of journalism. Facebook cannot take its own role in news seriously, and they cannot take journalism itself seriously, if they are unwilling to admit the degree to which they shape how news appears on the site. The most dangerous journalism is journalism that doesn’t see itself as such.
To my mind, the "algorithm" vs. "user choice" framing of the study echoes debates about the role of genetics vs. the environment, or nature vs. nurture. As in those debates, the simple version of the framing is highly misleading. As Sandvig puts it: "algorithm and user form a coupled system of at least two feedback loops."