OK Cupid’s excellent blog just posted the results of a set of experiments they conducted on their own users. The post is framed in explicit defense of similar practices at Facebook:
We noticed recently that people didn’t like it when Facebook “experimented” with their news feed. Even the FTC is getting involved. But guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.
In this post, I want to engage with the above argument in the context of OKC’s own manipulations.
With regard to the “everybody’s doing it” defense, I think Zeynep Tufekci’s analysis remains essential. Yes, many large websites routinely conduct experiments on their users for various ends, from improving sales or matches to getting out the vote. Tufekci calls this trend “engineering the public”. She rightly argues that it’s precisely because these methods are new, pervasive, and subtle that academics ought to be criticizing them and advocating for some kind of ethical standards – independent of whether these internal tests are ever published:
To me, this resignation to online corporate power is a troubling attitude because these large corporations (and governments and political campaigns) now have new tools and stealth methods to quietly model our personality, our vulnerabilities, identify our networks, and effectively nudge and shape our ideas, desires and dreams. These tools are new, this power is new and evolving. It’s exactly the time to speak up!
So, yes, OKC is right that everyone’s doing it – and that’s precisely why we ought to do something about it now, before it becomes so utterly taken for granted that there’s little hope of developing any kind of protocols governing transparency or accountability.
Now onto the substance of OKC’s particular manipulations. I think the third experiment is the most interesting. Here’s how OKC explained it:
The ultimate question at OkCupid is, does this thing even work? By all our internal measures, the “match percentage” we calculate for users is very good at predicting relationships. It correlates with message success, conversation length, whether people actually exchange contact information, and so on. But in the back of our minds, there’s always been the possibility: maybe it works just because we tell people it does. Maybe people just like each other because they think they’re supposed to? Like how Jay-Z still sells albums?
To test this, we took pairs of bad matches (actual 30% match) and told them they were exceptionally good for each other (displaying a 90% match.)* Not surprisingly, the users sent more first messages when we said they were compatible. After all, that’s what the site teaches you to do.
On the side of this text is a footnote, clarifying: “* Once the experiment was concluded, the users were notified of the correct match percentage.” I applaud OKC for notifying its users about their participation in the experiment. But, as the debate over Facebook showed, there is absolutely no requirement that companies like OKC and FB notify users when they manipulate the algorithms that guide their behavior. We need public debate about these manipulations precisely to institutionalize this kind of practice, at a minimum.
Beyond that, I think the language here is especially telling, because OKC understands how it shapes behavior: “users sent more first messages when we said they were compatible. After all, that’s what the site teaches you to do.” Sites like FB and OKC train us to see the world through the algorithm, through the news feed, and to behave accordingly. Then they tweak those algorithms behind the scenes for various purposes: to increase time spent on the site, to improve click-through rates on ads, or even to manipulate emotions for science (!). Are we really just supposed to be OK with this?

Much as FB has a massive network lock-in effect from the size of its user base, such that opting out of FB comes with real costs to an individual’s social life, a handful of dating sites (eHarmony, OKC, match.com, ?) hold massive user bases of potential dates. Today’s internet is not (primarily) a free-for-all competitive market for attention with lots of small producers and individual consumers; it’s a handful of massive sites that serve as the platforms for the vast majority of interactions. OKC is better than most at revealing its own operations, and we should applaud them for that. But we still need to hold them, and everyone else, to some set of ethical standards beyond “manipulation is OK, because it’s pervasive.”
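To make concrete how invisible this kind of intervention is to the user, here is a minimal, purely illustrative sketch of the standard A/B-testing idiom behind an experiment like OKC’s third one. Every name and number here is my own assumption, not OKC’s actual code; only the 30%/90% figures come from their post. The pattern: deterministically hash each user pair into an experiment arm, then decide which match percentage to display.

```python
import hashlib

def bucket(pair_id: str, experiment: str, arms: int = 2) -> int:
    """Deterministically assign a user pair to an experiment arm by
    hashing the pair id together with the experiment name."""
    digest = hashlib.sha256(f"{experiment}:{pair_id}".encode()).hexdigest()
    return int(digest, 16) % arms

def displayed_match(pair_id: str, actual_match: int) -> int:
    """Treatment arm: show a misleadingly high score to genuinely bad
    matches; control arm sees the real score.  (Hypothetical code --
    only the 30/90 numbers are taken from OKC's post.)"""
    if actual_match == 30 and bucket(pair_id, "power-of-suggestion") == 1:
        return 90  # treatment: bad match displayed as excellent
    return actual_match  # control: truthful display

# The hash makes assignment stable: a given pair always lands in the
# same arm, so each user sees a consistent (if false) number.
assert displayed_match("alice:bob", 30) == displayed_match("alice:bob", 30)
```

The deterministic hashing is what makes the manipulation seamless from the inside: there is no visible flag, no notice, just a number on a profile page that may or may not be true, which is exactly the opacity the argument above is about.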
EDIT: There’s more coverage over at Vox and Kottke. Vox usefully notes that the first experiment was decidedly less troubling in part because everyone was notified in advance (it was part of a clear promotional campaign). Kottke theorizes about why the OKC experiments haven’t yet drawn as much outrage as FB, in spite of the fact that “Facebook may have hid stuff from you, but OK Cupid might have actually lied to you.” I like these explanations in particular:
3. We trust Facebook in a way we don’t trust OKC. Facebook is the safe baby internet, with our real friends and family sending us real messages. OKC is more internet than the internet, with creeps and jerks and catfishers with phony avatars. So Facebook messing with us feels like a bigger betrayal.
4. OKC’s matching algorithm may be at least as opaque as Facebook’s news feed, but it’s clearer about being an algorithm. Reportedly, 62 percent of Facebook users weren’t aware that Facebook’s news feed was filtered by an algorithm at all.