the correlation-causation two-step, police shootings edition

Reducing fatal shootings by police officers is an incredibly important policy problem. Researchers have been stymied in studying it by poor data and by limited resources for studying gun violence in general. The Monkey Cage today has a new post on a useful study that advances our understanding of police shootings. The authors, Jennings and Rubado, show in their research that police departments with policies that mandate a report whenever an officer points a weapon at someone – not just when the officer fires it – have significantly lower rates of fatal shootings.

The interpretation of this important finding runs into a classic and immensely frustrating tension in social science research. Is this correlation or causation? And here the authors want to have it both ways, from the title of the piece on down. In so doing, they engage in what I have come to call the “correlation-causation two-step,” a ritual social science dance where results are described in clearly causal terms in one breath and then immediately disclaimed as mere correlations in the next.

Take this key section:

Almost all departments require officers to report on every time a firearm is discharged – but as of 2013, only about 46 percent of agencies required officers to report and document every time they draw their firearms.

And that policy substantially reduces civilian deaths. Agencies that require officers to report every time they draw their weapons have significantly lower rates of fatal shootings by police, as we reveal in a Public Administration Review article on our research.

At least 40 fewer people would have died in officer-involved shootings between 2000 and 2015, we estimate, if the 10 police departments in our study that had the highest death rates had required officers to report every time they drew a weapon.

While we found that the requirement for reports after gun draws were associated with lower death rates, we should note that this association does not demonstrate a causal relationship. We would need more precise data than we have to show that the policy actually causes a reduction in death rates.

Track the claims through the successive steps of the dance. The section begins with a descriptive claim, neither correlational nor causal (46% of agencies have stronger reporting requirements). It then proceeds to its first causal claim (“that policy substantially reduces civilian deaths”). Next, back to correlational (“Agencies that require… have significantly lower rates…”). Then causal again (“At least 40 fewer people would have died… if the 10 police departments in our study that had the highest death rates had required officers to report…”). Then, masterfully, the post explicitly states that the paper could not go beyond correlation (“this association does not demonstrate a causal relationship”) due to an absence of precise data. But this disclaimer is buried beneath repeated causal assertions, starting with the very title of the post: “Want to reduce fatal police shootings? This policy makes a big difference.” That’s about as clear a causal claim as one could make. (And I know that authors may not have complete control over the titles of their pieces, but in this case the title’s causal claim is reiterated multiple times, as in the section quoted above.)

This dance is far too common, especially when social scientists try to comment on hot-button public issues with inadequate data (for an old example with the same structure, see Regnerus on pornography and same-sex marriage attitudes). But we must be more careful and more honest about what we’re claiming throughout our arguments. A small disclaimer isn’t sufficient when the headline screams causation.

How do you solve a problem like the correlation-causation two-step? I’m not sure, but recognizing and naming it is a start.* Beyond that, I think it would help if we had a more sustained methodological discussion about quantitative description as a valuable and difficult task in its own right, one that is not simply a preliminary act in a causal argument. Quantitative description is important, powerful, and misunderstood. But I doubt that will be sufficient in a world hungry for answers and happy to read excess certainty into social science claims (at least some of the time).

* Is there a better name already out there for this phenomenon? It seems like something that would be in Gelman’s lexicon.

Author: Dan Hirschman

I am a sociologist interested in the use of numbers in organizations, markets, and policy. For more info, see here.

10 thoughts on “the correlation-causation two-step, police shootings edition”

  1. Yes, good post. I think part of the problem is that avoiding the causal language may require less vivid-sounding language and lots of equivocation. So let’s try with the example in the post. Departments that required officers to report every time their guns were drawn had lower rates of officers fatally shooting civilians. It is possible that this is because knowing you will have to write a report reduces the likelihood officers draw their guns, but it could also be true that the kinds of departments that shoot fewer people are the same kinds of departments that mandate reports about drawing guns. A longitudinal study could show whether a change in policy was associated with a reduction in shootings within an agency, although there would still be the problem of “common causes” in which the same thing may have led to both fewer shootings and mandated reports. So we do not know for sure that simply making people write more reports will reduce shootings, but the mechanism of requiring officers to be accountable for drawing their weapons certainly seems plausible as contributing to the kind of shift in organizational culture that would reduce the rate at which officers shoot civilians. (A rough sketch of what such a within-agency comparison might look like appears after the comments.)

    1. Fantastic! That’s not too equivocal or boring, and it also has the virtue of being precise and laying out next steps for the research and policy agenda by specifying *how* the correlation might help identify causation, rather than just providing a blanket disclaimer.

  2. I will give the authors a break on the headline, since headlines are often written by someone other than the author, often without checking with the author first (in fact, at some publications, the headline is written by a different editor than the one who edited the piece). I hold them responsible for everything else, though.

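To make the within-agency comparison described in the first comment concrete, here is a minimal, purely illustrative sketch in Python using pandas and statsmodels. It is not the analysis Jennings and Rubado ran; every agency, year, adoption date, and shooting count below is simulated, and the assumed effect of the policy is baked into the fake data. The point is only to show the shape of a two-way fixed-effects design: within-agency changes in the reporting policy are compared with within-agency changes in fatal shootings, while year effects absorb shocks common to all departments.

    # Hypothetical sketch only: simulated data, not the Jennings and Rubado analysis.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)

    # Fake panel: 50 agencies observed 2000-2015; some adopt a
    # "report every gun draw" policy in a randomly chosen year, some never do.
    years = list(range(2000, 2016))
    rows = []
    for i in range(50):
        adopt_year = rng.choice(years + [9999])   # 9999 = never adopts
        baseline = rng.gamma(2.0, 1.0)            # agency-level propensity to shoot
        for y in years:
            policy = int(y >= adopt_year)
            # Assume, purely for illustration, that the policy lowers the rate.
            lam = max(baseline + 0.02 * (y - 2000) - 0.5 * policy, 0.05)
            rows.append({"agency": f"agency_{i}", "year": y,
                         "policy": policy, "shootings": rng.poisson(lam)})
    df = pd.DataFrame(rows)

    # Two-way fixed effects: agency dummies absorb stable differences between
    # departments (the "kinds of departments" worry), year dummies absorb
    # nationwide trends; the policy coefficient is identified from within-agency
    # changes. Standard errors are clustered by agency.
    fe = smf.ols("shootings ~ policy + C(agency) + C(year)", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["agency"]})
    print(fe.params["policy"], fe.bse["policy"])

Even in this design, the “common causes” worry from the comment remains: anything that changes inside an agency at the same time as the reporting rule, such as a new chief or new use-of-force training, could still drive the association.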