Reducing fatal shootings by police officers is an incredibly important policy problem. Researchers have been stymied in studying the problem by poor data and by limited resources for studying gun violence in general. The Monkey Cage today has a new post on a useful study that advances our understanding of police shootings. The authors, Jennings and Rubado, show in their research that police departments with policies mandating a report whenever an officer points a weapon at someone – not just when the officer fires it – have significantly lower rates of shootings.
The interpretation of this important finding runs into a classic and immensely frustrating tension in social science research. Is this correlation or causation? And here the authors want to have it both ways, from the title of the piece on down. In so doing, they engage in what I have come to call the “correlation-causation two-step,” a ritual social science dance where results are described in clearly causal terms in one breath and then immediately disclaimed as mere correlations in the next.
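To make the stakes of that distinction concrete, here is a toy simulation – entirely made up, not the authors' data – in which a hidden confounder (call it department "professionalism," an assumed variable for illustration) drives both adoption of a reporting policy and lower shooting rates. The policy has zero causal effect in this world, yet a naive comparison of means still shows policy departments with fewer shootings:

```python
import random

random.seed(0)

# Toy simulation (not the authors' data): a hidden confounder --
# overall department "professionalism" -- drives BOTH adoption of a
# draw-reporting policy AND lower shooting rates. The policy itself
# has zero causal effect here.
departments = []
for _ in range(10_000):
    professionalism = random.random()            # hidden confounder
    has_policy = random.random() < professionalism
    # Shooting rate depends only on professionalism, not the policy.
    shootings = 10 * (1 - professionalism) + random.gauss(0, 1)
    departments.append((has_policy, shootings))

with_policy = [s for p, s in departments if p]
without_policy = [s for p, s in departments if not p]

mean_with = sum(with_policy) / len(with_policy)
mean_without = sum(without_policy) / len(without_policy)

print(f"mean shootings, policy departments:    {mean_with:.2f}")
print(f"mean shootings, no-policy departments: {mean_without:.2f}")
# The gap is pure confounding: the simulated policy does nothing.
```

A reader who stopped at the comparison of means would conclude, wrongly, that the policy "works." That is exactly why the causal framing of a correlational result matters.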
Take this key section:
Almost all departments require officers to report on every time a firearm is discharged – but as of 2013, only about 46 percent of agencies required officers to report and document every time they draw their firearms.
And that policy substantially reduces civilian deaths. Agencies that require officers to report every time they draw their weapons have significantly lower rates of fatal shootings by police, as we reveal in a Public Administration Review article on our research.
At least 40 fewer people would have died in officer-involved shootings between 2000 and 2015, we estimate, if the 10 police departments in our study that had the highest death rates had required officers to report every time they drew a weapon.
While we found that the requirement for reports after gun draws was associated with lower death rates, we should note that this association does not demonstrate a causal relationship. We would need more precise data than we have to show that the policy actually causes a reduction in death rates.
Track the claims. I’ve added emphasis to the successive steps in the dance. The section begins with a descriptive claim, neither correlational nor causal (46 percent of agencies have stronger reporting requirements). It then proceeds to its first causal claim (“that policy substantially reduces civilian deaths”). Next, back to correlational (“Agencies that require… have significantly lower rates…”). Then causal again (“At least 40 fewer people would have died… if the 10 police departments in our study that had the highest death rates had required officers to report…”). Then, masterfully, the post explicitly states that the paper could not go beyond correlation (“this association does not demonstrate a causal relationship”) due to an absence of precise data. But this disclaimer is buried after repeated causal assertions, starting with the very title of the post: “Want to reduce fatal police shootings? This policy makes a big difference.” That’s about as clear a causal claim as one could make. (And I know that authors may not have complete control over the titles of their pieces, but in this case the title’s causal claim is reiterated multiple times, as in the section quoted above.)
This dance is far too common, especially when social scientists try to comment on hot-button public issues with inadequate data (for an old example with the same structure, see Regnerus on pornography and same-sex marriage attitudes). But we must be more careful and more honest about what we’re claiming throughout our arguments. A small disclaimer isn’t sufficient when the headline screams causation.
How do you solve a problem like the correlation-causation two-step? I’m not sure, but recognizing and naming it is a start.* Beyond that, I think it would help if we had a more sustained methodological discussion about quantitative description as a valuable and difficult task in its own right, one that is not simply a preliminary act in a causal argument. Quantitative description is important, powerful, and misunderstood. But I doubt that will be sufficient in a world hungry for answers and happy to read excess certainty into social science claims (at least some of the time).
* Is there a better name already out there for this phenomenon? It seems like something that would be in Gelman’s lexicon.