Timothy Carney wrote an article earlier this week decrying what he calls the “rampant abuse of data” by pollsters and the press this election season. He faults North Carolina’s hometown polling company, Public Policy Polling (PPP), among others, for asking “dumb polling questions” such as the popularity of the erstwhile Cincinnati Zoo gorilla Harambe; support for the Emancipation Proclamation; and support for bombing Agrabah, the fictional country in which the Disney film Aladdin is set.
While I agree with Carney that many of the interpretations of these questions are very problematic (and I should note that I have used PPP many times to field polls for my own research), I think he’s wrong that these are dumb questions and that the answers therefore do not constitute “data.” Quite the opposite: asking vague and difficult-to-answer questions is an important technique for assaying culture and, thereby, revealing contours of public opinion that cannot be observed using conventional polling.
My argument rests on two related assertions: that we live in an increasingly fragmented public, and that the fragments of the public (or the sub-publics, if you prefer) are culturally bound. I am not claiming that this fragmentation is polarizing, or that Americans are further divided than they’ve ever been before; only that publics are fragmented by their statuses as distinct media audiences and their relative unlikelihood of encountering or paying attention to ideas or arguments with which they disagree.
That these publics are culturally bound means that they share mental representations (styles, skills, habits, and beliefs) that go deeper and are more automatic than overt political preferences or policy stances.
Understanding these cultural subpublics, then, requires techniques that disturb the automaticity of poll-answering (plunge respondents into unsettled times, if you prefer). These are emphatically not methods of extracting pre-existing, pre-formed attitudes out of respondents’ heads: a theory of public opinion that has been defunct since at least Zaller’s 1992 book but nevertheless continues to provide the common-sense language pollsters, aggregators, and Carney all use to describe polling. Rather, they are methods of understanding the cultural interconnections among meanings that are latent in respondents’ publics.
Carney, for example, takes the media to task for misrepresenting a February YouGov/Economist poll that found nearly 20% of South Carolina Trump supporters opposed the Emancipation Proclamation, while only 5% of Rubio supporters did (NYT Upshot article here). That study and its interpretation have been critiqued elsewhere too (e.g., Snopes), and Carney is correct to say that this is not evidence that Trump’s supporters actually oppose the end of slavery. Some interpretations have been that the question is “actually” about presidential executive orders, which are unpopular with conservatives right now and with anti-establishment conservatives in particular; Carney prefers the interpretation that “it was a trick question, and less educated white respondents were more likely to be confused by it than were more educated white respondents.”
Both interpretations are valid and plausible. But neither invalidates asking the question. We learned something about the latent mental representations of South Carolina voters by forcing them to answer questions they weren’t expecting: in this case, about a series of executive orders from across American history. I’ve made a quick graph here dividing self-identified liberals, conservatives, and moderates on each of them:
The presence of significant variation — here by political identification, but also in the original by presidential primary candidate choice — suggests that there are underlying political-cultural differences in how respondents approach the relatively novel question of executive orders.
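A tabulation like the one behind that graph can be sketched as follows. The records here are hypothetical stand-ins (the YouGov/Economist microdata are not reproduced), and the variable names are my own; the point is only the shape of the crosstab of answers by ideological self-identification:

```python
from collections import Counter

# Hypothetical records of (ideology, answer to one executive-order item).
# These are illustrative placeholders, not the actual poll microdata.
responses = [
    ("liberal", "approve"), ("liberal", "approve"),
    ("conservative", "disapprove"), ("conservative", "approve"),
    ("moderate", "not sure"),
]

# Cross-tabulate: counts of each (ideology, answer) cell,
# then convert to within-group shares.
by_cell = Counter((ideology, answer) for ideology, answer in responses)
totals = Counter(ideology for ideology, _ in responses)
shares = {cell: count / totals[cell[0]] for cell, count in by_cell.items()}

for (ideology, answer), share in sorted(shares.items()):
    print(f"{ideology:>12} | {answer:<10} | {share:.0%}")
```

Variation across the rows of such a table, within a single question, is the kind of political-cultural difference the graph displays.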
Another example: for several years, when I’ve fielded polls (using PPP), I’ve included a frustrating closed-choice question: “If you had to choose, which of the following groups do you like the least: Nazis, the Ku Klux Klan, atheists, communists, or gay rights activists?” In various polls, the order of the responses varies. This question elicits lots of complaints from respondents, who rightly consider it unfair. But its very unfairness makes it useful in identifying otherwise-obscured patterns. Here is the combined result of that question, stratified by attitude (positive, negative, or neutral) toward the Tea Party, among North Carolina and Tennessee registered voters:
Respondents who were favorable toward the Tea Party (the red bars) showed little variation among the groups: they disliked Nazis most (25%), but the gap between that and the least-disliked group (communists, at 16%) was small. Among respondents with negative views of the Tea Party, by contrast, the gap between the most-disliked group (the Ku Klux Klan, at 49%) and the least-disliked (gay rights activists, at 5.4%) was far larger. (Disclaimer: these are raw data; take them as indicative but not solid.)
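The spread comparison above can be expressed as a quick arithmetic check, using only the percentages quoted in the text (the full distributions for each stratum are not reproduced here):

```python
def spread(most_disliked_pct: float, least_disliked_pct: float) -> float:
    """Gap, in percentage points, between the most- and least-disliked groups."""
    return most_disliked_pct - least_disliked_pct

# Tea Party favorables: Nazis most disliked (25%), communists least (16%).
tp_favorable = spread(25.0, 16.0)    # 9 points

# Tea Party unfavorables: KKK most disliked (49%), gay rights activists least (5.4%).
tp_unfavorable = spread(49.0, 5.4)   # 43.6 points

print(tp_favorable, tp_unfavorable)
```

The flat profile among favorables versus the steep profile among unfavorables is the pattern the stratified bars display.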
The point is not that Tea Party supporters think gay rights activists and Nazis are about the same; such an interpretation would be wildly inappropriate for various reasons, not the least of which is that we didn’t give them time to “think” at all! This is a low-information, low-attention, low-stakes assessment, which means the only appropriate interpretation is to examine the differences between types of respondents. Further, we should understand these responses, like all poll responses, as fully contextual: the aggregated result of many isolated, contrived situations created through the technical intervention of the poll. But these are not dumb questions. They reveal and document political-cultural patterns that simply wouldn’t be available otherwise.
Finally, Carney’s distinction between these questions and “data” reveals a theoretical misunderstanding of conventional polls as well. Conventional polls, too, are cultural assays, subject to the same kinds of unknown contextual inputs as the other questions. Their data do not exist outside their social contexts, nor absent their theoretical interpretations.
In both of these cases (the “dumb” questions and the “real data”), the imperative is to interpret the patterns subtly, thoughtfully, and fairly — not to dismiss the whole practice because media and partisan accounts are simplistic.