There’s a lot of social science triumphalism about the accuracy of Nate Silver’s predictions in the election. I’m certainly happy. But does sociology as a discipline deserve to be gloating? From where I’m sitting, Nate Silver contradicts at least a couple of things many sociology methods teachers have been telling their students for a long time.
1. Silver’s projections were based on a meta-analysis of state polls that did not come close to what many sociologists have regarded as basic standards for publishable data. It’s not like Silver did anything (too) magical with his own analysis: reweighted state polls proved to be quite accurate. Sociologists have imagined that all kinds of inferential problems arise from nonresponse that cannot be resolved by reweighting. In this view, Silver’s analysis should have been garbage in, garbage out.
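To make the mechanics concrete, here is a minimal sketch of sample-size-weighted aggregation for a single state, with made-up numbers; it is emphatically not Silver’s actual model, which also adjusts for house effects, recency, and state fundamentals:

```python
import math

# Hypothetical state polls: margin is Obama minus Romney, in points.
polls = [
    {"margin": 2.0, "n": 800},
    {"margin": 4.5, "n": 1200},
    {"margin": 1.0, "n": 600},
]

total_n = sum(p["n"] for p in polls)
avg_margin = sum(p["margin"] * p["n"] for p in polls) / total_n

# Rough standard error of the aggregated margin, treating the pooled polls
# as one big simple random sample (this understates real uncertainty: no
# between-poll variance, no systematic error term).
p_hat = 0.5 + avg_margin / 200            # implied two-party share
se_margin = 200 * math.sqrt(p_hat * (1 - p_hat) / total_n)

# Win probability under a normal approximation.
z = avg_margin / se_margin
win_prob = 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(f"margin {avg_margin:+.1f}, se {se_margin:.1f}, P(win) {win_prob:.2f}")
```

Even this bare-bones version spits out a point estimate and a win probability. The controversial part, from a sociology-methods standpoint, is not the arithmetic; it’s that the inputs are polls with single-digit response rates.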
To see what I mean, let’s look at something one sociologist said this summer on this very issue (but in a very different political context):
“Half-ass data like that from Pew may be novel enough on some questions to justify publication in a third tier journal, but you can forget sending shit like that to top 10 journals if I have anything to say about it. And, even “high quality” BS data like Pew are completely unacceptable when the question makes inferences about population characteristics—Using Pew or Gallup to say how many people identify with a religious group, for example, or what proportion of the population supports same sex marriage. Who fucking cares what the Pew data show? Not me. And the Pew data are now considered excellent! Back when I was an undergraduate, Doug Eckberg would have failed me in Survey Research Methods if I only garnered a 20% response rate for my interviews…”
To be clear: estimating the percentage of people who are going to vote for Obama is precisely “making inferences about population characteristics.” Indeed, it’s a more difficult task than estimating the proportion of the population who supports gay marriage, because it isn’t just a question of estimating population sentiment, but of laying that over a model of which people in the population will actually turn out and vote. The position “Who fucking cares what the Pew data show?” because of low response rates implies the pre-election position “Who fucking cares what Nate Silver says?”
Granted, the idea that response rates ought to be important is both intuitive and plausible. And yet it still requires evidence. The surprise is that, despite all the increasing impediments to getting people to participate in surveys, brief-field-period-low-response-rate-with-reweighting polling still apparently works.
Sociology methods teachers have long offered rules of thumb to the effect that surveys ought to have an X% response rate, or else they are crap. Frankly, we need to shut up about this and reconsider what the current evidence says.
2. Many sociologists have been down on “objectivity” for a long time. The idea is that one cannot separate one’s values from one’s analysis, and that even supposing one might be able to do so is folly. Silver has said he votes mostly Democratic and that he supports Obama, yet also that these beliefs do not influence his forecasting. Critics said that his forecasts were obviously being influenced by (or being devised in the service of) his politics. Seems to me that, from a consistency standpoint, many sociologists should have found congenial the notion that these critics were right.
Instead, the error in Silver’s forecast was largely in not being bullish enough about the Democrats’ prospects. What will be an interesting test of Silver’s commitment to scientific principles — and those of all of us who are lauding him right now — is what happens when Silver is running numbers for an election that is not going Democrats’ way. Will his analyses be overoptimistic to avoid alienating his current fans? If not, will his current fans bemoan how he appears to have turned traitor?
I think Nate Silver’s advantage over most survey-based sociological research is that he is aiming for prediction of a quite simple outcome, not understanding, and he has years of reasonably similar data to compare with. In other words, state polls may have gotten a little bit worse since 2008 or 2004, but not much, and we have a very good sense of how well state polls predicted outcomes in those elections. So our collective confidence in the data comes less from its epistemic value (high response rates, collected by a non-partisan academic agency, appropriate question wording, etc.) and more from a pragmatist reading that the same or similar polling firms used the same or similar methods in similar recent elections, and the data more or less mapped onto the outcome of interest. Plus, Silver has the advantage of being able to discount relatively bad polling firms or outlier polls – something we usually don’t have, since most survey research relies on one or a small set of surveys. Sociological methods may not be well-suited to predicting elections with election polls, but that doesn’t necessarily mean we should change our standards for the questions we do ask (though it doesn’t mean we shouldn’t, either).
Also, one thought in terms of objectivity – there were a lot of forces holding Nate Silver in check. There were prediction markets, competing polling models, and simple polling averages or medians. So, assuming Silver wanted to produce as objective a forecast as possible, he had lots of tools at his disposal to compare his output to. And, conversely, if he wanted to skew things a bit, his room for maneuver was quite small. (And that’s leaving aside the debates about what objectivity is, and whether or not it’s just one thing or various sorts of things in different times and places, cf. “strong” vs. “weak”, or “mechanical” vs. “disciplinary” and so on.)
As you said, Silver’s real advantage came from pooling multiple small samples and using meta-analysis (and simulations) to create a more reliable prediction. If sociologists had multiple Pew-type samples that were taken at roughly the same time and that were repeated often enough to compare errors over time, we might be able to make more confident estimates about the predictability of our results. The problem most sociologists face in generalizing is that we try to make broad claims based on very small samples that will never be replicated.
And I would think that Silver’s objectivity has already been put to the test. He was accurate in predicting the 2010 Congressional election slaughter.
Fair enough, but the traditional wisdom about nonresponse bias is that it cannot be fixed by increased sample size, whether in a single sample or multiple samples that all have bad response rates. I don’t think it’s unreasonable to imagine that a world with < 10% (or often < 5%, or even < 3%) response rates is one where the results are all over the place and unable to give any leverage on what will actually happen in the election. And yet the evidence is that we do not live in that world.
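To put a toy version of that traditional worry in code, suppose (my assumption, purely for illustration) that Obama supporters are modestly more willing to answer the phone. Then the bias is baked in at any sample size:

```python
import random

random.seed(0)
true_support = 0.52

def poll(n_dialed, differential=1.3):
    """Simulate a poll where supporting Obama raises response propensity."""
    responses = []
    for _ in range(n_dialed):
        supporter = random.random() < true_support
        rate = 0.05 * (differential if supporter else 1.0)  # ~5-6% response
        if random.random() < rate:
            responses.append(supporter)
    return sum(responses) / len(responses)

for n in (100_000, 1_000_000):
    # Both estimates land near 0.585 (the analytic value), not 0.52:
    # more dials, same bias.
    print(n, round(poll(n), 3))
```

That is exactly the textbook worry. The empirical surprise is that, for the vote at least, whatever differential response existed was apparently small enough, or well enough handled by the pollsters’ weighting, that the aggregated polls landed very close to the result.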
The advantages that Silver has are well put, but he also faces a bigger challenge than we do in that he is trying for a much more precise estimate of a population parameter than most of our applications require. I mean, take the question of what % of Americans support gay marriage. Is there really any substantive import to the second significant digit of that answer? Whereas with polling, if the aggregation of polls were systematically off in some unknown direction by 5 percentage points on top of MoE, its value would be highly diminished.
Brayden — Good point about Silver being right about the bad 2010 outcome. He's so much more of a star now, though, I think. As you know, there is this idea — again, a perfectly reasonable hypothesis — that when you have a project that involves a large number of small methodological decisions, you can't help but have your politics and ego and audience concerns and whatever else mess with how you make those decisions. I've always suspected that this argument is especially popular among people who aren't much interested in actually trying themselves.
That said, I also wouldn't be surprised if Silver cashes out pretty soon. Given his past as a poker player, he has to be unsettled by the fact that he makes probabilistic predictions that the public turns into deterministic ones, and so really he's just a bad beat away from suddenly being widely regarded as an idiot whose methods no longer work.
There’s also non-independence of the prediction and voting for Silver. It’s not totally inconceivable to me that widely publicized poll predictions themselves influence outcomes. Call it what you want — performativity? — but I would be curious about work looking at how knowledge of polls makes particular behaviors more likely. In a case with highly public poll predictions, the predictions could help produce the very outcomes they forecast.
Jeremy is raising an important topic: given our priors about survey research, how can such terrible samples yield — even in meta-analyses — good results? The first time I confronted this was in reading the appendix to Putnam’s Bowling Alone. Much of his data are based, effectively, on a response rate of under 5%. Yet he shows results that line up well with the GSS. No one I’ve asked to explain this to me has really done so.
I was inspired to look up nonresponse and survey bias. There’s some interesting work in Public Opinion Quarterly on this. One piece does a meta-analysis of the impact of nonresponse rates on nonresponse bias:
http://poq.oxfordjournals.org/content/72/2/167.short
It finds that you certainly can observe bias, but response rate in and of itself is not generally predictive of bias. Design elements are crucial.
Another more recent piece builds upon this work.
http://poq.oxfordjournals.org/content/early/2012/09/11/poq.nfs032.full.pdf+html
It’s highly speculative, but suggests a series of alternative indicators to detect bias rather than just relying upon response rate as a proxy for survey quality.
A couple of years ago I saw a provocative talk by Jon Krosnick of Stanford political science, which argued that the conventional wisdom (in sociology and in other disciplines like political science) that low response rates necessarily mean a problem for surveys is totally wrong. His evidence was based on comparisons of surveys with different response rates to known population targets. He puts more emphasis on the demographic representativeness of the sample and on the method of data collection (probability sampling being better). For instance see this paper and citations in it: http://comm.stanford.edu/faculty/krosnick/docs/2009/2009_poq_chang.pdf
I changed how I taught on this subject after this talk. I would be curious if there is other work on this topic.
(Right. I should be clear that Krosnick’s work has likewise been influential for my own thinking about these issues. Note that if the issue is the demographic representativeness of the sample, that’s of course diagnostic but can also in principle be addressed by reweighting, if one has a gold-standard population distribution. What I’m particularly interested in is the question of what kinds of lurking differences remain after demographic imbalances are taken into account, and whether available evidence from polls and other public opinion data suggests not much.)
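For concreteness, here is a bare-bones post-stratification sketch that reweights a hypothetical sample to known population shares on a single demographic margin. Real adjustments rake over many margins, but the logic is the same:

```python
# Hypothetical population shares and an (over-)simplified sample of eight
# respondents; group labels and numbers are invented for illustration.
population_share = {"18-44": 0.45, "45+": 0.55}
sample = [
    # (age_group, supports_obama)
    ("45+", True), ("45+", False), ("45+", False), ("45+", True),
    ("45+", False), ("45+", True), ("18-44", True), ("18-44", True),
]

sample_share = {g: sum(1 for a, _ in sample if a == g) / len(sample)
                for g in population_share}
weights = {g: population_share[g] / sample_share[g] for g in population_share}

raw = sum(s for _, s in sample) / len(sample)
weighted = (sum(weights[a] * s for a, s in sample)
            / sum(weights[a] for a, _ in sample))
print(f"raw {raw:.2f}, reweighted {weighted:.2f}")
# Reweighting moves the estimate only insofar as the weighting variables
# explain who responded; lurking differences *within* cells are exactly
# what this cannot fix.
```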
I’m a little stumped by the contention that Silver is simply countering the problems of non-response with increased sample size (or even number of samples). That’s not it at all. He’s increased the *diversity* of sample approaches. Yes, that also happens to coincide with size and number of samples. But the strength of his model isn’t aggregating over “multiple Pews” but over multiple Pews + Rasmussens + etc.s, which each have *different* sampling biases. This is the point in Scott Page’s work on model aggregation, complexity and performance. And, perhaps more importantly to the discussion at hand, it’s not at all inconsistent with the critique of small sample sizes *in one shot samples*. Or have I completely missed your point, Jeremy?
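For anyone who wants the Page point in miniature, here is a quick numerical check of the identity usually called the diversity prediction theorem, using made-up poll margins:

```python
truth = 2.3                         # actual margin
polls = [4.0, 1.5, 0.0, 3.5, 2.0]   # houses with different biases (invented)

avg = sum(polls) / len(polls)
crowd_error = (avg - truth) ** 2
avg_error = sum((p - truth) ** 2 for p in polls) / len(polls)
diversity = sum((p - avg) ** 2 for p in polls) / len(polls)

# The two printed numbers are equal (up to float noise), by algebra:
# squared error of the average = average squared error - diversity.
print(round(crowd_error, 6), round(avg_error - diversity, 6))
```

So the average beats the typical poll exactly to the extent the polls disagree with one another, which is why a mix of houses with different biases can do better than “multiple Pews.”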
1. Agreed re: increased sample size. As we all know, increasing sample size decreases random error but does nothing to improve problems caused by systematic error. Nonresponse bias is systematic error.
2. Agreed re: multiple samples per se. No particular reason why “multiple Pews” would be better than “one giant Pew.”
3. The issue is what kind of leverage Pew + Rasmussen + whatever should have against non-response. It’s one thing to adjust for “house effects”, and it’s another to be able to say what the relationship is between the average “house effect” across all sampling outfits and the true parameter. My understanding is that Silver’s model assumed the average house effect was zero and that worked. Even if he had something better, he still had something he went with that worked. (There’s a toy sketch of what I mean at the end of this comment.)
*All* these polls have crap response rates by standards of sociology textbook advice. The vast majority of sampled people were either not available or not willing to participate. The traditional argument for dismissing low response rate surveys is that the people who participate are weird in unknown ways from the population as a whole. A subproposition of the traditional argument is that they are weird in ways that are distinct from easily measured characteristics, as otherwise you could reweight your way around the problem.
If only 5% of sampled folks did your survey, the logic goes, there must be something unusual about them. You certainly can’t extrapolate findings from those 5% to everyone.
And yet, you can. Various hypotheticals like “Maybe this year Republicans are too busy creating jobs to participate in polls,” if true and actually substantial, would screw up all the survey houses and thus Silver’s estimates. Silver himself fretted about such possibilities. But, lo, none came to pass.
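Here is a toy version of the house-effect logic I have in mind; my reconstruction with made-up numbers, not Silver’s actual procedure:

```python
polls = [  # (firm, margin) -- invented numbers
    ("FirmA", 1.0), ("FirmA", 0.5),
    ("FirmB", 4.0), ("FirmB", 3.5),
    ("FirmC", 2.5), ("FirmC", 3.0),
]

overall = sum(m for _, m in polls) / len(polls)
firms = {f for f, _ in polls}
house_effect = {
    f: (sum(m for g, m in polls if g == f)
        / sum(1 for g, _ in polls if g == f)) - overall
    for f in firms
}

adjusted = [m - house_effect[f] for f, m in polls]
estimate = sum(adjusted) / len(adjusted)
print(house_effect, round(estimate, 2))
# The adjusted estimate equals the unadjusted average: subtracting effects
# defined relative to the mean cannot move the mean. The adjustment changes
# the answer only when houses are weighted unequally or dropped, and none
# of it tells you whether the *average* house is unbiased, which is the
# zero-mean assumption doing the real work.
```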
I guess the question I’m in part posing is how much we should assume the “5%” in each sample reflects the same systematic biases. And given the variety among their results, it would appear they reflected quite different unobserved, but systematic, biases. So, no, you can’t extrapolate from those 5% to everyone, but perhaps you can across the aggregation of them, if the many 5%s start to approximate something more representative – or, perhaps more accurately, something with fuller coverage of the potentially influential systematic biases among them. So what the aggregation does is dampen the influence of any one (or any small set) of those biases on the predictive model. Right?
This leads to another reading of the whole Silver phenomenon that I’ve been leaning towards. Not necessarily that it says a ton for us as social scientists about the sample size question. But that maybe it does give us an even stronger kick in the backside towards replication, and meta-analyses across replications. But perhaps I’m too optimistic in thinking we’ll heed that call.
Now consider this — my colleagues at Pew just released a very interesting report evaluating public opinion data collected via Google Consumer Surveys: http://www.people-press.org/2012/11/07/a-comparison-of-results-from-surveys-by-the-pew-research-center-and-google-consumer-surveys/
Jeremy- I almost completely agree. I do, however, have one quibble: election polls generalize to the population of voters, not the population as a whole. Voters, sociologically speaking, tend to be a relatively easy group of people to contact. The problems alleviated by the fact that they are easy to reach could easily be outweighed by the fact that you must measure and weight three things in election polls: candidate support, intention to vote, and probability of voting (as you mention).
Again, I almost entirely agree, but I think that the population to which you wish to generalize is an important question that could affect the validity of election polls compared to standard sociological research.
Fair point. A minor counterpoint would be to wonder if a large part of the people in the “nonvoter”/”tough to survey” set are already lost by the time one gets to the textbook 70% response rate, as opposed to the folks one loses in the freefall to 5%.
Oh please, let’s not get all hot about this shit just because Obama won big, as anyone should have known. The assumptions used to make Silver’s estimates are based on the last election cycle’s population parameters. How did those areas go last election cycle? Well, once you start with that, the bullshit polls are simply noise, or are they signals, eh? Nah, they’re noise, right?! Bayesian. That’s statistician speak for “I pulled this out of my ass, but it’s better than what you can come up with for now.”
Pimp that shit data all you want, this election and Silver’s projections are not support for your preferred interpretation of how non-random polls might add up to estimates of population parameters.
Spout profanities all you want. The brutal fact is that pre-election state poll averages (in your words, “shit data”) predicted actual election results substantially better than the 2008 results did.
Or take the national popular vote. 2008 population parameter: Obama +7. Shit data average from this Monday: Obama +1.6. Actual result: Obama +2.3. That’s an error of 0.7 points for the despised poll average, versus 4.7 points if you just carry the 2008 result forward.
It’s completely loony to believe polls are “bullshit” that add no value to preceding election results. It’s like being a methodological birther. But, to your credit, it’s a lunacy consistent with a lot of commonplace sociological thinking about low response rates and/or nonprobability sampling.
How’s that non-probability survey project going? If you can convince lots of people that everything we know about probability theory is wrong, maybe you can get it in ASR!
First, there is no such thing as a “non-probability survey.” Second, presuming you mean “non-probability sample,” you do not understand what a non-probability sample is. Various things you appear to believe are non-probability samples are, in fact, based on probability sampling methods, albeit with poor response rates or other issues. Third, if you have an actual argument regarding probability theory, I’d be charmed to hear it. My belief is that you must in actuality be a more lucid guy than you come across as in forums like this. Fourth, I’m not sure what project you are referring to. You may have me confused with Michael Rosenfeld, whose paper with Reuben Thomas using data from Knowledge Networks (GfK) was published in ASR last year. Or, you could be referring to TESS, which involves my fielding survey experiments on behalf of other investigators. Finally, I’m making arguments here that involve reasoning and evidence; trying to twist everything into a snarky personal accusation only underscores your seeming inability to do the same.
@Sherkat: It is disheartening to read your posts and blog entry about contemporary public opinion polls. You argue that the findings from these polls are bullshit because of the quality of the associated data collection process, and you explain that your professor would have failed you in undergraduate research methods if you obtained a response rate of 20%. However, you were an undergraduate student in the late 1980s and a doctoral student in the early 1990s. Thus, your implicit assumptions are that (1) the survey research environment (and people’s willingness to take surveys) has not changed meaningfully in the past 30 years, and (2) the lessons you learned then stand the test of time against conflicting public opinion research that has emerged during the past 2 decades. This research demonstrates that response rates have declined substantially across modes of data collection and that lower response rates do not necessarily indicate bias in the resultant data (see references below). Additionally, and despite low response rates, data from Pew and Gallup and other comparable polling organizations (NBC) are now routinely used in studies published in top political science and communications journals (e.g., Public Opinion Quarterly) to assess trends in public attitudes toward various topics. And even those studies using the coveted Knowledge Networks (KN) data actually have very low response rates. Specifically, while KN generally obtains a high response rate from the original RDD recruitment, by the time the survey is completed the overall response rate is almost always below 20%. For example, Rosenfeld and Thomas’s (2012) recent ASR article, which used KN data, reported an overall response rate of 13% (see p. 528).
What is perhaps most important is that you serve on the editorial boards of many top sociology journals. Thus your hard-line, and arguably uninformed, views about what constitutes quality survey data serve to restrict high science to one of four types of researchers: (1) those researchers who happen to study topics (e.g., religious affiliation) that are routinely tapped in large surveys like the GSS and ANES; (2) senior researchers who are able to acquire significant grants and thus can afford to field a survey over several weeks and to employ prepaid incentives, multiple modes of data collection, etc.; (3) qualitative researchers who can analyze data from small convenience samples and yet slip past your keen eye for “crap” data; (4) those researchers who don’t use the AAPOR calculation for response rates and thus can report a response rate of 50+%. Because you are a reviewer on so many papers at top journals, your ideological position on survey research hurts the field. Some of the greatest sociological insights come from analyses of survey data that would apparently not meet your methodological standards – e.g., Robert Sampson’s work on criminal offending over the life course using the Glueck data – and thus, had you been a reviewer, might never have been published.
Curtin, Richard, Stanley Presser, and Eleanor Singer. 2000. The effects of telephone response rate changes on the index of consumer sentiment. Public Opinion Quarterly 64: 413-428.
Curtin, Richard, Stanley Presser, and Eleanor Singer. 2005. Changes in telephone survey nonresponse over the past quarter century. Public Opinion Quarterly 69: 87-89.
Groves, Robert M., Floyd J. Fowler Jr., Mick P. Couper, James M. Lepkowski, Eleanor Singer, and Roger Tourangeau. 2009. Survey Methodology, 2nd edition. Hoboken, NJ: Wiley.
Keeter, Scott, Carolyn Miller, Andrew Kohut, Robert M. Groves, and Stanley Presser. 2000. Consequences of reducing nonresponse in a large national telephone survey. Public Opinion Quarterly 64: 125-148.
Keeter, Scott, Courtney Kennedy, Michael Dimock, Jonathan Best, and Peyton Craighill. 2006. Gauging the impact of growing nonresponse on estimates from a national RDD telephone survey. Public Opinion Quarterly 70: 759-779.
Pew Research Center. 2012. Assessing the Representativeness of Public Opinion Surveys. Washington, DC: Pew Research Center.
The response rate only matters to the extent that there’s a response bias related to what we’re trying to measure; if there’s really no response bias, a .001% rate would be just fine. But we so rarely have any clue what this bias is. This time it didn’t matter for predicting the vote (or it aggregated out across different samples), but there was surely response bias consistent across these polls related to a number of other variables: these samples would be terrible for measuring levels of paranoia, for instance, or boredom, but also for less obviously susceptible things. We really need a good idea of the typical response bias on every specific measure we want to use from a low RR study. And even then, we would be susceptible to shifts over time; we can’t dismiss the possibility that some types of voters will start cooperating less/more in 2014.
A different aspect of all this that has caused me to examine one of my own biases is the success of YouGov and Ipsos’ experiment with internet polling. Initial analysis puts them near the top in terms of accuracy. As with the low response rate issue, the question is certainly what types of questions internet polling will work for, but I must admit that throughout the election season I had been assuming this would be a failed experiment. Now I’m hoping to see more work on the possibilities of internet-based survey approaches for sociology.
You beat me to it – this was one of a few random musings about the election I was going to post about, but Jeremy did it better and faster.
The fivethirtyeight model isn’t just poll aggregation – it also takes into account other inputs, such as donations and the economic environment.
More importantly, polling about vote intention is the low-hanging fruit of public opinion in general, since it’s concrete, predictive, and well understood by the people being polled. It’s also, for all practical purposes, just a two-stage binary decision: will you vote, and if so, for which of the two major party candidates? It’s not clear that Silver’s success in aggregating these answers should be expected to translate into similar success in other areas of public opinion (e.g., approval, preference for policies, views of various groups, etc.) whose object is more complicated than vote intention.
All that said, I think a particularly urgent task for people with a more, er… constructive attitude than Sherkat is to estimate nonresponse bias, as opposed to just the rate, in these sorts of surveys as applied to some of these more complicated outcomes. I have a bunch of data to do that and hope to get to the analysis in the not-too-distant future.
I’m sure there are people out there who have actually gone over these numbers in fine detail, but something I don’t know is whether or how much Silver’s predictions were actually helped by the non-poll-aggregation parts of his model. My understanding is that, in Montana, he predicted that Tester would probably lose despite a poll average that favored Tester.
I agree with the main thrust: this is easier to do with low response rates because it’s simple, well-understood and repeated. I am also curious about whether the binary response, the near-even split, and the univariate analysis help. That means even a crazy lazy Cheetos-eater can only throw you off a little.
My intuition about this has been that bias would be lower for estimating bivariate/multivariate parameters than univariate parameters. In other words, that regression coefficients would be more robust than simple estimates of proportions, so that nailing estimates of proportions is comparatively even more impressive. At this late-night moment, however, I am not sure whether this “intuition” reflects dowsing-rod-like sentimentality, or something I could formulate an actual argument behind. Opinions welcome.
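Here is one crude way to poke at it, a simulation under assumptions that are entirely invented (a linear model where nonresponse depends only on x, which is the favorable case for the intuition):

```python
import random

random.seed(1)
xs, ys = [], []
for _ in range(200_000):
    x = random.gauss(0, 1)
    y = 1.0 + 2.0 * x + random.gauss(0, 1)   # true mean 1.0, true slope 2.0
    response_rate = 0.10 if x > 0 else 0.03  # nonresponse depends on x only
    if random.random() < response_rate:
        xs.append(x)
        ys.append(y)

mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
         / sum((a - mx) ** 2 for a in xs))
print(f"mean of y: {my:.2f} (truth 1.0), slope: {slope:.2f} (truth 2.0)")
# With ~13,000 respondents, the mean comes out around 1.86 (badly biased),
# while the slope stays right around 2.
```

If response depends on y itself, given x, the slope takes a hit too, so this is only the favorable case rather than a general argument.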
There are a lot of issues floating around here but I want to come back to the initial one Jeremy raised: is Nate Silver’s win sociology’s loss? My answer is definitely no. I think it means that survey research — one of our signature methods — is in better shape than we thought it was. One of the motives for the push toward big data, for example, has been that surveys are dying as response rates go down. I’m all for big data, too, but I’m heartened to see that the old workhorse still has some legs!
I forgot to say: sure it’s a loss for a particular kind of conventional wisdom and misplaced skepticism characteristic of the discipline but not for the discipline itself. It’s actually great for the discipline.
EXACTLY, I should have brought out that aspect more. I think if one holds to traditional views, the conclusion has to be that social surveys are getting worse — a quite unusual problem in scientific instrumentation. As a prominent survey methodologist said to me a while back, one could even describe the integrity of surveys as “deteriorating by the day.” And yet, here we are, triumphant about some uncannily accurate forecasting based on survey research that, from a response rate standpoint, should exemplify the problem.
The future of population survey research, I think, is still very much uncertain. For example, will we still be using population surveys to estimate the unemployment rate 20 years from now? 10 years from now? If I had to wager, it would be that if so, surveys will be part of a complicated methodological cyborg that involves parallel use of administrative records.
I think you’re right about matters such as unemployment and voting. I continue to be skeptical that cultural and attitude constructs follow the same rules — that is, that questions like “who are you most like” or “which of the following groups do you like least” can be assumed to follow similar patterns to “for whom will you vote next week” for the purposes of sampling.
(Previewing tomorrow’s post, spoiler alert): People who watch Top Chef vote for Obama. The cross-state pattern is too strong to be a coincidence. Also, Obama voters Google “spliff” and “pass the Dutchie.”
People who Google stuff about home schooling, “clean funny jokes” and “founding fathers quotes” vote Republican.
That’s just based on a little free data. If we had access to the shopping masterfile we’d be completely set. One good survey – make it mandatory, big sample – to build a model tying political views to personal habits, and we could solve the problem.
Voting itself is too susceptible to flukes like storms and robocalls telling people not to bother. We sociologists can figure out which candidate “the public” really wants and have them declared the winner. It would be more democratic.
This is the heart of the matter, and thanks for bringing the discussion back to this.
It is very difficult to know how much response rate matters when we know so little about the underlying microdata. Do some pollsters just do variants on quota sampling by demographic characteristics, or are some of them using county-level marginals, etc., to develop good non-response weights? I just have no idea.
And how does one then compare one-shot samples to things like the Gallup panels? If those panels have no attrition, then they are a weird group of respondents! If they have attrition and are freshened, how do they reweight the samples? Very curious.
Perhaps the most interesting thing, I think, is the likely voter models and whether they are the primary source of house effects. It seems like what Silver is doing is mostly averaging over likely voter models by systematically downweighting polls with large house effects, which I assume (based on no evidence!) are largely produced by likely voter modeling rather than sampling differences.
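To make “likely voter model” concrete, here is a bare-bones caricature (my construction, not any firm’s): score each respondent’s turnout probability from a couple of screen items and weight candidate support by that score:

```python
respondents = [
    # (supports_obama, voted_last_time, says_certain_to_vote) -- invented
    (True,  True,  True),
    (False, True,  True),
    (True,  False, False),
    (True,  False, True),
    (False, True,  False),
]

def turnout_prob(voted_last_time, certain):
    """Crude additive score; real models use more items and calibration."""
    p = 0.3
    if voted_last_time:
        p += 0.4
    if certain:
        p += 0.25
    return min(p, 0.95)

weights = [turnout_prob(v, c) for _, v, c in respondents]
share_all = sum(s for s, _, _ in respondents) / len(respondents)
share_lv = (sum(w * s for w, (s, _, _) in zip(weights, respondents))
            / sum(weights))
print(f"all respondents: {share_all:.2f}, likely voters: {share_lv:.2f}")
# Two houses with identical samples but different screens can report visibly
# different numbers, which is the sense in which likely voter modeling could
# generate house effects.
```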
Andy: as long as you acknowledge that your skepticism is dispositional and has no empirical basis, you’re entitled to it! ;-)
Plus, what Claude said.
Not sure if my skepticism has an empirical basis because the response rate was too low to determine that. It is certainly dispositional, even habitual, as well.
Silver writes so much that I presume he has gone on at length at some point about what accounts for house effects. Obviously, they could reflect at least four things: (1) sampling strategies, (2) differential recruitment [the nonresponse part], (3) instrumentation differences, and (4) reweighting methods [the likely voter model part]. I agree with you that, apart from maybe gross mode differences [like robopolling vs. live interviewers], the biggest source of house effects seems like it would be (4), but I don’t have any evidence for that intuition either.
Heard Nate Silver on Morning Joe today. Sounds like a major source of house effects is indeed sampling (the various auto-polls don’t try for cellphones and thus undersample Democrats) and likely voter models (those that did better this year took the samples as is, without weighting to prior distributions of voters). The latter allowed the ex post most accurate polls to pick up the surge in new Hispanic voters, which Republican-leaning pollsters were downweighting in their pre-election samples. So, this is an interesting case of the obvious (sampling matters) and the not-so-obvious (not having a likely voter model helps).
Re: Andrew —
I refer back to the Putnam 2000 appendix, where his tiny, weird sample data set seems to match up with the GSS on a variety of variables.
Beyond the Putnam example, there’s of course a lot of low-response-rate public opinion poll data on topics similar to what are asked about in the GSS/ANES. Which is not to say that I know what the conclusion of such a comparison would be. Beyond the difference in response rates, there are also differences in mode, question wording, context, etc. So if there weren’t lots of systematic differences, it would be all the more striking.
My understanding is that there is going to be a systematic effort this year to replicate the ANES using KN/GfK, but I don’t know what the timetable for that data would be.
In any case, it’s also worth keeping in mind that there are basically two separate issues: (1) what kind of non-response bias you get as you go from full participation to GSS-level-nonresponse, (2) what kind of bias you get as you go from the GSS-level nonresponse to current poll levels of nonresponse.
My point is certainly not that a low response rate always signals bad data – I think response rate is routinely overused and response bias routinely underassessed. What I was saying was a theoretical point: that concrete behaviors are easier to predict than attitudes and dispositions, which have no “gold standard.”