the averaged american and ontological crisis

I am perpetually behind on reading journals, so I just finished reading Contexts from Winter 2009. It contains, among other things, a thoughtful review by the eminent sociologist Claude Fischer of Sarah Igo's book, The Averaged American. (I wrote a while ago about Andrew Kohut's review of the same book as well.)

Both Kohut and Fischer detect in the book a strain I did not find: what Kohut calls “poll bashing,” and Fischer refers to more modestly as “fault[ing]…scholars” for “falling short in precisely what they boasted of, accurately representing Americans.” But this critique rests on two pillars, neither of which strikes me as legitimate:

– First, it rests on the idea that transparent representation of “Americans” is a possibility, approached, if only asymptotically, by ever-improving technical means. If we read the three cases in Igo’s book as three historically situated, semiperformative moments–a reading for which there is ample ammunition in the book–the compromises involved in each of them are not faults but rather observational techniques.

– Second, and related, it rests on an overly naturalistic ontology (see this recent article for a pretty good discussion of this problem) in which citizens’ beliefs, ideas, and preferences are both (a) presocial; and (b) stable across settings. In other words, these are properties of individuals, not of times, contexts, spaces, and settings. Hence the pull-out quote in Fischer’s review: “Surveys well done allow us to clamber up the walls around us and get a view of the larger terrain, the multifaceted variety of people’s habits and views.” This is an aesthetic claim, a paean to a representational ontology that is (I suspect, knowing Claude, somewhat willfully) naïve.

In the cases of both Kohut and Fischer, the book seems to have prompted something of an ontological crisis: if research methods create, while representing, subjects, what is a serious social researcher to do? It is precisely this crisis that allows Fischer to end his review thus: “For confirmation–and here is the final irony–we may need representative surveys of Americans’ thoughts and feelings.” What would such a survey look like? The statement refers to Igo’s claim that “…modern survey methods…shaped the selves who would inhabit it [a mass public], influencing everything from beliefs about morality and individuality to visions of democracy and the nation.” Honestly, I am not hostile to standard public opinion research–I field a survey twice a year myself–but I just don’t see how a survey could be designed to answer that question.

Author: andrewperrin

University of North Carolina, Chapel Hill

13 thoughts on “the averaged american and ontological crisis”

  1. Call me naive, Andy, (and, no, I don’t think it was willful or faux naivete, but real innocence), but I’ll stick to the simple point I tried to make in that review.
    Igo recounts the history of survey research well, but also makes undocumented assertions about the social psychological consequences of survey research. One can either decide that such assertions are matters of taste and opinion, or one could at least imagine trying to bring empirical evidence to bear. What evidence could that be? Why not survey research — or some historical version of the same?


    1. I think that evidence might be “emerging” responses to surveys. In other words, trying to figure out how people have changed their responses to questions after being questioned about things. This would be really hard to establish. But one might look at “stable” responses to surveys after people have been asked questions many, many times, and then see whether those differed from the responses when the questions were “first” asked, before folks were used to being surveyed. In other words, “The Averaged American” has tended to answer X to question Y (since the 1970s). But back in the 50s the answer was quite different. You’d have to make a leap of faith that it wasn’t a cohort effect or something. So you’d need secondary evidence to support that. But it might be possible.

      I share this concern about the book. But like Andy, I love it.
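
The design sketched above can be illustrated with a toy calculation: compare responses to the same question across survey eras, then hold the birth cohort fixed to see whether any shift survives as a within-cohort (period) change rather than cohort replacement. Everything below, the data, the years, the coding, is invented for illustration, not drawn from any real survey.

```python
# Hypothetical sketch: separating a within-cohort (period) shift in survey
# responses from cohort replacement. All data are invented for illustration.

# Each record: (survey_year, birth_cohort, response to question Y coded 0/1)
records = [
    # 1955 wave: respondents largely unused to being polled
    (1955, 1920, 0), (1955, 1920, 0), (1955, 1930, 0), (1955, 1930, 1),
    # 1975 wave: the same birth cohorts, now long accustomed to surveys
    (1975, 1920, 1), (1975, 1920, 0), (1975, 1930, 1), (1975, 1930, 1),
    # 1975 wave: a new cohort with no pre-survey baseline
    (1975, 1950, 1), (1975, 1950, 1),
]

def share_yes(year, cohort=None):
    """Share answering 1 in a survey year, optionally within one birth cohort."""
    rows = [r for (y, c, r) in records if y == year and (cohort is None or c == cohort)]
    return sum(rows) / len(rows)

# The raw over-time change confounds period and cohort effects...
overall_shift = share_yes(1975) - share_yes(1955)
# ...so hold the birth cohort fixed: what remains is a within-cohort change,
# which is what the "people learned to be surveyed" claim would require.
within_cohort_shift = share_yes(1975, cohort=1930) - share_yes(1955, cohort=1930)

print(f"overall shift: {overall_shift:+.2f}")
print(f"within-cohort (b. 1930) shift: {within_cohort_shift:+.2f}")
```

Even then, as the comment notes, secondary evidence would be needed: a within-cohort shift could still reflect period events other than the spread of surveying itself.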


  2. Note to self: really smart, preeminent sociologists read Scatterplot too….

    Claude, it’s now been a long time since I read the book, and I don’t have my notes in front of me. But my memory is that these aren’t undocumented assertions, but rather assertions supported with documentary and associational evidence–this being, after all, a work of history, not social science, naturalistic or otherwise. In other words, the very fact that letters stopped coming in to Gallup complaining that the surveyors “forgot” to question each individual American, when they had done so in the past, is evidence that sample-based representation has since become a taken-for-granted way for citizens to imagine the public (John Boehner notwithstanding).

    I’m not optimistic about the possibility of evaluating the social psychological claims empirically, because they go to subtle thought processes that were unlikely to be articulated even at the time, and are almost certainly not subject to accurate recall now.

    I suppose if I were to design a study to try to evaluate these, it would have to be archival in nature, since the claim is about a process that is already complete. It is, as I understand it, to evaluate the extent to which Americans “think with” survey research as a cognitive tool for imagining the public. Perhaps a longitudinal examination of letters to public officials that invoke public consent as a rationale for doing, or not doing, something?

    The ontological point is that there is no inherent reason to assume a measurement tool ought to be transparent, or certainly that it can be. That technologies to measure the public — like those to allow individuals within it to communicate with one another, or to move around differently, or to carry out housework differently — change the public itself is certainly an interesting, socially important point. But it strikes me as more surprising that the introduction of such technologies would not change the public than that it would.

    And Claude, I would never call you naive; it’s the particular theory of representation you put forward I would label as such.

    @2.shakha: I don’t think your approach would work. Frankly it’s been done by Page and Shapiro, but analyzed precisely as the stability/cohort effect you suggest would have to be assumed away.
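
The archival study floated above (a longitudinal look at letters to public officials that invoke public consent as a rationale) could, in a toy form, amount to counting poll-style justifications by decade. The letter texts, the keyword list, and the decades below are all invented placeholders, not real archival material.

```python
# Toy version of the proposed archival study: the share of letters to public
# officials, per decade, that invoke poll-style evidence as a rationale.
# Letter texts and keywords are invented placeholders.
import re
from collections import defaultdict

POLL_TALK = re.compile(r"\b(polls?|surveys?|majority|most americans|public opinion)\b", re.I)

letters = [  # (year, text)
    (1938, "I write because my conscience demands it."),
    (1942, "Scripture and decency alike oppose this bill."),
    (1958, "The polls show most Americans want this reform."),
    (1964, "A clear majority of the public backs the measure."),
    (1966, "My own family's hardship compels me to write."),
]

by_decade = defaultdict(lambda: [0, 0])  # decade -> [invoking, total]
for year, text in letters:
    decade = (year // 10) * 10
    by_decade[decade][1] += 1
    if POLL_TALK.search(text):
        by_decade[decade][0] += 1

for decade in sorted(by_decade):
    invoking, total = by_decade[decade]
    print(f"{decade}s: {invoking}/{total} letters invoke poll-style evidence")
```

A rising share over decades would be one observable trace of citizens coming to "think with" survey research, though keyword matching is obviously a crude stand-in for real historical coding.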


    1. I’m curious, then. Wouldn’t this be less of an ontological problem and more of an empirical one? That is, if Page and Shapiro demonstrate that public opinion is stable (both before and after surveys become pervasive) and that differences are largely cohort effects, then doesn’t this stand as an empirical challenge to the claim that subjects have begun to reconceptualize themselves post-pervasive surveying?


  3. Andy: Three quick points (and then I will have used up my fair share of air time):
    (1) This discussion by Shakha and you is exactly the beginning of thinking about what observables there may be that would shed some light on the question. It might come to naught, but we may get some clues…. which is more than we have now.
    (2) If you go back to the Igo book, you will see that the headline messages — say, that social scientists’ reports about “average Americans” led Americans to ape that average — are exactly the ones lacking documentation, usually even footnotes. (P.S., I don’t recall your point that letters about representation stopped.)
    (3) As to ontological issues: Too many syllables in there for me to wrap my mind around. I’ll stick with naive — maybe neo-naive? — empiricism.


  4. I haven’t read the book or the reviews, but it seems like my dissertation deals with exactly this issue, applied to racial categorization in Brazil. I think that qualitative research is really important in this process of evaluating the relationship between survey and non-survey contexts in people’s lives.


  5. lschwart,
    qualitative research is vulnerable to some of the same problems as close-ended (i.e. quant) interviewing though fortunately usually only in attenuated form. the most obvious problem is salience. imagine doing an interview where you ask people some more subtle variant on “so what do you think about the use of opinion polls to socially construct public opinion.” unless you happen to be interviewing a politician, journalist, or sociologist, the only truthful answer is almost certainly “i had never thought about that until you mentioned it.” however most people will feel that this is unhelpful and obligingly make something up on the spot to tell you.

    participant observation is much better than in-depth interviewing in this respect, but it faces the problem of salience in another way in that this issue is unlikely to come up, so now instead of having biased data you have no data. (that is, unless the participant observation involves some kind of polling work, but then you’re back to square one).


  6. There is research doing exactly what Shakha describes. I saw a presentation at PAA on this, by John Warren and Andrew Halpern-Manners. Among other things, they show respondents learn how to answer surveys more efficiently over time, doing things like reporting fewer household members (to cut down on multiple and time-consuming questions, presumably).
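
The pattern described in that presentation suggests a simple panel-conditioning check: if each reported household member triggers extra follow-up questions, mean reported household size should drift downward across waves. The panel data below are invented to illustrate the check, not taken from the Warren and Halpern-Manners study.

```python
# Invented panel data illustrating the conditioning pattern described above:
# respondents "learn" that each household member reported costs extra
# questions, so reported household size drifts down across waves.
from collections import defaultdict

# (respondent_id, wave, reported_household_size) -- hypothetical
reports = [
    (1, 1, 4), (1, 2, 4), (1, 3, 3),
    (2, 1, 3), (2, 2, 2), (2, 3, 2),
    (3, 1, 5), (3, 2, 5), (3, 3, 4),
]

by_wave = defaultdict(list)
for _, wave, size in reports:
    by_wave[wave].append(size)

for wave in sorted(by_wave):
    avg = sum(by_wave[wave]) / len(by_wave[wave])
    print(f"wave {wave}: mean reported household size = {avg:.2f}")
# A real analysis would compare each wave against a fresh cross-section,
# since households can genuinely shrink; here the within-panel drift is
# the flag for conditioning.
```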


  7. lschwart/gabrielrossman: I agree that the issue of salience is very important and interesting, but I’m not sure that’s the central issue here. I don’t see how you could do participant observation on a historical point — you can’t directly observe how people thought or behaved at a prior time!

    jess27: sounds like a very interesting presentation – I’d be interested in seeing it.


  8. I have not read the Igo book (which sounds like interesting history), so I cannot comment on whether she does or does not “poll bash” and, consequently, cannot comment on whether Fischer’s criticisms are apt or excessively defensive and insufficiently appreciative of the main point. I will say that Andrew’s original post and some of the comments in this thread seem to imply that survey researchers have never thought about the problem of reactivity, or about the social communication issues involved in trying to plug the complexities of people’s ideas or experiences into short questions that can be asked of a lot of people, or about the way the implications of a question (and thus its answer) vary a ton by both interpersonal and historical context. You might not want to assume or imply that because people do research with short fixed-choice questions, they themselves are unable to think in more complex ways than a multiple-choice question about what they are doing.

    As regards what I THINK the main point of Igo’s book is (from looking at its summary), namely that surveys have shaped how people think about themselves (a thesis that sounds interesting and important), Fischer’s concern appears to be that there is no actual evidence for impacts on people’s self-perceptions. That seems likely, as Igo is a historian and probably did not conduct social psychological research but, I would imagine, based her study (as most historians do) on texts: most likely mass media texts and perhaps archives from polling organizations, not personal diaries or letters, much less interviews (qualitative or otherwise). If this is true, is it wrong to point out that she lacks data on what people think about themselves, and is instead studying how the media talk about what people think about themselves, perhaps making unsupported inferences from public documents to private thoughts?

    That surveys have had tremendous social and political impact seems like a really important point to make. They have become political objects. They have helped to create the collective identity of the American people. And that is what I suspect Igo’s data show.

    What I don’t follow is why you think the fact (if true) that media discussions of survey research have changed how people envision the society and themselves in it would invalidate survey research. So surveys cannot perfectly represent “American opinion” because there is no such thing. But it does not follow that surveys give us no information. Imperfect information judiciously analyzed and reflected on is better than no information. Nor does it follow that we’d know more if we dumped all the sample surveys and put all of the resources now spent on surveys into interviews and participant observation.


  9. Hmmm. OW, I most certainly don’t think that surveys are invalid or that they give us no information. And I would certainly not advocate putting all the resources into interviews and participant observation — in fact, part of my argument is that these, too, can’t approach the empirical question Claude calls Igo out on. If I have implied these positions, my apologies — they’re most definitely not my views. Not only are some of my best friends survey researchers, I myself am one :)

    If I may digress a bit, I think the issue is that Claude’s review assumes that reactivity and partiality, if true, would indict survey research in general. My view is that Igo (and others) make a plausible argument that survey research, like other attempts to represent The Public, evokes a particular kind of subject in the process of measuring it. This finding is congruent with other recent theory, e.g., performativity, Latour, Foucault, and so on, but this is an affinity, not a strong connection. I do not see this as an indictment of survey research, but rather an imperative to think about the ontological character of that which is measured.

    I am well aware that lots of survey researchers worry about these matters; in fact I’m working on an Annual Review piece about the theoretical and methodological approaches used to handle the problem. My claim of ontological crisis concerns one of these approaches, which I’ll call a transparent representational ontology: the view that as survey techniques are perfected, their results will asymptotically approach transparency. Transparency in this case means representing fully and faithfully without changing the object being represented. IMHO this is the representational ontology that motivates most of Public Opinion Quarterly, and I personally believe it is theoretically naive.


  10. I have no idea why lschwart’s May 5 9:48pm comment is posted out of date order in this comment thread. Checking the comments list from the WordPress back end, the comment order is correct. Harumph.

