the public-hype/professional-caution duality

I’ve been reading and thinking a lot about the “replication crisis” in experimental social psychology. One complaint that a psychologist made about her field really struck me, but not for the intended reason:

Findings in papers are often hyped in a way that is more appropriate in a press release than in a scientific paper.

Of course, her complaint is that authors are overselling the findings in their papers. The “crisis” in psychology is that parts of the discipline are replete with practices that are wonderful for generating a published literature full of interesting findings but willfully weak in its filtration of the interesting-and-true from the would-be-interesting-except-it’s-wrong.

But you can turn the sentence on its head. How should we think about the idea that it’s more acceptable to hype findings in a press release than when communicating with other experts?

Presumably, given the same overhyped description of a study and its findings, experts are the ones better able to recognize the overhype for what it is, and to put it in a context that leads them to a more accurate assessment. We’d expect laypeople to be less able to recognize what’s hype and what’s not. So the scientifically interested citizen either ends up being overly credulous toward everything they read, harboring a bunch of misunderstandings that way, or, if their bullcrap detectors go off often enough, develops a generalized distrust of science.

Either way, if we prefer a world in which the things people believe are true, the epistemic harm of hype is greater when perpetrated upon the public than upon peers. And yet the thinking is the opposite: that we owe our colleagues cautions and qualifications, and just owe the public a good story.

Author: jeremy

I am the Ethel and John Lindgren Professor of Sociology and a Faculty Fellow in the Institute for Policy Research at Northwestern University.

6 thoughts on “the public-hype/professional-caution duality”

  1. So this comment is doubtless self-serving, but I feel like I’m sitting on a project that went into the weeds because I was trying to make sure I had results I could believe in instead of just publishing and interpreting the numbers however they came out.

    I’ve talked to the press A LOT in the past ten years. The one time I issued a press release was early on, and it was about an indisputable descriptive fact that the news media refused to pick up. It eventually got printed and drew a lot of press, but only through a circuitous political process that turned me into a local expert on the issue.

    I’ve spent a lot of time educating reporters on the issue. There are roughly two classes of reporters with respect to social science. Some are taking the long view and developing a story; these people are usually feature writers and often go to the trouble to develop expertise in an area over time. They know what is going on and respect sources who take the trouble to explain complexities and get facts right. Other reporters, most often writing to a deadline, are new to an issue and don’t want to be bothered with understanding anything; they sometimes just want you to agree to support what they’ve already written, or are just fishing for some quick slapdash comments.

    But all of this press conversation I’ve been mentioning is about some very simple descriptive facts. What I’ve been sitting on is the comparative question of which explanations matter most for why that varies across places and times. That’s a lot harder, and the data are more equivocal.

    Um, sorry for the essay. This is a sore point.

  2. Other experts can better see through hype, but it’s plausible that in some fields a small amount of misinformation passed between experts (and the associated misallocation of credit) is more damaging than the greater amount of misinformation passed to casual readers who are consuming the “research” as a form of entertainment.

    Where hype hurts the most is an interesting side-conversation, but I always come back to the question: How can we build social and technical systems that reduce the temptation to hype? This should be a top priority in academia and journalism.

    1. I’ve lost all temptation to hype my next hurricane study. More seriously, do we expect information in the popular media to build on prior work (and serve as a stepping-stone for future work) the same way it does in academic journals?

      1. The hurricane name study was sort of a perfect stor–oh, I can’t bear the pun. Before it hit I’d already been reading obsessively about the crisis in social psychology, partly for this broader interest I have in coming to terms with all the false/misleading stuff that behavioral science and epidemiology produce.

        I agree that one can argue the details in published articles need to be there to allow people to see the path from prior work and to figure out how to build on it going forward. This is perhaps especially so in an experimental science, where people want to use the basic idea of an experiment for something new.

  3. “Either way, if we prefer a world in which the things people believe are true, the epistemic harm of hype is greater when perpetrated upon the public than upon peers. And yet the thinking is the opposite: that we owe our colleagues cautions and qualifications, and just owe the public a good story.”

    I completely agree. I followed the Reinhart-Rogoff scandal pretty closely, and it amazed me how much of a pass they got once the whole affair ended. The technical paper (not peer-reviewed, of course) was appropriately hedged. But in their meetings with policymakers, and in their published op-eds, they jumped from a questionable correlation to the assertion of proven causality: that national debt above some threshold around 90% of GDP reduces economic growth. Even without the spreadsheet errors, this claim was baseless, as a lot of reanalysis has demonstrated. And yet they seem to have washed their hands of the affair and somehow maintained their credibility. I’m flummoxed, but I think part of it has to do with this dynamic: the academic work said all the things it was supposed to, and if it ended up being flawed, that’s understandable. The policy hackery, no matter how harmful, just doesn’t count the same way.

  4. On the main point, I agree with others: the harm one does when hyping a misleading result to the general public is much greater than the harm one does in publishing a misleading result in a journal.
