why so much psychology?

(Zeroth in a series) I’ve been interested in the sociology of psychology ever since my dissertation, but the recent dramas in social psychology have made this interest, like Tinder at the Olympic Village, “next level.” (Also, I’ve a genuinely remarkable advisee, David Peterson*, whose dissertation involves a multisite lab ethnography of psychology, and even though we’ve got nine thousand miles between us we’ve been corresponding on these issues quite a bit.)

I’m just explaining here what’s going on if you ever wonder, “Why does Jeremy talk so much about psychology?” Also, I worry that a lot of my concern about psychology appears to be strictly methodological, when in fact much of the methodological critique adds up to a dire substantive point that I think sociologists should be extremely concerned about. But that’s a teaser for another post.

For now, let me link to one of the latest turns in the drama: a post by a Harvard psychologist arguing strongly against the value of replication at all, following–as far as I can tell, unwittingly–Harry Collins’s experimenter’s regress all the way to a sort of anti-replicationist fundamentalism.

I might have more to say about this person’s arguments specifically–although it could also be that I wouldn’t know where to begin–but for the tl;dr crowd, let me at least start by pointing out the very end:

As a rule, studies that produce null results—including preregistered studies—should not be published. As argued throughout this piece, null findings cannot distinguish between whether an effect does not exist or an experiment was poorly executed, and therefore have no meaningful evidentiary value even when specified in advance.

* On the market in 2015! Remember the name.

Author: jeremy

I am the Ethel and John Lindgren Professor of Sociology and a Faculty Fellow in the Institute for Policy Research at Northwestern University.

5 thoughts on “why so much psychology?”

  1. How absurd a defense! It’s not quite as bad as seeing STS-style soundbites coming from climate-change deniers, but it’s in the same direction.

    I was talking with a couple psychologists at stats camp today about just this problem. One issue I thought of in that conversation and haven’t seen addressed – though I may just have missed it! – is citations. Suppose we do start publishing null results in some more systematic fashion. Will they ever get cited, even if they are definitive? Suppose a good null result kills a line of inquiry. That’s an incredibly influential paper, but one that might not “count” according to the various metrics we’re all increasingly subservient to. Perhaps this is just more evidence for why citations are a poor measure of influence, impact, or quality, but it’s one more thing I’m now worried about in the push to value replications and refutations.


  2. I tried a little google-fu to come up with more information on how replication works in bioscience and did not succeed. It is my impression that there is a culture of replication in biological science and perhaps physics, in which other labs attempt to replicate any startling new results. But my quickie search on the subject didn’t turn up any information one way or the other. Does anybody else have any sources on this?


  3. Mitchell appears to be under the impression that researchers can bias effects only toward the null hypothesis: “When an experiment succeeds, we can celebrate that the phenomenon survived these all-too-frequent shortcomings. But when an experiment fails, we can only wallow in uncertainty about whether a phenomenon simply does not exist or, rather, whether we were just a bit too human that time around.”

    But, actually, when an experiment succeeds, we can only wallow in uncertainty about whether a phenomenon exists, or whether a phenomenon appears to exist only because a researcher invented the data, because the research report revealed a non-representative selection of results, because the research design biased results away from the null, or because the researcher performed the experiment in a context in which the effect size for some reason appeared much larger than the true effect size. (A toy simulation of the selection mechanism appears at the end of this comment.)

    OW, for what it’s worth, new elements are not added to the periodic table unless there is independent verification: http://www.newyorker.com/online/blogs/elements/2013/08/unumpentium-the-new-artificial-element.html.

    Moreover, here are the titles of four Nature articles reporting on replications performed in response to a startling new result:

    Sept 2011: “Particles break light-speed limit”
    Nov 2011: “Neutrino experiment replicates faster-than-light finding”
    Mar 2012: “Neutrinos not faster than light”
    Apr 2012: “Embattled neutrino project leaders step down”

    So this sequence appeared to reflect an example of experimental error causing “success.”
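
    For concreteness, here is a minimal, hypothetical plain-Python sketch (mine, not anything from Mitchell or the studies above) of one of those mechanisms, non-representative selection of results: even when the true effect is exactly zero, reporting only the largest of several small studies yields a reliably positive “finding.”

        import random
        import statistics

        random.seed(1)

        def run_study(n=20):
            # Each observation is pure noise: the true effect is exactly zero.
            return statistics.mean(random.gauss(0.0, 1.0) for _ in range(n))

        # A hypothetical researcher runs five small studies per claim and
        # reports only the largest observed effect.
        reported = [max(run_study() for _ in range(5)) for _ in range(10_000)]

        print("true effect: 0.00")
        print(f"mean reported effect after selection: {statistics.mean(reported):.2f}")

    The reported mean lands well above zero even though nothing is there; the “success” is manufactured entirely by selection, which is precisely the direction of bias Mitchell’s framing rules out.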

