Fellow sociologists: have you ever been teaching a class and felt the need to explain to students that while scientific research is generally a reliable way to gather knowledge, we have to be very careful not to trust our results too much? Have you ever wanted a great example to show why alpha (Type I) error is a problem, and to explain why findings sometimes have to be considered provisional? Sure, we all have! Only now, I have a way to help. And do you want to know what the best part is?
It involves the brain of a dead fish.
A recent piece in Wired reports on an excellent study that used fMRI to examine how social imagery triggers brain activation. The catch? The subject was a dead Atlantic salmon. The results will shock you:
Neuroscientist Craig Bennett purchased a whole Atlantic salmon, took it to a lab at Dartmouth, and put it into an fMRI machine used to study the brain. The beautiful fish was to be the lab’s test object as they worked out some new methods.
So, as the fish sat in the scanner, they showed it “a series of photographs depicting human individuals in social situations.” To maintain the rigor of the protocol (and perhaps because it was hilarious), the salmon, just like a human test subject, “was asked to determine what emotion the individual in the photo must have been experiencing.”
…
If that were all that had occurred, the salmon scanning would simply live on in Dartmouth lore as a “crowning achievement in terms of ridiculous objects to scan.” But the fish had a surprise in store. When they got around to analyzing the voxel (think: 3-D or “volumetric” pixel) data, the voxels representing the area where the salmon’s tiny brain sat showed evidence of activity. In the fMRI scan, it looked like the dead salmon was actually thinking about the pictures it had been shown.
“By complete, random chance, we found some voxels that were significant that just happened to be in the fish’s brain,” Bennett said. “And if I were a ridiculous researcher, I’d say, ‘A dead salmon perceiving humans can tell their emotional state.’”
The result is completely nuts — but that’s actually exactly the point. Bennett, who is now a post-doc at the University of California, Santa Barbara, and his adviser, George Wolford, wrote up the work as a warning about the dangers of false positives in fMRI data. They wanted to call attention to ways the field could improve its statistical methods.
The moral of the story? Quantitative methods rock, but just make sure you understand the theory before you pull that shiny lever in Stata and get a bunch of results.
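To make the alpha-error point concrete, here is a minimal sketch (in Python rather than Stata, and not Bennett and Wolford’s actual analysis; the voxel and scan counts are invented for illustration). It runs an uncorrected significance test on thousands of pure-noise “voxels” and counts how many come out “significant” at alpha = .05 by chance alone:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_voxels = 8000      # hypothetical number of voxels in the scan volume
n_scans = 100        # hypothetical number of readings per voxel
alpha = 0.05

# Pure noise -- no real signal anywhere, just like a dead fish's brain.
noise = rng.normal(loc=0.0, scale=1.0, size=(n_voxels, n_scans))

# Test each voxel's mean against zero, with no multiple-comparisons correction.
t_stats, p_values = stats.ttest_1samp(noise, popmean=0.0, axis=1)
false_positives = int(np.sum(p_values < alpha))

print(f"{false_positives} of {n_voxels} noise voxels look 'active' "
      f"({false_positives / n_voxels:.1%}) at alpha = {alpha}")
# Expect roughly 5% -- a few hundred voxels -- to be 'significant' by chance.

Apply a standard multiple-comparisons correction (Bonferroni, say, by comparing each p-value to alpha / n_voxels, or a false-discovery-rate procedure) and the phantom activations vanish, which is exactly the kind of fix the salmon paper argues for.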
This is the best thing I’ve heard all day.
There was a fairly heated debate about the statistical methods in some fMRI studies back in the May 2009 issue of Perspectives on Psychological Science. One interesting post on it is here:
http://prefrontal.org/blog/2009/01/voodoo-correlations-in-social-neuroscience/
Seems odd to draw conclusions about the limitations of quantitative data from a bogus study with an N of 1. Sounds like a good argument for quantitative methods to me.
Depends on how you define n, doesn’t it? From the perspective of the fMRI machine, n is more like the number of readings it takes from the subject, out of which the image is constructed. From that perspective, the n is substantially larger than 1. As with most illustrations, however, this one is imperfect.
But, leaving aside the specifics of this case, I am very much a fan of quantitative methods and do pretty much all of my work that way. I just hate it when people assume that because we can assign a number to something, the measurement is necessarily more precise. Complex methods can easily become a very obtuse way of hiding lousy theory or data, and it’s important that we remind ourselves of that from time to time.
I agree. Nothing like results to six decimal places in a regression table with an R-square of .02… Seems like what the case is good for debunking is scientism more than quantism per se. It’s not the number attached, but the high-tech certainty (which can be achieved by numbers or beams) that needs to be scrutinized.
Hmm. In my experience, it is the people who have no quantitative training who report six decimal places. People who actually have scientific training get taught about the correct level of precision of estimates in relation to the precision of measurement pretty much on day one. It’s the people without scientific training who think the answer is somehow more accurate with more decimal places. Or that they have to copy the computer printout exactly.
I believe yyikes nailed it: “Seems like what the case is good for debunking is scientism more than quantism per se.”
It’s aimed at people who make systematic errors by trusting pre-established proxies and causal inferences too much, and who forget to test the logic and background assumptions of their causal statements. But that’s virtually all of us, not just quant jocks, IMHO. (I’m neither a quant nor a med researcher, and I can see exactly what the authors are pointing at.)
The above are good points, but I think the dead fish article is making a more specific (and, if you are interested in this kind of research, really important) critique. To the extent that I understand it, the debate is more about whether voxels are selected for analysis in a way that generates artifactual findings; a small simulation of that selection effect is sketched below the link. Another recent explanation is here:
http://www.nature.com/news/2009/090429/full/4581087a.html
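To illustrate the selection problem that comment is pointing at, here is a minimal sketch (Python, purely illustrative; the subject and voxel counts are made up, and this is nobody’s actual pipeline). With pure noise, picking the voxels that happen to correlate most strongly with a behavioral score and then reporting the correlation computed on those same voxels yields impressively large numbers; recomputing on independent data sends them back toward zero:

import numpy as np

rng = np.random.default_rng(0)

n_subjects = 20
n_voxels = 5000

# Pure noise: voxel activations and a behavioral score with no true relationship.
activations = rng.normal(size=(n_voxels, n_subjects))
behavior = rng.normal(size=n_subjects)

def corr_with_behavior(data, scores):
    # Pearson correlation of each voxel's activation with the behavioral score.
    data_z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    scores_z = (scores - scores.mean()) / scores.std()
    return data_z @ scores_z / len(scores)

r = corr_with_behavior(activations, behavior)

# Circular step: pick voxels *because* they correlate strongly, then report
# the correlation computed on those same voxels.
selected = np.argsort(np.abs(r))[-20:]
print("Mean |r| in selected voxels (same data): ", np.abs(r[selected]).mean())

# Honest check: fresh, independent data for the same 'selected' voxels.
new_activations = rng.normal(size=(n_voxels, n_subjects))
new_behavior = rng.normal(size=n_subjects)
r_new = corr_with_behavior(new_activations, new_behavior)
print("Mean |r| in selected voxels (fresh data):", np.abs(r_new[selected]).mean())

That inflation from non-independent selection, rather than anything about quantification as such, is what the “voodoo correlations” critique is about.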