Fellow sociologists: have you ever been teaching a class and felt the need to explain to students that while scientific research is generally a reliable way to gather knowledge, we have to be careful not to trust our results too much? Have you ever wanted a great example to show why Type I (alpha) error is a problem, and to explain why findings sometimes have to be considered provisional? Sure, we all have! Only now, I have a way to help. And do you want to know what the best part is?
It involves the brain of a dead fish.
A recent edition of Wired reports on an excellent study that used fMRI to examine how social imagery triggered brain activation. The catch? The subject was a dead Atlantic salmon. The results will shock you:
Neuroscientist Craig Bennett purchased a whole Atlantic salmon, took it to a lab at Dartmouth, and put it into an fMRI machine used to study the brain. The beautiful fish was to be the lab’s test object as they worked out some new methods.
So, as the fish sat in the scanner, they showed it “a series of photographs depicting human individuals in social situations.” To maintain the rigor of the protocol (and perhaps because it was hilarious), the salmon, just like a human test subject, “was asked to determine what emotion the individual in the photo must have been experiencing.”
If that were all that had occurred, the salmon scanning would simply live on in Dartmouth lore as a “crowning achievement in terms of ridiculous objects to scan.” But the fish had a surprise in store. When they got around to analyzing the voxel (think: 3-D or “volumetric” pixel) data, the voxels representing the area where the salmon’s tiny brain sat showed evidence of activity. In the fMRI scan, it looked like the dead salmon was actually thinking about the pictures it had been shown.
“By complete, random chance, we found some voxels that were significant that just happened to be in the fish’s brain,” Bennett said. “And if I were a ridiculous researcher, I’d say, ‘A dead salmon perceiving humans can tell their emotional state.’”
The result is completely nuts — but that’s actually exactly the point. Bennett, who is now a post-doc at the University of California, Santa Barbara, and his adviser, George Wolford, wrote up the work as a warning about the dangers of false positives in fMRI data. They wanted to call attention to ways the field could improve its statistical methods.
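If the mechanics of that "complete, random chance" seem mysterious, a toy simulation makes it concrete. The sketch below is not Bennett and Wolford's actual analysis; it just mimics the setup with made-up numbers (10,000 voxels of pure noise, 20 measurements each, both assumptions mine) and runs an uncorrected significance test at each voxel, then a Bonferroni-corrected one:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_voxels = 10_000  # hypothetical voxel count for one scan volume
n_scans = 20       # hypothetical number of measurements per voxel

# A "dead salmon": pure noise, no real signal anywhere.
noise = rng.normal(size=(n_voxels, n_scans))

# Test every voxel for "activation" (mean different from zero).
t, p = stats.ttest_1samp(noise, popmean=0.0, axis=1)

# With alpha = 0.05 and 10,000 tests, we expect roughly 500
# voxels to look "active" by chance alone.
uncorrected = int(np.sum(p < 0.05))

# Bonferroni correction divides alpha by the number of tests,
# which all but eliminates the false positives.
bonferroni = int(np.sum(p < 0.05 / n_voxels))

print(f"uncorrected 'active' voxels: {uncorrected}")
print(f"Bonferroni-corrected:        {bonferroni}")
```

Run it and the uncorrected count comes out near 5% of the voxels, which is exactly the salmon's "brain activity": do enough tests, and some will cross the threshold by luck. (Bonferroni is just the bluntest fix; fMRI researchers typically use less conservative corrections such as false discovery rate.)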
The moral of the story? Quantitative methods rock, but make sure you understand the theory behind your tests before you pull that shiny lever in Stata and start interpreting a pile of results.