I recently read a paper in which the author hadn’t articulated directional hypotheses. They were just of the form that X was expected to be associated with Y. My reaction was just that a non-directional hypothesis is not much of a hypothesis, and I made a comment along the lines of, “you should probably develop your ideas until you have more of a sense of how X and Y might be associated before you try to test the hypothesis with data.”
This led me to wonder whether I have a more general position about the specificity required for something to be a substantively meaningful social science hypothesis. Does anyone have an example of something in social science where the hypothesis is non-directional (or is just a hypothesis that something “matters”), and yet the hypothesis is not trivial? If so, please let me know.
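For concreteness, the statistical counterpart of this distinction is a two-sided versus one-sided test. Here is a minimal sketch with simulated data (the numbers are hypothetical, purely for illustration):

```python
# Simulated example: the same association tested non-directionally
# ("X is associated with Y") vs. directionally ("X is positively
# associated with Y").
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 0.7 * x + rng.normal(size=50)  # X positively related to Y by construction

r, p_two_sided = stats.pearsonr(x, y)                    # H1: r != 0 (non-directional)
one_sided = stats.pearsonr(x, y, alternative="greater")  # H1: r > 0 (directional)
print(f"r = {r:.2f}, two-sided p = {p_two_sided:.4g}, one-sided p = {one_sided.pvalue:.4g}")
```

When the observed association is in the predicted direction, the one-sided p-value is half the two-sided one; the price of that extra power is committing to a direction in advance, which is exactly the specificity a directional hypothesis supplies.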
Kelly pointed to this in the comments on my last post: a choose-your-own-adventure (CYOA) game in which you get to take on different roles and try to prevent a science fraud scandal from happening.
As a connoisseur of CYOA, I don’t think it works that well as a game — too much stuff before and between choices — but the videos have surprisingly high production values and script quality for something like this.
I’m not sure whether the author of this post is a graduate student or an undergraduate, but I found it an intriguing statement about the bind that younger people interested in methodology can find themselves in while working with established people who are deeply steeped in conventional practices and productivity. Quote:
One thing that never really comes up when people talk about “Questionable Research Practices,” is what to do when you’re a junior in the field and someone your senior suggests that you partake. […] I don’t want to engage in what I know is a bad research practice, but I don’t have a choice. I can’t afford to burn bridges when those same bridges are the only things that get me over the water and into a job. (HT: @lakens)
Mostly this is just a statement about power.* But it’s also maybe a statement about what can happen when developments allow the possibility of radical doubt to settle upon a field. Normally a junior person can have methodological doubts, but still think, “Well, these people must know what they are doing, because it’s been successful for them and so ultimately in practice it works, right?” But what happens when you have developments that lead to a lot of people starting to whisper and murmur and talk about how maybe it doesn’t work?
* I mean power in the ordinary sociological sense, not my ongoing obsession with statistical power.
This is intended as a friendly didactic post, not an addition to my various criticisms of the hurricane name study. But I do use that data and model. Frankly, I suspect I’ll be thinking about the lessons from that study for a while and using it as a teaching example for years.
I’ve said that substantively it makes more sense to log the measure of hurricane damage, and that the model fits better when you do, even though the key result of their paper is no longer statistically significant. I worry the point may seem arcane or persnickety. So below the jump are a couple of graphs that show the substantive difference this actually makes over the range of damage observed in their data. (Note the scales of the y-axes.)
For those who haven’t yet seen it, there’s a very interesting article by Colin Jerolmack and our own Shamus Khan, along with critiques and rejoinder. The article, “Talk is Cheap,” examines the fact that what people say is not the same as what they do (the problem of “Attitude-Behavior Consistency,” or ABC). They argue that ethnography is therefore the better way to ascertain behavior because ethnographers actually observe behavior itself instead of actors’ often-inaccurate accounts of behavior. And since sociologists are held to be concerned primarily with social action — an assumption I’ll address below — ethnography (along with, by the way, audit studies such as Quillian and Pager’s) is the better approach.
Laura Nelson has an excellent discussion of topic modeling on badhessian, which in part takes me to task for my comments on the Poetics issue on topic modeling. Unfortunately the Disqus system that handles comments there doesn’t like me, and has eaten my comments twice. So I’m posting them here, and perhaps someone smarter than I am can make them into a bona fide comment on the site.
There is, quite appropriately, a lot of buzz about the potential of “big data” and quantitative analysis of text, in particular for cultural analysis, since so much of culture seems to make its way into text in one form or another. The articles in the special issue combine into a grand showcase of the possibilities of quantitative analysis of text. I’ll comment on most of them below. But I think most of them, like much quantitative analysis of text in general, suffer from some theoretical shortcomings. Specifically:
- With the partial exception of the Mohr, Wagner-Pacifici, Breiger, and Bogdanov article, the studies lack a well-conceptualized theory of language, which leads to some conceptual slippage.
- There is little attention to the conditions of production of text: whose words, and which words, are written down, archived, and digitized.
I got a call this morning from the Daily Tar Heel because, while UNC was dead last among the 94 universities covered in the study Kieran has been mocking for its invention of an MIT sociology department, I am apparently the third-most-impactful faculty member on that dubious list. Talk about damning with faint praise.
In response to Fabio’s defense of nonrepresentative sampling, Sam Lucas sent his paper, “Beyond the Existence Proof,” published last year. Fabio mentions Lucas’s article in his follow-up, but doesn’t really address the claims in the paper. I hadn’t seen it before Sam sent it, but after reading it I think it’s really smart and deserves attention in methods classes and elsewhere.
One of my favorite articles to teach in graduate theory is Richard Ned Lebow’s “If Mozart Had Died at Your Age” (paywall, sorry), which very cleverly lays out a counterfactual theory in which Mozart not dying at 36 changes the aesthetic, thereby the philosophical, thereby the political, history of Germany and therefore the world.
Now we have another example, somewhat (though not a lot!) more pedestrian, in the question of what the world might have been like had the Supreme Court not taken Bush v. Gore. Sandra Day O’Connor has commented that perhaps the court shouldn’t have taken the case, and Mediaite dares to ask: how might history have differed? Check it out – parsimony or contingency? You decide.
Biernacki attempts a wholesale indictment of the practice of “coding” texts as a social scientific technique. Through careful attempts to replicate three studies, Biernacki seeks to show that the attempt to bridge interpretive and analytical sociology by sampling and categorizing bits of text is “unfeasible.” Essentially, I believe he hopes to demonstrate a kind of methodological “non-overlapping magisteria” claim: that interpretive approaches are sui generis and uniquely capable of successfully comprehending textual and cultural evidence, and analytical techniques are epistemologically bankrupt. He does so by a cherished if underused scientific technique: replication, in this case of three important works in cultural sociology. The works are Bearman and Stovel’s “Becoming a Nazi: A Model for Narrative Networks” (Poetics, March 2000); John Evans’s 2001 book, Playing God?, on which Biernacki has already commented extensively and very similarly; and Wendy Griswold’s 1987 “The Fabrication of Meaning”.
I say “I believe” that is the point of the book, because unlike his prior book (The Fabrication of Labor, a magnificent historical study demonstrating the independent effect of national culture on early modern economic organization in England and Germany) the argument in Reinventing is hidden behind a smokescreen of arrogant posturing, making it difficult to evaluate the underlying idea and its defense.
In short, while there are some apt points in the book, in general it is pompous in style, muddled in evidence, vastly overstated in scope, mean-spirited in approach, and epistemologically indefensible.
This morning, US News and World Report published their graduate school rankings. However, rather than report rankings based on the data they collected last fall, they decided (for the first time in history) to average data collected in 2008 and 2012 to generate many of the lists, including sociology.