bad science not about same-sex parenting

There’s lots to say about the recent article by Mark Regnerus on outcomes of adults who remember a parent having had a same-sex relationship and the other articles and commentaries surrounding it in the journal, and much has already been said. The bottom line is that this is bad science, it is not about same-sex or gay parenting, and strong but circumstantial evidence suggests its main reason for being is to provide ammunition to right-wing activists against LGBT rights. In this (long!) post I offer my evaluation of the scientific merit of the paper as well as the politics surrounding the paper’s funding, publication, spin, and evaluation.

Background: The so-called “no-differences” claim

The logic of Regnerus’s study is motivated by what he calls the “no-differences” position, the claim that there are no differences between children raised by same-sex parents and those raised by opposite-sex parents. Stacey and Biblarz offered a good criticism of this research program back in 2001. The prior article in this issue of SSR, by Loren Marks, provides an extended criticism of the scientific process that led the American Psychological Association to adopt its 2005 brief on same-sex parenting. (To my non-specialist eye, that statement is similar to the one from the American Academy of Pediatrics some years ago, in which [full disclosure] my mother, Ellen Perrin, was a key player.) The APA statement holds that:

Not a single study has found children of lesbian or gay parents to be disadvantaged in any significant respect relative to children of heterosexual parents.

Marks takes this statement to task by evaluating the various studies examined for the brief and arguing that none of them is sufficiently scientifically rigorous to justify that summary claim. Regnerus does similarly at the outset of his article, though more sloppily:

Suffice it to say that versions of the phrase “no differences” have been employed in a wide variety of studies, reports, depositions, books, and articles since 2000.

Both Marks and Regnerus make a key epistemological error, whether intended or not. The APA and AAP reports conclude essentially that there is no evidence of systematic difference, and Marks and Regnerus treat this as if the reports conclude that there is conclusive evidence of lack of systematic difference. But they do not, and this is the first dishonesty of the articles. Failing to reject a null hypothesis is not the same thing as proving it true.
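A quick simulation makes the distinction concrete. This is my own illustration, not anything from the article or the briefs, and the effect size and sample sizes are arbitrary: with small samples, a test can easily fail to detect a difference that really exists, so “no difference found” is a statement about statistical power, not about the world.

```python
import random
import statistics

random.seed(1)

def rejects_null(n, true_diff):
    """Crude two-sample test: call it 'significant' if |t| > 2 (roughly p < .05)."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(true_diff, 1.0) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return abs(statistics.mean(b) - statistics.mean(a)) / se > 2.0

trials = 2000
# The same real difference (half a standard deviation) in both cases:
power_small = sum(rejects_null(10, 0.5) for _ in range(trials)) / trials
power_large = sum(rejects_null(200, 0.5) for _ in range(trials)) / trials
print(f"n=10 per group:  difference detected in {power_small:.0%} of trials")
print(f"n=200 per group: difference detected in {power_large:.0%} of trials")
```

With ten respondents per group, the real difference goes undetected most of the time; concluding “there is no difference” from a pile of such studies would be exactly the error described above.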

The Study

Regnerus’s article is framed as an early report on a new dataset, the New Family Structures Survey, which is also shown in postliterate style here. The study uses a Knowledge Networks sample of about 3,000 respondents born between about 1981 and 1994. It asks about a variety of characteristics of interest as well as for information about the respondents’ families of origin, including a diary, in four-month blocks, of whom the respondent lived with between birth and age 18. It also asks:

From when you were born until age 18 (or until you left home to be on your own), did either of your parents ever have a romantic relationship with someone of the same sex?

Respondents who answered “yes” were classified as children of lesbian mothers (“LM”) or gay fathers (“GF”) (depending on which parent they recalled having such a relationship), and compared to respondents from “Intact Biological Families” (“IBF”), as well as to respondents from adopted, divorced, single-parent, and step-family environments. These categories were treated as mutually exclusive (more on that below).

The article’s single biggest weakness is this definition of same-sex parents, which in turn undermines much of the rest of the study. The definition is not just that a parent ever had a same-sex romantic relationship while the respondent was under 18 (which would be bad enough as an indicator of “gay or lesbian parent”), but that the respondent had to know about that relationship and choose to report it on the survey. So the conceptual definition of “same-sex couples” is thoroughly invalid, as some proportion of these parents was certainly not raising children in a same-sex household. In fact, virtually none of the respondents were actually raised in a same-sex household. The measurement is also subject to substantial and unknown recall bias, as respondents who have encountered later-in-life problems and/or have strained relationships with parents may be more likely to remember same-sex “affairs” and potentially even make them up in retrospect. The article is very careless in its use of language around this point, often referring to “gay men” and “lesbians” when it means “parents whose adult children reported that they had ever had a romantic relationship with someone of the same sex.”

The categories used are, as the article acknowledges, not mutually exclusive in real life, but treated that way for analytical purposes. But their non-mutual-exclusivity is precisely the point: people who fall into the LM and GF categories may quite plausibly also be in any other category as well, and indeed most of them almost certainly were, so the analytical strategy amounts to manufacturing a virtual world that doesn’t fit the data at all, then analyzing that world. Why do it this way? Well, “gay and lesbian parents, as well as heterosexual adoptive parents, can be challenging to identify and locate.” The article goes to great lengths to explain how hard they worked to find the necessary respondents. But trying really hard doesn’t help when the conceptual category is fatally flawed. The parents discussed aren’t, by and large, gay and lesbian parents!

The family diary data collected for NFSS and described in the article would go a long way toward evaluating who the parents were and what the associated family structures were, but those data aren’t used or presented here. So Regnerus had available to him data that would actually address the question of the family structures and relationship statuses of the respondents’ families of origin, but chose not to use them and instead to use a conceptually fatally flawed measure to create fictional groups. I will offer some hypotheses below on why he may have made that decision, but strictly on the science, had I been asked to review the paper I would have said there is no point in publishing this analysis when the same data would provide the opportunity for a non-fictional analysis. Gary Gates has made precisely this point as well.

One of the outcome measures, respondent’s sexual preference, is dichotomized as “100% heterosexual (vs. anything else)” (p. 758). This seems designed to show big differences, since kids with nontraditional parents may be more likely at least to question sexuality. No discussion is offered of whether the analysis is robust to relaxing the 100% rule.

There’s a lot made of the relationship between LMs and receipt of public assistance, both in family of origin and currently. The obvious question, even granting the bizarre definition of LMs, is causality – some alternative hypotheses would include that (a) women fleeing abusive husbands sometimes enter romantic relationships with other women; or that (b) given women’s subordinate status particularly with respect to employment, lesbian working-class households have a harder time financially. And yes, the article does make causal claims with this outcome, at least in the conclusion (see below). Perhaps more interestingly, looking at the likelihood of a respondent receiving public assistance based on his/her family of origin having done so: that likelihood decreases by 41% for IBFs (from 17% to 10%) but by 44% for LMs and by nearly 60% for GFs, suggesting that poverty is a more temporary condition for these respondents than for IBFs.
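For readers checking the arithmetic: the 41% figure is a relative decline, not a percentage-point drop, computed from the two rates reported for IBFs (the LM and GF declines follow the same calculation, though their underlying rates aren’t quoted in this post). Just to be explicit:

```python
def relative_decline(origin_pct, respondent_pct):
    """Relative (not percentage-point) decline between two rates."""
    return (origin_pct - respondent_pct) / origin_pct

# IBFs: family of origin received assistance at 17%, respondents at 10%.
print(f"{relative_decline(17, 10):.0%}")  # 41%
```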

In the “Summary of Differences” section on 764, there’s an attempt to catalog the number of possible between-group differences we might find, out of a total of 279 (the original 40 plus 239 additional). The article offers a count of how many such differences are found (at p < .05) between GFs and everyone else and LMs and everyone else. Recognize, though, that at p < .05, we’d expect about 5% of comparisons (roughly 14 of the 279) to come up significant by chance alone. Apparently nothing was done to correct for the likelihood of chance associations even though 279 is an awful lot of dependent variables!
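To put numbers on that (my back-of-envelope calculation, not the article’s), here is what 279 uncorrected tests at p < .05 buy you under the optimistic assumption that the tests are independent:

```python
n_tests, alpha = 279, 0.05

# Expected number of spurious "significant" differences if every null
# hypothesis were actually true:
expected_false_positives = n_tests * alpha
print(f"expected by chance: {expected_false_positives:.1f}")  # ~14

# Probability of at least one false positive somewhere in the batch:
p_at_least_one = 1 - (1 - alpha) ** n_tests
print(f"P(at least one false positive): {p_at_least_one:.6f}")

# A Bonferroni correction would demand p < alpha / n_tests for each test:
print(f"Bonferroni per-test threshold: {alpha / n_tests:.5f}")
```

In other words, a dozen-plus “significant differences” is roughly what chance alone delivers before any real effect enters the picture.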

In the last two pages we turn to the conclusion, where Regnerus just can’t help but make causal claims, even though elsewhere he calls “implying causation here—to parental sexual orientation or anything else, for that matter—… a bridge too far.” First is the idea that “diminished context of kin altruism” is the overarching causal principle that connects various non-IBF families to the various pathologies he delineates. Finally, and incongruously, the last paragraph leaves behind same-sex parents altogether and expands to suggest that family breakdown in general is responsible for “heightened dependence on public health organizations, federal and state public assistance,” etc. This is clearly a causal statement, entirely unsupported by his evidence.

The alternative hypothesis with reference to confounding is so well rehearsed that it almost doesn’t bear repeating, but here goes. Even if we accept the empirical claims to association between LM and GF families and the outcomes measured here, if the reason why IBF outcomes are “better” is because parents who “belong” in IBFs stay there, but those who don’t, for whatever reason, leave, then increasing participation in IBF families relative to other family forms will not produce these “better” outcomes. For example, if a biological-parent heterosexual couple stays together even though there is abuse, or even though there is no sexual attraction because one or both members of the couple is gay, it does not follow that the outcomes will look the same as those of happily-married, loving couples.
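The selection story is easy to simulate. In the toy model below (entirely mine; the parameters are arbitrary), family form has zero causal effect on child outcomes: outcomes depend only on a latent household-conflict variable, and high-conflict couples are simply more likely to dissolve. “Intact” families still come out ahead.

```python
import random
import statistics

random.seed(2)

N = 50_000
intact, not_intact = [], []
for _ in range(N):
    conflict = random.random()                 # latent household conflict
    stays_intact = random.random() > conflict  # high conflict -> dissolution
    # Outcome depends only on conflict, never on family form:
    outcome = 100 - 30 * conflict + random.gauss(0, 5)
    (intact if stays_intact else not_intact).append(outcome)

print(f"intact families:     mean outcome {statistics.mean(intact):.1f}")
print(f"non-intact families: mean outcome {statistics.mean(not_intact):.1f}")
```

The gap (about ten points here) is pure selection; mandating that high-conflict couples stay “intact” would not transfer the intact group’s outcomes to them.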

The science of the article, therefore, fails for two major reasons, either of which is probably a fatal flaw on its own:

  • First, the measurement of the key independent variable (having a gay or lesbian parent) is conceptually incorrect and subject to confounding through recall bias;
  • Second, the analytical strategy assumes mutual exclusivity among synthetic categories that are certainly not mutually exclusive.

As far as I can tell, a perfectly adequate theory that is thoroughly consistent even with this rather strange analysis is as follows. Social policy is based around supporting the IBF. There are substantial costs–financial, emotional, and opportunity–to any lifestyle other than that. Those costs vary by time, place, and specifics, but each such family form is marginalized in important ways. If we are to believe that the LM category means anything at all, apparently lesbian relationships are particularly associated with marginalization, again both financially and socially, and so children who remember their mothers having lesbian relationships are somewhat more likely also to report a variety of other marginal sequelae. This is consistent with Regnerus’s statement on 764, referring to children adopted by strangers: “Given that such adoptions are typically the result of considerable self-selection, it should not surprise that they display fewer differences with IBFs.” This theory, of course, directly contradicts the so-called “diminished context of kin altruism” theory discussed above, and could reasonably be taken as evidence in favor of same-sex marriage and open same-sex parenting, since each of these would tend to result in “considerable self-selection,” including into adoption. Others have made this point, including Will Saletan and Ilana Yurkiewicz.

Returning, then, to the “no-differences” hypothesis, which theoretically motivated the article and which Regnerus, in his self-styled FAQ, says he thought was “quirky”: Following this article, there is no more evidence than there was before of any systematic deficits for children of gay and lesbian parents. The article therefore consists of bad science that offers no insight into gay or lesbian parenting, the ostensible reason for its existence.

Why, then, was it published at all, and why has it received such attention? In the rest of this post I offer some thoughts on the political ecology of the article and its reception.

The funding

Several critics have noted that the research was funded by large grants from two very conservative sources, the Bradley Foundation and the Witherspoon Institute. In general, I do not believe that “follow the money” is a particularly useful way of evaluating either research or politics. Indeed, in many cases I think criticizing the funding source is a way of making an argument ad hominem, that is, discussing who’s saying something instead of what’s being said.

That said, I think there are some important issues here. First, what was the foundations’ goal in making the grants? The article says that they weren’t involved in the research design, and Regnerus’s FAQ just says: “And the Ford Foundation is a pretty liberal [institution],” drawing the parallel. But unlike Ford, both these institutions have explicit political purposes in contemporary politics; see, for example, Bradley’s list of 2010 public policy “research” grantees, the vast majority of which are not academic research by any stretch. In other words: the foundations must have had some reason to expect that NFSS would provide ammunition for their political agendas, since they essentially only fund things that underwrite those agendas. Regnerus’s personally-stated views and track record of promoting “traditional” sexual morality are probably part of what made the funders comfortable with him, his best-known prior work being a silly essay in Slate on sex markets on campuses, which was in turn the subject of a fabulous take-down on Crooked Timber.

Regnerus has said that government funders–generally considered the most prestigious, rigorous sources of research funding–“don’t want to touch this stuff,” and that “I actually don’t think a study like this would fly at NICHD or NSF.” This seems unlikely to me (see, e.g., this NIH RFA). Was there any attempt to garner support from sources without an ax to grind? I think it’s a better bet that the research design wouldn’t have passed peer review at a serious science funder such as NIH or NSF, and that Regnerus preferred not to have to modify the research design in order to qualify. One prominent scholar told me:

I was asked to serve on this panel [for NFSS] but declined, explaining to Regnerus that it was clear that the design of the study was biased toward finding bad outcomes for LGBT parenting. Unless there was a possibility that the basic design could be altered (which he indicated there was not), I said that I could not participate. I also questioned his qualifications for serving as a PI on this study.

In my view, all this is evidence that the study was designed to achieve results that could be spun in favor of the funders’ chosen outcomes. If I’m right about that, then the critique is not due to the funders per se, but rather due to the scientific decisions that made the funders comfortable with offering the funding.

Journal review/timing irregularities

Several critics noted that the turnaround time between submission and publication was extremely fast for social science, and way out of whack with the equivalent times for other articles in the same journal and issue. I asked the journal editor, Jim Wright, about this, and he said:

The ‘average’ time between submission and acceptance that is being bandied about in the blogosphere includes the time authors sit on their R&Rs and is therefore misleading.  We always ask reviewers to get their reviews back within four to six weeks.  Some do, many don’t…. Regnerus’s reviewers were quick, and so was he.  (For the record, it was obvious to me that the Regnerus submission had been heavily reviewed and rewritten many times before it was submitted, as one would expect when dealing with such a hyper-charged issue.)

SSR is a respected journal (I’m proud to have published there), and Wright an accomplished scholar and editor, but I do think the speed of the reviews is so unusual as to be worth further consideration. SSR is unusual among social science journals in using single-blind review, meaning the reviewers see who the author of the submission is, and the editorial board contains at least two members (Brad Wilcox and Chris Ellison) who are generally sympathetic to the political leanings of the project and the article. Were potential reviewers primed ahead of time, either by the author or others? Were they in some way connected to the project already? Wright told me:

As you know, peer review is an anonymous process.  There were seven reviews of the two papers, only one of which, so far as I know, had previous ties to the project….

And later:

One reviewer refereed both papers.  And in case it matters, it now appears that two of these six had prior ties to the Regnerus project, although I was only aware of one of these at the time.

Good to know, but it does seem to me that even one reviewer with previous ties to the project is a conflict of interest on the part of the reviewer.

Philip Cohen and Neal Caren have pointed out that the timeline is not just fast but virtually impossible, since the date the final data were delivered to the project was 23 days after the original submission and just 5 days before the revised submission. Regnerus says in a response to Cohen’s post that he just submitted based on incomplete data, then added the final data in at revision time because “the story didn’t change.” This seems particularly surprising, though, given editor Wright’s observation that the manuscript had clearly been through much editing and revision before submission.

All this is to say that the publication process is very unusual for social science, but I don’t see any “smoking gun” to suggest that the journal did anything wrong. However, given the poor quality of the article itself, its reviewers clearly were asleep at the wheel. Jim Wright also told me he plans to publish “serious, civil, science-based critiques” on the article in a future issue of SSR.

The spin

Based on all the above, I think it’s safe to say the main point of the article was to provide an opportunity for political spin, which Regnerus wasted no time in launching. From a philosophy-of-science perspective, what a scholar who favors a particular hypothesis–as Regnerus clearly does–should do is to try hard to falsify that hypothesis. That, in turn, would make the surviving hypothesis that much more believable. Regnerus is, of course, far from alone in not having taken that course of action, but the fact that he didn’t lends credence to the theory that the article’s goal was to provide the funders and allies with political ammunition. As I suggested above, I think this is likely one of the reasons for choosing the fictional approach using the ever-had-a-same-sex-relationship variable instead of carrying out a serious analysis that actually addressed the real issues using data that were available in the same dataset.

The right-wing blogosphere wasted no time trumpeting the findings, at National Review (also here), Fox News, and the Heritage Foundation, among others. Many of these articles are cataloged on the NFSS homepage. Among the headlines, quite a number refer to “same-sex parents” and “gay parenting,” both of which are literally inaccurate even on the study’s own terms. Others refer to “gay” parents, which is also surely inaccurate, but it’s an inaccuracy drawn from the article itself, so I suppose it’s slightly more justified.

There are also lots of good, substantive, and solid critiques of the research from both political and social scientific angles, including by Philip Cohen, CJ Pascoe, Christine Woodman, and the Box Turtle Bulletin. The overwhelming sense of the journalistic coverage as well (see, e.g., The New Yorker, The New Republic, and Slate) is that the research is slipshod and that, even if we were to accept it, the gratuitous right-wing spin is certainly not supported. The position of Will Saletan–a thoughtful, hardly left-wing, commentator–is actually that the findings provide support for same-sex marriage, since the instability that is the likely mechanism for any demonstrated deficits would be mitigated by removing legal barriers to same-sex families. Editor Jim Wright’s “standard take,” which he sent me, is similar:

The children studied in this survey were raised in an era when it was legally impossible for their parents to form normal marital unions, when gay people were subjected to hostilities and prejudices of the worst imaginable sort, and when their children could expect to be stereotyped and vilified by their peers and others. The hypothesis that these children would not suffer lasting effects from this sort of social environment seems implausible in the extreme. I do not see that is damaging either to the parents or children to call attention to the formidable difficulties gay parents must have faced (and still face) in trying to raise their children, or to the consequences for these children that are still detectable years and even decades later. To the contrary, these strike me as precisely the realities that must be acknowledged and faced if we are ever to progress beyond our current heteronormative bigotries.

This position, of course, is utterly absent from both Regnerus’s spin–which is, essentially, that two-parent, married, opposite-sex families are the gold standard for family life and are causally related to children’s success–and from the right-wing echo chamber that has picked up the article and run with it. The defenses of the critiques have been utterly predictable. Heritage calls them “liberal intolerance,” preferring to consider it entirely as an ideological contest instead of reading the substance of the critiques. Regnerus, having begun and licensed the spin, retreats to a “who, me?” stance:

there’s no ‘Christian’ approach to sampling or ‘Catholic’ way of crunching numbers. Any trained methodologist, data manager, and statistician can locate the same patterns I reported.

But, of course, any serious sociologist would recognize that these patterns, as reported, are meaningless and misleading.

Finally, a group of sociologists–mostly sociologists of religion, who generally tend to agree with Regnerus on social matters–offer a defense of Regnerus on the grounds that, well, other studies are bad too. They call their response “social scientific,” but it is only such insofar as the authors are social scientists; no actual social science is brought to bear on the matter. They do point to an interesting, forthcoming study by Daniel Potter, which does a far better job addressing similar questions–academic achievement by children of same-sex parents–using different data. The defenders say Potter’s study “comes to conclusions that parallel those of Regnerus’s study,” but in fact the Potter study shows that all the differences observed in children of same-sex couples are fully explained by differences in family instability! In other words: stable families are good for kids, and the sexual orientation of their parents doesn’t matter. Imagine the headlines if that had been Regnerus’s finding!

Author: andrewperrin

Johns Hopkins University - Sociology and SNF Agora Institute

79 thoughts on “bad science not about same-sex parenting”

  1. Just to be clear, many sociologists of religion have liberal political views, support the rights of same-sex couples, and oppose incorporating our political beliefs into our research. Your blog never says we are all conservative. Nonetheless, I worry that our research only becomes controversial when it has a conservative bent, and then conservative sociologists of religion defend it, which makes us all look like we are practicing Christian apologetics.


  2. Anyone out there know much about this Knowledge Networks panel that is used in this study (and, likely, will be used in future studies)? My understanding is that people are recruited into a panel over the phone, and then paid to take surveys (since clients pay Knowledge Networks to do surveys for them). Anyone know more? Do subjects know what the survey is about before they consent? Does the panel continue to exist, being paid for survey after survey? I’m curious here.


    1. As far as I know, the KN panel is a relatively strong data source. I did in-depth interviews a couple summers ago with people who were part of the sample. KN contacts a random sample of people, inviting them to join, and asks them if they have access to a computer and an internet connection to take surveys. If they don’t, these are provided for them. I think that it’s generally the same sample used across surveys, as my interviewees regularly participated in surveys. The email invitations might provide a general theme – and certainly this one might have said “New Family Structures Survey” – but not detailed information. I think, as Andy points out, that the measures/interpretation are much more problematic than the sample or recruiting strategies.


  3. What I’m curious about is: given the study is seriously flawed and given there’s apparently some “irregularities” in the review process, should there be a call for a retraction by the journal and/or author? What is the threshold for a retraction?


  4. Knowledge Networks is a snake-oil selling marketing firm which recruited a non-random sample of internet trolls which they then matched to population parameters to make it “nationally representative.” They keep their Cheetos munchers filling out surveys on whatever they get contracted to do. It’s a good place to get quick, cheap, bullshit. No study should be published in a legitimate peer-reviewed journal using data collected in that manner.


    1. This is false. KN is based on a sampled population. There was a POQ article comparing properties of several Internet company samples, for anyone who cares to be actually informed on the issue.


      1. This is not a dig on KN, but I noticed this is the breakdown of “lesbian mothers”‘s race/ethnicity: 45% white; 26% black; 17% hispanic; 12% other. Something.


      2. Yes, I read the POQ article; it doesn’t say what they did with any specifics, and instead does a bait and switch about how to evaluate web-based “sampling.” I’ve also perused the KN “technical reports,” which are vague about several things. I was wrong on one thing: most (but not all) of their panel was recruited based on a probability sample of households. But we’re not really told what the response rate was for recruiting participants into this strange marketing endeavor. So, please, if I’m missing something, tell me—what is the response rate from that supposed probability sample? And how do you justify web-recruited additions to the sample? What is the attrition rate for that? How should we think about voluntary and involuntary (?) attrition? I read the POQ piece the same way I’d read a drug study authored by people at Glaxo…


      3. Having errors in the raw data doesn’t make the survey bullshit. But you gotta clean it up before you analyze it. If you’re studying virginity, you don’t ignore the inconsistencies, you come up with a rule for handling them. And if someone gives a lot of bad answers you have to consider dropping them. And if that messes up your sampling and weighting scheme, you have to deal with that, too. And you have to look for patterns in the errors – for example, why are 25% of the lesbian mothers Black? Real or not real? If you ignore this your analysis is trash.

        Substantively, this false positive problem is huge for this bogus paper, but also for the “new family structures” survey altogether. It doesn’t seem salvageable. This exact problem seriously undermined the same-sex-couples estimates in early Census data, as Gary and later the Bureau’s own analysis showed. I don’t know why that didn’t occur to me. In the Census case random error on sex was a big problem – and that problem is probably a lot smaller than this.


    2. KN is the vendor we use for TESS, so I have a pretty extensive amount of experience with them (> 100 studies). Given that there are funding agencies, collaborators, and investigators involved with that, I should probably write a proper post sometime about my experience with them if I want to get into this in detail, and I probably shouldn’t be throwing out figures if I’m not going to get into it in any detail.

      But, yes, you are absolutely correct in supposing that their “response rate” for individual surveys may cause sociologists to do a double-take, although sociologists can have some very peculiar ideas about both the principles and practicalities of survey error. To be a respondent in a KN Panel survey–which is what I know about, I haven’t been following the Regnerus affair–one has to agree to be empaneled and do the core profile survey and do the particular investigator’s survey. More importantly, though, their completion rates are reduced by coverage improvements; by far the easiest way to improve response rate is to kick hard-to-survey folks out of your denominator.

      You appear to have very strong negative feelings about KN, to the point of apparently believing that all work using them should be banned from peer-reviewed social science. I’d be interested, publicly or privately, in whether that’s based on anything more than the response rate issue. TESS’s experience with KN has been satisfactory to date, which is why we’ve kept coming back, but of course our ears are always alert to others’ experiences.

      That said, TESS is about survey *experiments*, so our primary competition are studies using only college students or, increasingly, subjects from platforms like Mechanical Turk. I wouldn’t support KN taking over the GSS anytime soon–although ANES will be doing a parallel KN administration this year–but response rate issues are much less pertinent for us than for non-experimental population studies.


      1. When I first read what they are doing and how they recruit subjects, my first reaction was “what kind of cheetos-munching weirdo would agree to participate in that?” I think we have to ask: are such cheetos munchers representative of the population we seek to describe, or of the inferences we’d like to make about those populations? Gee, I need some kool aid after all those cheetos!


      2. I will admit, I don’t really know what it means to be a “Cheetos muncher.” My own Cheetos consumption is circumscribed by the junk calories and orange hands.

        Anyway, your position could be applied to the whole of contemporary non-governmental telephone surveys (honestly: who talks on the phone anymore?). And basically any SAQ study. That may be your point, I’m not sure.

        Of course, it’s not acceptable for social scientists to only question the continued validity of surveys in response to media coverage of findings that folks politically disagree with. If the methods are the same, the political baby and bathwater have to end up in the same place.


      3. I don’t know much about the details of it, but I do find Darren’s rather vociferous antipathy to KN curious. Given how difficult it is to recruit and maintain respondents for any survey panel, I don’t think the cheeto-eating objection holds water.


      4. Somebody needs to enter the Cheetos thing into urban dictionary. Also, if trying to conjure a mental image of the type of person who is most eager to participate in low-payment Internet surveys, it may be helpful to consider that 70% or so of folks who join an open Mechanical Turk call are women. It’s more a “stay at home mom” thing than a “guy who lives in his parents’ basement wearing a diaper” thing.

        Incidentally, I just now received an e-mail from GfK, the company that purchased Knowledge Networks about six months ago, that they are going to be retiring the “Knowledge Networks” brand. So, whatever opinions you’ve formed about KN to date, be prepared to transfer and modify as appropriate to “KnowledgePanel” and whatever other names they have for data products. That part is a story in progress.


      5. Apropos of “who talks on the phone,” I know I’ve become an aggressive non-respondent even to legitimate surveys. There are just too darn many telemarketers, political pollsters of a wide variety of stripes, and other strangers I don’t want to talk to ringing my phone every evening. We’ve just installed caller ID. My children use only their cell phones and rigorously monitor who has the number because they don’t want to pay minutes for somebody else’s purposes. I’m not sure any valid survey research is ever going to exist anymore.


      6. The last poll call I got offered me no less than a free cruise for answering four ridiculous questions. On the flipside, one of the objections that Republicans have to the American Community Survey is that participation is required by law.


      7. Interesting issue with the KN panels. Scott Rose suggests that panel respondents are used to the screener questions, so if they want the incentives for participating they need to answer “yes” to a screening question such as, “From when you were born until age 18 (or until you left home to be on your own), did either of your parents ever have a romantic relationship with someone of the same sex?”

        I don’t know how their incentives work, but it seems like this could be a problem. Here’s his post:


      8. This is the same person who’s compared the BYU Sociology department to the Ku Klux Klan and keeps repeating that it’s unethical for Regnerus not to be pursuing NIH funding even though their funding line is presently north of 10 percent (and he would have no chance in hell even with a lower funding line), right? I’d be interested in the basic question of whether folks think there is such a thing as “too far” where Regnerus is concerned.

        That said, this is a very interesting point. I don’t know what the Regnerus survey looked like and so how the screen worked. Does everything really depend on that one question? If that’s the case, I’m not even sure you would need an incentive-screen scenario, although that makes it worse.

        As I was just discussing with someone in an e-mail exchange, the KN Panel core profile has a transgender question, and I think 1% of the sample says yes to that, which I take to be obviously inflated and reflective of the fact that one really needs to be careful and thorough when identifying rare populations in panels (especially online panels)–we are talking about a situation where less than 1% measurement error could yield big problems. I don’t know why that point hasn’t been hammered home more in the critiques of Regnerus’s study, frankly.

        Anyway, I don’t want to come across like some wild-eyed defender of all things KN, especially because of potential issues like this. I piped up because somebody said something that was patently false about how the KN panel was drawn and proceeded from that to a claim about how nothing using KN should be acceptable through peer review. This isn’t quite as misguided to me as believing that the way to rebut the Bell Curve is to discredit NLSY79, but it’s in the ballpark.


      9. Jeremy – yes, the full Regnerus article relies on that one question, for reasons that aren’t well spelled out in the article.

        My reading is that Rose is very extreme and comes across as unreasonable. The decision to use KN is certainly not an example of scientific misconduct, as he seems to imply. There’s enough wrong with the article and the decision to publish it without adding new, questionable accusations.


      10. No endorsement of the full Rose treatment should be inferred from my quoting his post. I thought it was an interesting point I hadn’t thought of, and credited him for it.


      11. the full Regnerus article relies on that one question, for reasons that aren’t well spelled out in the article

        Wow. Ha. Well, one theory would be that relying on a single yes/no question to identify a very rare population means that one is going to get a lot of false positives who share a number of the negative characteristics that go along with being bad/inattentive/weird survey respondents. My guess is that’s a bigger source of potential bias than that the item doesn’t actually ask about same-sex parenting.

        I mean, am I right that even if 1 in 150 true no’s clicks the yes radio button, for whatever reason, this ends up being a large portion of the resulting total sample of yeses? Crazy, crazy.
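        Jeremy’s back-of-the-envelope worry can be made concrete with a quick calculation. Every number here is hypothetical (the study’s true prevalence and error rates are unknown), but the arithmetic shows how even a tiny false-positive rate can swamp a rare category:

```python
# Hypothetical illustration of the false-positive problem when screening
# for a rare population with a single yes/no item.
# Assumed numbers (NOT from the actual study): true prevalence of 1.5%,
# and 1 in 150 true "no" respondents erroneously clicking "yes".

n = 15000                  # panel respondents screened (assumed)
true_prevalence = 0.015    # share whose true answer is "yes" (assumed)
false_pos_rate = 1 / 150   # true "no"s who click "yes" anyway (assumed)

true_yes = n * true_prevalence                           # 225 genuine yeses
false_yes = n * (1 - true_prevalence) * false_pos_rate   # ~98.5 spurious yeses

share_spurious = false_yes / (true_yes + false_yes)
print(f"Spurious share of the 'yes' sample: {share_spurious:.0%}")
```

        Under these made-up assumptions, roughly 30% of the screened-in “yes” sample would be false positives, which is exactly the “large portion” being described: a misclassification rate that would be trivial for a common trait becomes devastating for a rare one.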


      12. Seriously, did you just try to compare KN to NLSY? You have a conflict of interest, and while I’m sure you have your reasons for wanting your data to be on the mark, to make a comparison between these KN data and NLSY is a disservice to real science. Neal Caren pointed out some really interesting distributions in the KN data analyzed by Regnerus….

        Q3: Two respondents had mothers more than eighty years old at the time of birth
        Q11: Eight respondents have lived with a dozen or more significant others for more than four months at a time, and one (Q100) has lived with 90 or more girlfriends.
        Q12_A/B. Four respondents have had eight or more spouses.
        Q46. Four respondents are retired.
        Q56. Five respondents weigh less than 80 pounds.
        Q82_F. Fourteen respondents use non-marijuana illegal drugs (e.g. cocaine) every day.
        Q87. Nine respondents were first arrested prior to the age of four.
        Q114. Twenty-six respondents had vaginal sex before the age of eight. [could be non-consensual, so may not be data errors]
        Q117. Twenty respondents have had sex with more than 100 women, compared to only 16 who have had sex with more than 100 men.
        Q119. Three respondents had sex with more than fifty women in the last year, but only two had that many with men.
        Q131. Two respondents were pressured into having sex with their parent/adult caregiver for the first time after the age of thirty.
        Q132. Ten respondents have been pregnant a dozen or more times.
        Q135. Fifteen respondents had sex more than thirty times–in the last two weeks.

        I’m thinking it’s the Cheetos…..


      13. Those are troubling. People give bad responses all the time – the question is how do you handle them?

        My impression of Census surveys, for example, is they use a combination of in-survey flags – “whoa, did you just say you’re 150 years old?” – and post-survey corrections (“this guy said he’s an 18-year-old PhD who served in the military and listed his occupation as a dentist. Let’s just make that an unemployed high school grad.”) [Note: fictional examples.] In any case, it’s all more or less logical before it’s released to the public. That’s why you don’t get never-married women living with their husbands, children older than their parents, etc.

        So it’s hard to tell from these frequencies if they have bad respondents, or just bad data cleaning / editing.
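        The kind of in-survey flags and logical edits described above can be sketched in a few lines. The rules and field names below are invented for illustration; real agencies’ editing procedures are far more elaborate and governed by their own documented specifications:

```python
# Minimal sketch of logical-edit checks of the sort used to catch the
# oddities in the list above. All rules and field names are hypothetical.

def flag_inconsistencies(resp):
    """Return a list of logical-edit flags for one survey response dict."""
    flags = []
    if resp.get("age", 0) > 120:
        flags.append("implausible age")
    if resp.get("mother_age_at_birth", 0) > 60:
        flags.append("implausible maternal age")
    if resp.get("n_spouses", 0) >= 8:
        flags.append("implausible spouse count")
    if resp.get("age_first_arrest") is not None and resp["age_first_arrest"] < 4:
        flags.append("arrested before age four")
    if resp.get("never_married") and resp.get("lives_with_spouse"):
        flags.append("never-married but living with spouse")
    return flags

# A response that trips two of the rules:
print(flag_inconsistencies(
    {"age": 25, "n_spouses": 9, "never_married": True, "lives_with_spouse": True}
))
```

        The point of the comment stands either way: flags like these only help if somebody actually runs them and decides what to do with the flagged cases before analysis, which is where bad cleaning and bad respondents become hard to tell apart.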


      14. In NLSY, there are respondents who, in their late teens or 20s, shrunk several inches between successive waves of the survey according to their self-reported heights. In Add Health, there were respondents who reported having had sex in Wave I and being virgins in Wave II. There was clear over-reporting of students with an artificial limb in the original Add Health in-school survey. Now that panel data in the General Social Survey exist, one can see how there are a few respondents who switch zodiac signs and more than one might expect who switch reports about where they lived and what their families had been like at age 16. These are just the first examples that come to mind.

        Part of what it means to survey a population is that if you interview a lot of people, some portion of them are going to do screwy things. I get that there are some people jumping in to critique Regnerus who may be smart but might not always have that much experience with actual surveys. If the idea is that the existence of oddities and contradictions in raw population distributions undermine a survey in its entirety, a lot of demographers have a lot of grant money they need to refund to the government.

        My view is that using a single yes/no question to identify a rare population is an obvious fatal flaw, especially with a web survey format. If this was what was done in the Regnerus study, and if we are talking about something where the reported frequency of yeses is in the 1-2% range, the design is totally compromised by having even 1 respondent in 150 say ‘yes’ when their true response is ‘no’.

        Indeed, if the sample was really overrun by hordes of mewling sociopaths or incentive-addicted panelists or whatever the claim-du-jour is, it might be taken to imply a practical impossibility of asking a yes/no question that only 1-2% of respondents say ‘yes’ to.


      15. In a fit of nostalgia, just now, I looked back at the Add Health Wave 1 in-school codebook. 1.4% of respondents to the original in-school survey said “yes” to the question indicating that they “used an artificial hand, arm, leg, or foot” for the past twelve months (and 15 persons answered both “yes” and “no”). Fortunately, this was not interpreted as evidence that Add Health was “bullshit” or that the fine folks at North Carolina were “snake-oil salesmen” perpetrating a fraud upon American science.

        Consistent with what I proposed earlier about the nonrandom nature of folks who say “yes” on surveys when the true answer is “no,” my hypothesis would be if one looked at the people who said “yes” to the artificial limb question, they would be worse on a variety of different outcome variables, regardless of whether there is any true effect for the minority of “yes” respondents who actually have an artificial limb. If I understand what Andy has said right, Regnerus’s study is like if somebody had uncritically plowed ahead with using the artificial limb question as a measure of disability.
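        If misreporting is concentrated among careless respondents who also do worse on outcomes, a rare-population screener can manufacture a group difference out of thin air. A toy simulation of the mechanism proposed above (every parameter invented for illustration):

```python
# Toy simulation: careless respondents are both (a) more likely to click
# "yes" on a rare screener and (b) worse on an unrelated outcome.
# All parameters are invented; nothing here comes from any real survey.
import random

random.seed(0)

def respondent():
    careless = random.random() < 0.05      # 5% are careless responders (assumed)
    truly_yes = random.random() < 0.015    # true rare trait, 1.5% (assumed)
    # careless folks misreport "yes" 10% of the time; others never do
    reported_yes = truly_yes or (careless and random.random() < 0.10)
    # the outcome depends ONLY on carelessness, never on the true trait
    outcome = random.gauss(-1.0 if careless else 0.0, 1.0)
    return reported_yes, outcome

sample = [respondent() for _ in range(200000)]
yes_out = [o for r, o in sample if r]
no_out = [o for r, o in sample if not r]

mean = lambda xs: sum(xs) / len(xs)
print(f"mean outcome, reported-yes group: {mean(yes_out):.2f}")
print(f"mean outcome, reported-no  group: {mean(no_out):.2f}")
```

        With these parameters, the reported-yes group looks substantially worse on the outcome even though the true trait has no effect at all, purely because the false positives are drawn disproportionately from careless respondents. That is the artificial-limb scenario in miniature.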


      16. This general methodological exchange about surveys is great. It seems like it could be excellent lecture fodder in a methods class. It is especially helpful to think about comparing the size of the response error with the frequency of the response or (I presume) with the magnitude of the effect.

        I work with aggregate data from government reports. If you cross-check carefully, you can find obvious mistakes in that, too. Much of that data begins as someone filling out a paper form–lots of potential for error there, similar to sober respondents who just get confused and make a mistake in their answers. And then there are the deeper factors of shifting definitions which occur when bureaucrats have motivations to make the numbers go one way or another.

        Whenever I lecture about racial patterns in drug use (White kids use illegal drugs at much higher rates than Black kids), my White students suddenly start trying to remember their methods lessons and come up with all sorts of response error hypotheses for a pattern that contradicts popular stereotypes. They typically insist that high school students routinely inflate their self-reported drug use just for the heck of it. I’m sure kids do this. The question is whether this can explain the race difference. That involves a series of other reality checks with other sources of information which lead me to conclude that the reported race difference is real despite undoubted levels of response error for any question directed at teens.


      17. The obvious incorrect answers in the data don’t undermine the data’s overall validity, as Jeremy and others are pointing out, both because some amount of false answers is endemic to the survey form and because the point of large samples is that they don’t privilege any one or few data points. However, the question of whether these poor responses are nonrandomly distributed is important, and Jeremy’s point once again matters a lot: finding a very rare outcome in data where there is even a small nonrandom influence on misreporting that outcome certainly undermines the validity of interpreting the significance of that outcome.

        To extend Jeremy’s analogy on the artificial limbs: Regnerus’s study is like plowing ahead with the artificial-limb question as valid and using it as a measure of disabled veteran status, since, hey, a lot of people with artificial limbs are probably veterans and it’s hard to find disabled veterans in a national random sample.


      18. But the Add Health artificial limb crowd is still smaller than the “skip school without an excuse nearly every day” population who filled out the in-school questionnaire. For the questions I know best, I think the vast majority of people who said in a national survey that they went to any specific political demonstration did not go. Studying rare populations in national surveys is hard, especially when one answer is more fun than the other.

        It’s my understanding that these sorts of problems—identifying rare populations with survey data where measurement error is likely to severely bias estimates—are well known among those who study LGBT demographics, with both the Williams Institute and the Census Bureau producing research reports on the issue.

        For example, here’s a quote from the “Measurement Error Issues” section of a 2009 report, “Best Practices for Asking Questions about Sexual Orientation on Surveys”:

        “Given that sexual minorities often represent less than 5% of the population, the false positive problem should always be considered in working with such data. Researchers should be vigilant in considering the degree to which errors in the larger population could yield to misclassification into the smaller population.”


      19. Reminds me of Phil Ochs’ back-of-the-envelope treatment of participation in the 1968 Chicago riots:

        Oh, where were you in Chicago?
        You know I didn’t see you there
        I didn’t see them crack your head or breathe the tear gas air
        Oh, where were you in Chicago?
        When the fight was being fought
        Oh, where were you in Chicago?
        ‘Cause I was in Detroit.


      20. Just to clarify, what I think is pedagogically valuable about this debate is the way it disentangles different issues. So the original methodological critique stands on two grounds: (1) interpreting “has your parent ever to your knowledge had a same-sex affair” as “raised by same-sex parents,” and (2) the likely response error is greater than the prevalence of the trait. But critiquing a data set simply because there are response errors isn’t appropriate, as all data contain some error. The magnitude of response error needs to be compared to the response error rates of other surveys. I can see walking students through this logic being of great value.


      21. OMG! I have been so naive. Some people must have really strong opinions about sample weighting procedures, I thought. Or maybe they are just really turned off by the idea of companies that are involved with market research.

        And, then today, out of the blue, it hit me: “I wonder if BRAD WILCOX has ever done a study using Knowledge Networks.” And, lo: yes!


  5. Thank you for this, Andy. Very well done.

    But I want to go a little further, to the issue of tainted reviews. If it’s true that two of the reviewers had direct ties to the study itself, what about indirect ties? This is dicey, but important. We need to work on our disclosure and transparency standards.

    The funders, Templeton, Bradley, Witherspoon, have a solid network of academics whose research they fund. The publications that result are used to obtain and legitimate high prestige tenured jobs. Etc. (See: Domhoff, _Who Rules America?_)

    Yes, I know and respect some people who have gotten this money, and believe their work is not corrupted by the funding sources.

    BUT because the foundations don’t operate with the same accountability that government agencies do, we don’t know how it works. In that Q&A you cite in which Regnerus waves off applying for federal funding, he says, “in informal conversation about it, Witherspoon expressed openness to funding it.” Who was at this cocktail party at Princeton? I’m not saying I really need to know, but if you were, and you were asked to review the research for SSR, should you have disclosed that you were?

    In that defense of Regnerus you link to, at least half of the 18 signers have obvious material ties to Witherspoon, Templeton, Bradley, or related foundations (e.g., the Louisville Institute and the Lilly Endowment) — but most were not listed on this project’s letterhead. Should they have been reviewers on this article?

    Where do we draw the line, and how should Wright or other editors draw it? For example, I have gotten a little money from Russell Sage (and asked for a lot more), and from John Edwards’s anti-poverty thing at UNC — and then I have gone on NPR and promoted an expansion of the welfare state. Should I disclose that (it’s all on my CV) if I am asked to review an article on poverty? Etc.


    1. The question of funding is very thorny, and I tend to come down on a far more permissive side than many of my colleagues do. Basically: I don’t believe there is such a thing as tainted money, only tainted terms. Years ago, I led a group of UNC faculty who challenged the Chancellor to reject money from the Pope Foundation for a Western Civilization curriculum, and then as now I was adamant that the problem was not the funder but the strings attached to the funding. In my ideal world, universities/departments would claim a tax on all outside support (as they generally do on federal money already) and set aside that tax to support research that is less popular with wealthy funders, thereby reducing the inequality inherent in differential ability among scholars to woo outside funding.

      Yes, there is such inequality, and that inequality is unfair. As Darren Sherkat writes below, it is partially political inequality, that is, big grants available from conservative funders. It is also scientific inequality — for example, much more federal money is available for studies of international health than for sociological theory. These inequalities are unfortunate, but they are the reality of today’s academic world.

      Disclosure of funders is important–both for authors and reviewers–but I do think Wright and other editors should be particularly cautious when having a study this controversial reviewed, and it sure doesn’t look like he did so in this case.


      1. One thing I realize we should make explicit now is: When there are tight research networks with lots in common in terms of funding, co-authorship, etc., it is a good idea for editors to get reviewers from OUTSIDE the niche. Sacrifice some expertise for some distance, in other words.

        In this case the gaping holes in the paper are readily apparent to people without specialized knowledge of the parenting-and-family-structure literature. If the private foundations are going to hide behind non-transparent processes, we need to take independent steps to try to keep the research honest.


  6. Yeah, Phillip, how much did Russell Sage give you!!! HA! The false equivalency claim is made directly in the Regnerus paper: “yes, I’m funded by far right-wing foundations, but so are ‘they’” — only “they” aren’t. Can you name me an early-career, left-wing, quantitative family person who has a private grant of nearly $1 million? I’d love to know, seriously. I need to know, so fess up, you liberal-funded commies!


      1. That’s about what Lisa Keister and I managed to beg from Russell Sage for a conference. Not exactly like someone dumping $795k on you for a cover letter and the secret Christianist handshake….


  7. There’s an amusing, but ultimately unsatisfying, discussion between Saletan and Regnerus on Slate, beginning here and continuing for several back-and-forth articles.

    I say ultimately unsatisfying because Saletan doesn’t actually ask the hard questions, like why the fictional categories were created, why the article chose to use the ever-had-a-same-sex-partner question instead of actually paying attention to the time-diary data, or why Regnerus consistently uses causal language to describe emphatically non-causal results. This, in turn, allows Regnerus to maintain a smug, self-aggrandizing tone claiming scientific disinterest in the face of overwhelming counter-evidence.


  8. Dear Dr. Wright

    I am a regular reader of your blog at Black, White, and Gray.

    Since you are not allowing any discussion of your recent entry, “In Appreciation of Mark Regnerus,” I thought it might be worth writing you directly and posting on scatterplot. It’s not particularly surprising that we disagree on the merits of the article, which I’ve written about extensively on scatterplot. But one thing you wrote is not just a disagreement but factually untrue, and tellingly so. You write that Regnerus “conducted a study to test the association between same sex parents and outcomes….” This is incorrect. There were virtually no children of same-sex parents in the sample, as has been amply documented.

    The study, at most, tests the association between parents whose children remember their ever having had a same sex romantic relationship and those children’s outcomes. The fact that you — and Mark — conflate this into “same-sex parents” is illegitimate.

    I note, as well, that your entry provides no actual references or links to the critics you claim are bullying, anti-intellectual, or take a “data be damned” attitude. Given the enormous amount of information now available about the rigged character of the study, the enormous analytic errors, and the willful misrepresentation of the study and its findings to an eager audience, I would think you might want to dismount your moral high-horse.

    Best wishes,
    Andy Perrin


  9. Followers of this topic may be interested in the measured and considered thoughts of the country’s premier sociologist of religion, pasted below from email.

    I love this witch hunt. Highly selective going after only highly specific parts of all the crap in our discipline on ideological/politics grounds, it’s awesome. I say let’s ramp up a progressive Inquisition, take out some people’s careers, in fact. Doesn’t matter that tons of what passes for social science is actually shit. Targeted attacks can effectively produce the uniformity we’re after. Shit that supports our political interests is okay, of course. Still, all the push-back has to be done under the banner of “good science,” rather than the political issues really driving this, otherwise we might look bad. But we can of course count on our colleagues in the discipline to ignore the political reality in all of this and to buy the “science” angle, given the knee-jerk reactions they tend to have to the “right” views. This’ll teach those damn right-wingers to come up with the “wrong” findings, however much their methods actually improve on what was done on the issue before. Our cause is so damn right, we’re justified in whatever has to be done to throw these people under the bus. But let’s not talk about it so overtly and frankly, otherwise we might feel bad.

    Still, I’m so inspired, I even wrote a poem to commemorate the occasion:

    Chris Smith © 2012

    Many ways there are, indeed, to hold an inquisition.
    The rooting out of heresy takes on a new position.
    For different doctrines, of course, the heretic is blamed,
    But whether wearing robes or ties the purpose is the same.

    How far we’ve come in modern days beyond the rack and tongs,
    To purify communities confusing right and wrong.
    O’er different doctrines, of course, recanting is required,
    And when repentance fails to come, it’s time to light the fire.

    In every case the obstinate in flames takes their own stand.
    The righteous inquisitioners, though, have no blood on their hands.
    In fact, the very thought of inquisition would appall.
    So natural the process no one notices it at all.

    Horrendous were the days when priests imposed their orthodoxies.
    More difficult to see the ways that some become their proxies.
    For in the end few groups can tolerate dissenting minds,
    And so, despite Enlightenment, they hunt and purge in kind.

    Good Conrad and fair Golding, where have your lessons gone?
    Invisible the mur’dring heart that follows our tracks home.
    The hands equality and freedom ever do inspire,
    Know not the cause for being singed (from having lit the fire).


    1. I appreciate the poem (though I find it best to cleanse the palate between preamble and verse).

      Still, although I’m no expert on the Inquisition, I think comparing it to a critical blog post lets it off a little easy.


      1. A critical blog post is one thing, academic disciplinary proceedings are another. That is to say, it’s one thing for Andy or Phil to criticize Regnerus and another for Rose to (successfully) demand that UT investigate him.

        I appreciate that both Phil and Andy distanced themselves from Rose on this and wish that more of Regnerus’s critics would do the same.


    2. Good people all, of every sort,
      Give ear unto my song;
      And if you find it wondrous short,
      It cannot hold you long.

      In Islington there was a man,
      Of whom the world might say
      That still a godly race he ran,
      Whene’er he went to pray.

      A kind and gentle heart he had,
      To comfort friends and foes;
      The naked every day he clad,
      When he put on his clothes.

      And in that town a dog was found,
      As many dogs there be,
      Both mongrel, puppy, whelp and hound,
      And curs of low degree.

      This dog and man at first were friends;
      But when a pique began,
      The dog, to gain some private ends,
      Went mad and bit the man.

      Around from all the neighbouring streets
      The wondering neighbours ran,
      And swore the dog had lost his wits,
      To bite so good a man.

      The wound it seemed both sore and sad
      To every Christian eye;
      And while they swore the dog was mad,
      They swore the man would die.

      But soon a wonder came to light,
      That showed the rogues they lied:
      The man recovered of the bite,
      The dog it was that died.


    3. Wow, persecution complex much?

      Sometimes a cigar is just a cigar, and sometimes bad science is just bad science. Most bad research doesn’t get criticized loudly and publicly because nobody reads or cares about it except perhaps a handful of academics in related areas. This research was targeted for criticism because it was getting media and public attention due to a concerted effort by individuals associated with the research. I suppose those who think the research is deeply flawed should just sit by quietly and let them have their say? After all, Regnerus has studiously implored his critics to believe he had no political agenda in publishing this research… If we’re to believe him, why shouldn’t his critics be believed? If we’re not to believe him, perhaps he should be the first to speak “overtly and frankly.”


      1. Yes, Wright’s comment is spot on, and thanks for the link. I seriously doubt that anything like scientific misconduct happened here. But I, Philip, and others have provided sound reasons to doubt the findings and suspect the process. These don’t disappear because somebody cries “inquisition!” Regnerus made the question partisan with his very first article in Slate, and then Smith rushes in to charge the substantive critics with a political witch hunt? I call BS.


      2. yep… i signed the letter denouncing the study. but i’m with EOW about the fact that i don’t think it was misconduct. i think regnerus was sloppy, and i think political aims and (more likely) opportunism drove his work. but if i’m being honest i also think that a lot of work would meet those conditions (political commitment to a finding, and interest in finding it). that said, i also believe in our enterprise… like erik i think that better research will show regnerus’ findings to be as sloppy as we all know they are.


    4. By my count, prominent and powerful sociologists have now accused us of:

      -being Tea Party conservatives
      -performing a witch hunt
      -holding a (capital I) Inquisition

      for observing and writing down uncomfortable truths about the sociologists around us. Perhaps strangely, I feel unharmed by this. Perhaps that is because writing up a note is not really harmful.

      Please note that no one on this blog, nor in the letter I signed with 200 others, called for any firings or resignations. Pretending that we have is offensive and removes attention from the requests that were made:

      “We urge you to publicly disclose the reasons for both the expedited peer review process of this clearly controversial paper and the choice of commentators invited to submit critiques. We further request that you invite scholars with specific expertise in LGBT parenting issues to submit a detailed critique of the paper and accompanying commentaries for publication in the next issue of the journal.”

      These are reasonable requests. Why would Christian Smith use his substantial status and influence in sociology of religion to encourage the journal to respond to these requests rather than call it a witch hunt?


    5. Let’s take this step by step. 200 scholars signed a letter whose complaint was primarily lodged against the journal for failing to follow normal peer review process and proposing as a remedy that other scholars be allowed to publish criticisms of the article. The possible misconduct in this letter refers to the journal editor. This letter alleges no misconduct by the researcher beyond incompetence. Obviously well within the bounds of appropriate scientific scrutiny. Not a witch hunt.

      Bloggers like Phil additionally allege motivated incompetence (ignoring data that would have run counter to the PI’s agenda), tainted funding sources, timetables tied to political rather than scientific cycles, and fraudulent methodological reports by the PI. This is more personal, as it alleges bad motives and skirts into claims of intentional misconduct. This kind of take-down would definitely disrupt good working relationships among colleagues in the same department. He does not call for an attack on the employability of the target, but if these kinds of reports had been written about fast-track analyses running in another political direction, I can easily imagine defenders of the researcher feeling that the attack itself was excessively personal and politically motivated.

      Then there is the call for a misconduct investigation at UT. That certainly is an assault on Regnerus’s livelihood and it seems to me that a mild statement like EOW’s, while correct, may not go far enough. Distancing yourself from the investigation isn’t the same as saying that it is wrong and goes too far. Failing to speak up aggressively against this is perhaps not unfairly viewed as at least tacitly supporting it. I say this as someone who has not spoken up. I’m just trying to think this through dispassionately. Perhaps Chris Smith’s anger is appropriate in this context.

      And I think accusing him of being a “high status” person who is somehow attacking “lowly” bloggers is entirely disingenuous. At least some of the bloggers on this page have plenty of status.


      1. OW,

        Well said. I’m thinking perhaps a petition to the UT that would be CC’d to Rose, IHE, and CHE. The petition would reject academic discipline but acknowledge disagreement on the study itself. That is to say, something that could be affirmed by both the scatterplot folks and the Baylor letter folks. Here’s a first draft of language, but I’m open to any revisions:

        “We the undersigned reject any attempt to subject Mark Regnerus to formal academic disciplinary proceedings over his 2012 article published in Social Science Research. Although we disagree about the merits of the study itself, we agree that Regnerus is not guilty of scientific misconduct and find it disturbing that a non-academic critic has demanded a disciplinary hearing from the University of Texas. Whatever flaws the study may have should be adjudicated through the usual means of scholarly criticism, comment and reply articles, and most of all through additional studies by other researchers. We the undersigned have taken and will take different positions on the merits and methods of the Regnerus study, but we all agree that inquiry into controversial research questions should be intellectual endeavors not subject to the chilling effect of disciplinary hearings in the absence of evidence of clear ethical violations like fabricated data or abuse of research subjects.”


      2. Responding to Gabriel: I think I personally would be more happy signing a letter that was only from critics, that says we do have serious problems with this and feel that airing those problems is important, but the place to do this is in the arena of public and scientific debate, not administrative proceedings. I would not want to be party to a letter that blunted the force of the original critique, even as it draws a line about opposing administrative reviews of anything other than actual fraud.

        I said that I could understand why Regnerus’s friends would feel that he is under attack, because he is. Further, the point of this attack is to have a chilling effect on similar behavior in the future. I support this. I do think the response at the level of vigorous verbal dissection of his actions is appropriate because politicized studies of this type have victims, just as The Bell Curve had/has victims. Further, this study got a fast track into a journal via connections, further justifying the perception of a more coordinated political action, not just run-of-the-mill sloppy or politicized research reporting.

        But the same kind of politicized over-generalization of a study happened not too long ago around the idea that Wall Street bankers are psychopaths (see, e.g., press coverage citing the study, a citation to which was later corrected in the NYT). We already live in a world in which some scholars have been attacked and lost their jobs for excessive criticism of powers that be.

        So debates about who deserves spirited public criticism are, themselves, public debates, and we live in a world in which one side’s vigorous airing of debate is another side’s unwarranted attack. But seeing both sides does not, in my mind, mean that I should not take a side about which things deserve critique.

        The danger of bringing administrative sanctions into the fight is not only that they may get turned against your own side, but that they risk blunting the edge of critique, if you have to worry that your criticism (no matter how justified) might provoke some bureaucrat into going after someone’s job.

        So I’d edit your text to say:
        “We the undersigned reject any attempt to subject Mark Regnerus to formal academic disciplinary proceedings over his 2012 article published in Social Science Research. Although we are sharply critical of the merits of the study itself, we find it disturbing that a non-academic critic has demanded a disciplinary hearing from the University of Texas. Whatever flaws the study may have should be adjudicated through the usual means of scholarly criticism, comment and reply articles, and most of all through additional studies by other researchers. Inquiry into controversial research questions and vigorous criticism of others’ research on controversial questions should be intellectual endeavors not subject to the chilling effect of disciplinary hearings in the absence of evidence of clear ethical violations like fabricated data or abuse of research subjects.”


      3. hi OW,

        I was thinking that an agnostic letter would allow people to sign without having formed a firm opinion on the study (as I myself have not) and in particular had in mind trying to find wording that EOW would feel comfortable signing given the paraphrased portion of his response in IHE in which he disclaimed sufficient knowledge to make a substantive opinion. That said, I agree with you that the really important thing is to allow the critics of the study a forum to oppose Rose’s excesses, which would rebound both to their own credibility and to the principle of open inquiry. I further agree that it would be a shame were some of the critics to refrain from signing a petition against the UT hearing out of discomfort with language that was agnostic on the substance and in this sense your wording has the advantage.


      4. I would not sign such a statement, and here’s why. I am unable to evaluate whether actual scientific misconduct occurred. At least one attentive reader believes it did, and UT has a process in place for investigating this internally. In the unlikely event that Regnerus did commit scientific misconduct, I do not oppose institutional sanctions for that. My bet is that the inquiry will find nothing rising to that level. But an interested reader filing a complaint is not inappropriate if he honestly believes — as he appears to — that misconduct occurred. I have no basis for such a belief so have not joined the complaint, but I have no reason to assume that the inquiry itself is illegitimate.

        Similarly on OW’s claim that Philip’s posts constitute inappropriate personal attacks: each of his posts contains specific evidence. It is not implausible that Regnerus acted in bad faith, so it can’t be inappropriate to present evidence to that effect.


      5. Andrew: I appreciate your opinion. But a clarification. I do NOT think Phil’s posts were inappropriate. I thought they were factual and on point. I learned a lot from them. What I said (or at least was trying to say) is that if you were the attackee or a friend of the attackee, you would probably view them that way. I used the phrase “I can easily imagine” — I did not say that was my own view. I was trying to develop a (to me) plausible alternate view of the posts.

        There are many people who mute the public airing of flaws in others’ research, who feel that doing so is the way to maintain good working relations, and others who feel that scathing public critiques are scholarly and important. You see this difference in the standards for discussion at meetings. This is a normative dispute in the discipline. I’ve heard many a debate that was exactly about the standards for public critique.


      6. Thank you, OW. I mostly said the things I did because of his statements and writings outside of the study itself — in the press release and articles and interviews he gave. Politics begets politics, so that is fair game. (I am also not bothered by Chris’s complaints, for the same reason.)

        I also would not sign a statement that expressed confidence that Regnerus has not committed scientific misconduct – although Rose’s complaint does not convince me that he has. I am not so sure it is a bad thing for a public university to have a process for hearing complaints from the public. It looks like Texas has issued a pro forma response to the complaint, which is not a rush to judgment or witch hunt. That said, on the scientific merits I agree that research and reanalysis are the appropriate responses.


      7. I haven’t spoken up against Scott Rose out of a combination of (1) distraction, (2) spinelessness, and (3) a sense that this wasn’t my fight, given how bothered I am by the media attention the Regnerus study received despite basic flaws. But, obviously, sociologists ought to have some expertise (and courage) regarding the standards of what constitutes scientific misconduct for sociologists. While I agree with Andy’s basic principle that one cannot really make some overall judgment that there is no possible scientific misconduct of any sort, Rose’s complaint to UT made specific arguments, and sociologists can certainly articulate a view as to the coherence, consistency, and face validity of those. (Hint: Rose’s arguments about pure-ABS vs. dual-frame vs. snowball sampling are, to put it charitably, debatable, and debatable methodological issues do not constitute scientific misconduct.)


      8. I refuse to take responsibility for Rose’s complaints. The sociologists on this blog and elsewhere have been very specific and clear about our concerns. Please see this very comment thread for a healthy discussion of sampling, data collection, question wording and order. I take responsibility for the letter I signed, the complaints we signatories lodged collectively.

        I respectfully disagree with OW on one point: that Smith’s status isn’t a valid topic. From where I sit, it seems quite clear that Christian Smith’s ability to be dismissive of our complaints without addressing any of them, to throw out the wildest of comparisons–witch hunts and the Inquisition–and to casually lump the scholarly concerns of the bloggers with whatever administrative review the UT is conducting, is born of his unique position in the field. This is true regardless of the high status of some bloggers.


      9. Tina, I keep looking back at the quoted material from Chris Smith. I don’t see the part where he cites Scatterplot. Could you point me to it? Or, if I’m right that he isn’t citing Scatterplot but perhaps has Rose in mind, or even Phil (whom I hope I have clarified I actually agree with, but who certainly is pretty aggressive in his prose) et al., why are you so outraged that he would be upset? This whole thing of “it’s ok for me and my side to be outraged but your outrage is unreasonable” bothers me, even though I am, in real life, a raging partisan who is perfectly happy screaming about “evil” politicians and whatnot.

        Ok the Inquisition is over the top.

        But when somebody attacks a left-wing academic or program around me (which happens pretty often in my experience), I personally think about accounts of the harassment of Jewish and left-wing academics under Hitler. I try not to bring it up, because bringing up all the associations with genocide just cheapens the point, but I do think about it.

        Chris sent out an email that obviously went to a wider distribution list than it should have if it is getting copy/pasted into Scatterplot. But is it any more vigorous than something I would have written and (I hope) have had the sense not to put in an email to someone I did not 100% trust (and even then, probably not because my email is a public record)? Probably not. Well ok I wouldn’t write poetry, but you get the point.

        Full disclosure. I know Chris, I like him, although I guess I don’t agree with him on this issue. But anytime somebody is a person to you instead of an abstraction, you talk about them differently. And isn’t that the whole point in the first place? Or a lot of it?


      10. The email I received from Chris, which is what I pasted, was sent to me, Philip Cohen, Jessica Collett, and Dan Myers. The subject line was “bad science not about same-sex parenting,” which makes it clear he’s talking about this blog post.

        I, too, know Chris (was colleagues with him for several years) and I, too, like and respect him. But his behavior in this case is petty and infantile.


  10. Just wanted to point out that you can add “The Left’s War on Science” to the “progressive Inquisition.” George Yancy has now taken to saying that “the size of the blowback that [Regnerus] received shows that these individuals are not looking for better studies but want to push their own political agenda through vicious attacks.” He notably also calls this a “witch hunt” and lumps all the critics together with the University investigation and the claims of misconduct, etc.

    See –


