Of all the issues raised by the LaCour controversy, one has, in my view, not received enough attention: the Columbia* IRB made itself part of this problem.
In his initial comments to Retraction Watch, LaCour's coauthor and Columbia political science professor Donald Green wrote,
Given that I did not have IRB approval for the study from my home institution, I took care not to analyze any primary data – the datafiles that I analyzed were the same replication datasets that Michael LaCour posted to his website. Looking back, the failure to verify the original Qualtrics data was a serious mistake.
This points to a real cost imposed by intransigent IRBs that become significant hurdles to research progress. As institutions evaluate their responses to this affair, and as we reevaluate our own approaches to collaboration, those efforts will not be complete without considering the ways IRBs hinder good, ethical research.
The subtext of Green's comments boils down to the idea that the costs of obtaining IRB approval for a collaboration with a junior colleague were sufficiently high that he didn't feel the investment was worth it.
But given how bloated most IRBs have become, and the onerous tasks they impose for various reasons — intrusion on academic freedom, board members unfamiliar with particular research methods, mission creep, the desire to give scholarly rather than ethical advice — researchers now find ways of avoiding the process altogether. The real-life ethical harm caused by potentially fraudulent research being accepted as true scientific findings, however, was worse than the potential harm to “respondents” that the IRB would have taken it upon itself to review.
In other words, we often fail to account for the opportunity costs imposed by overly restrictive IRB policies.
One can criticize Green for avoiding the IRB. In fact, in a private conversation (well, on a Facebook group), a friend who is a mid-career political scientist with an outstanding reputation did just that. She put a decent share of the blame at Green's feet for this incident. But given the intransigence of IRBs and the ridiculous lengths to which they go, I can imagine doing the same thing Green did to stay within the “letter of the law,” even if that meant violating its spirit in some sense.
Institutions and professional organizations (oh hai ASA) should seriously seek to reform their IRBs. The AAUP has some recommendations, including exempting many social scientific methods from IRB review. If institutions do not want to take that full step, I would at least like to see them separate medical review boards from social science review boards. Being at an institution without a medical school, I appreciate the difference in the IRB review process here. On the one hand, it responds promptly and understands social scientific methods. On the other hand, or perhaps because of those very qualities, it has caught hidden but real problems and provided useful ethical guidance.
I would ask Columbia: which became the larger liability to your institutional reputation, a stringent IRB review for a study with absolutely minimal risk, or being mired in a retraction scandal caused — in part¹ — by following the letter of IRB policy?
Update: I confused the institution of Peter Aronow, the Yale coauthor of the Irregularities report, with that of LaCour coauthor Donald Green (Columbia). I regret impugning Yale’s good name.↩
-
1. I understand that the extent of the fraud in this case — if true — would have been difficult to detect even if Green had had IRB approval. That said, I worry less about outright fraud and more about unintentionally bad science that seeps out because IRBs make collaborating across institutions difficult. I also realize that LaCour has not yet had a chance to respond. But, again, the problem imposed by the IRB still stands, even if LaCour mounts a spirited defense. ↩
Discussion of the role of the Columbia IRB in this case should consider the requirements that the Columbia IRB would have imposed on a request for permission to analyze raw data that had already been collected by a researcher at UCLA, who presumably had UCLA IRB approval to collect it. The post did use the term “stringent” — and maybe the Columbia IRB requirements would have been too stringent — but this discussion would be improved with a more specific indication of what those requirements would have been.
L.J., you are right to point out that I don't know what Columbia would have imposed. I have been part of enough multi-institution collaborations to know that it would not have been out of the question for the IRB to require its own review even if another institution had signed off. I also know that long wait times for IRB review are the norm at many institutions.
My point is less about Columbia’s specific IRB policies and more about the fact that increasingly stringent regulation of research protocols imposes costs that can backfire on institutions in the very realm — institutional liability — that they seek to avoid.
The requirements don't have to have been particularly stringent for me to understand why Green would not have had a more direct role in examining the raw data. I mean, he had access to, and apparently did access, data that would seem as “raw” as one needed without jumping through that hoop, so long as one wasn't worried that the collaborator had gone along with the conduct of an entirely real intervention but then totally made up the measurement part. So, while one shouldn't go too far and valorize someone for being the senior author on a paper in Science that will likely end up retracted due to fabrication, I haven't seen anything here that looks like a failure of due diligence except with the benefit of hindsight, and he seems about as forthright as one could possibly want in what has happened since. Crazy situation, though.
The manuscript submission guidelines for Science state: “The senior author from each group is required to have examined the raw data their group has produced.” Maybe there is a reason why that rule would not be applied in this case, but I am not aware of any such reason. Maybe there’s no reason to make a distinction between the raw files and the replication datasets, but Green has made such a distinction (“I took care not to analyze any primary data”).
I am not sure how well that requirement translates to this situation. I think terminology differs across disciplines. The article to which that requirement links states:
The lab model differs significantly from the social science model. Often in the lab model (with which I have some experience, having been part of a public health research team), the author identified as the “senior author” collected the data or obtained the grant to collect the data. In this case, LaCour would be the person who “collected” the data and “obtained the grant” (of course, both appear to have been fabricated absent some miraculous defense by LaCour).
Donald Green has claimed senior authorship on the LaCour and Green paper: “The negative way to look at it is here was a failure of the review process, or a failure of the vetting process, or a failure on my part as the senior author.” For what it’s worth, a New York Times story also identified Donald Green as the senior author on the LaCour and Green paper.
—
With regard to the rule, I think that it’s fair to question whether that particular rule was designed for natural science and should not be applied in a social science case.
I'd propose two ways to interpret that rule. The first is to interpret the rule literally, in which case Green is the senior author and thus should have examined the raw data.
The second is to examine the purpose of the rule, or the mischief that the rule was designed to prevent. The next sentence after the paragraph that you quoted is: “In this way, Science aims to identify a few senior authors who collectively take responsibility for all of the data presented in each published paper.” From what I can tell, the purpose of having a researcher from a team assume responsibility for the data should apply equally to natural science and to social science.
Maybe there’s a third way to interpret the rule in which the senior author in this case did not need to examine the raw data. However, I don’t think that it is pure hindsight bias to claim that part of due diligence for a senior researcher on a Science article is to examine the raw data.
I come to two conclusions relative to Green.
1. Green should have exercised more oversight before sending a manuscript to Science, given that he was working with a graduate student. If I were Green, at the point the manuscript was initially accepted, I would have octuple-checked everything before sending it. It would have been prudent to look at the raw data and every data file, putting a second set of eyes on the analysis before sending it out (though even then it is doubtful how many problems he could have identified, since he would have trusted the potentially faked raw data sent to him by his coauthor). I don't believe that he committed fraud by submitting the data to Science without seeing the raw data, but he likely exercised poor judgement.
2. If I were working with someone who was obviously talented, I could see avoiding the IRB because of the problems I cite above (note the caveat above about submitting to Science). I can sympathize with someone's reluctance to go through the process of getting IRB approval to look at data rather than just looking at results.
If I understand you correctly, you believe Green deserves a large part of the blame. I grant you that — and you convinced me that he probably deserves more than I initially attributed to him.
That said, I stand by my overall position that IRBs impose sometimes invisible costs on scientific inquiry that can create more trouble for investigators and institutions in the long run.
I’d agree with your point about the IRB process causing more problems than it needs to. But I think that it is better to cite cases in which the IRB denies quality projects or requires an unnecessarily long review, than to cite cases in which a researcher decided to skip the IRB process, such as in the Green case, the Montana mailer case, and the Brisbane bus bias study.
—
I don’t think that I have enough information to attribute a *large* part of the blame to Green for a failure to catch the apparent fraud.
1. I’m not familiar enough with field experiments to know how much alarm should have been caused by the re-interview rates of close to 90%, as Broockman et al. reported. This is where the hindsight bias is relevant, so I should not hold Green responsible for not being more skeptical of the data, without more knowledge of how unusual the re-interview rates were.
However, it is worth noting that Green told Retraction Watch: “I thought they [the initial results] were so astonishing that the findings would only be credible if the study were replicated.” I'm not sure that astonishment would necessitate contacting Qualtrics, but it does seem odd that an astonishment strong enough to prompt the suggestion of a second experiment did not appear to prompt any further scrutiny after a second set of apparently astonishing results arrived.
2. I’m also not familiar enough with grant and incentive payment protocols to determine whether anyone — and, if so, who — should have checked to make sure that the grants and incentive payments that were claimed in the article and the supplemental information were actually received or paid. Maybe it matters that Green is at a different institution than LaCour.
—
I am more comfortable with a critique that revolves around the manuscript submission policies at Science. In addition to the rule discussed already about the senior author examining the raw data, Science has a rule that “All authors of accepted manuscripts are required to affirm and explain their contribution to the manuscript…”
It seems that a common purpose of those two rules is to help readers assess coauthored articles, so that — if a graduate student and a full professor coauthor an article, and there is no indicated limitation of the coauthor roles — readers are able to interpret the article as reflecting the accumulated reputation of the full professor instead of the nascent reputation of the often-unknown graduate student.
Green described to Retraction Watch his role in the article as follows: “I helped Michael LaCour write up the findings, especially the parts that had to do with the statistical interpretation of the experimental design.” I did not see any indication of such a limited role acknowledged in the Science article or the supplemental information. It’s possible that the discussion of the article might have been different had a note at the end of the article indicated that Green was not able to vouch for the integrity of the data.
So the Science policies suggest two ways that the episode could have been avoided or ended sooner. First, had Green contacted Qualtrics for the raw data, the apparent fraud might have been detected before publication. Second, had Green acknowledged an inability to vouch for the integrity of the data, then Andrew Gelman and other readers who expressed surprise at the strength of the effect might have looked closer at the results.
One can be extremely hands-on with data and still not perform the kinds of forensic analyses that would uncover fabricated data. The tests for fraud are really different from the usual kinds of analysis. We don't test downloaded GSS or Census data for fraud. It is well known that a lot of official criminal justice data is rife with errors, but those are of a very different sort and require very different detection methods than you'd use for detecting fabricated data. You would not test for fabricated data unless the question had been raised.
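For what it's worth, here is a minimal, made-up sketch (in Python, with hypothetical column names and simulated data, not LaCour's actual files) of one kind of forensic check along these lines: a test-retest correlation across panel waves that is implausibly close to 1.0.

```python
# Hypothetical sketch only: the column names (feeling_w1, feeling_w2) and the
# simulated data are invented for illustration; this is not the published analysis.
import numpy as np
import pandas as pd

def wave_reliability(df: pd.DataFrame, col_w1: str, col_w2: str) -> float:
    """Correlation between respondents' answers in two survey waves.

    Real panel data on attitudinal items shows noticeable churn between waves;
    a test-retest correlation near 1.0 is a red flag worth investigating,
    though it proves nothing by itself.
    """
    paired = df[[col_w1, col_w2]].dropna()
    return float(np.corrcoef(paired[col_w1], paired[col_w2])[0, 1])

# Simulate the suspicious pattern: wave 2 equals wave 1 plus a little noise.
rng = np.random.default_rng(0)
w1 = rng.normal(50, 20, size=1000).clip(0, 100)
df = pd.DataFrame({
    "feeling_w1": w1,
    "feeling_w2": (w1 + rng.normal(0, 2, size=1000)).clip(0, 100),
})
print(round(wave_reliability(df, "feeling_w1", "feeling_w2"), 3))  # ~0.99, implausibly high
```

The point is only that a check like this is not part of an ordinary analysis workflow; you run it when someone raises the question.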
Jeremy and OW, I agree with you both. I do not intend to say that Green would have found the fraud had he gone through the process of an IRB review to obtain the raw data. Indeed, we trust the intentions and representations of our collaborators. I worry less about outright cases of fraud, as this appears to be, and more about things like coding errors, incorrectly named files, variable confusion, etc. that come about in the course of research. Eliminating the ability to share data easily removes a check on the normal course of research.
tl;dr: I agree that this is a crazy situation; but I think that it illuminates a real problem for our research.
At my institution, Green would have needed a determination of exemption to analyze the dataset, and probably expedited review (category 7) to look at the raw data (assuming the data had some identifiers in it, because the subjects had to be re-contacted). The two procedures (exemption determination, expedited review) are done by individual IRB staffers or members, and take exactly the same amount of time. So at least here, Green would’ve needed no more onerous review to look at the raw data he didn’t look at than to look at the dataset he did.
I agree that there are major issues with IRBs in terms of over-reaching and being bloated. Great point. But this instance of academic dishonesty is not about the IRB. It seems that the graduate student was bent on creating a “big” story that would get him attention.