guest post: black boxes and wishful intelligibility

Here’s the one-sentence version of this post: Black-boxing is good, actually. 

(The longer version is a summary of my recent paper, “Wishful Intelligibility, Black Boxes, and Epidemiological Explanation,” just out in Philosophy of Science.) 

Black box explanations get a bad rap: they are opaque, often the result of statistical (rather than canonically “experimental”) causal inference, and self-consciously, well, not the whole truth. Probably because of this, philosophers of science often take for granted the idea that it’s a good thing to “fill in” a black box explanation with more causal detail. In particular, lack of mechanistic evidence is sometimes considered a shortcoming of epidemiological explanations, which often rely on sophisticated observational causal inference methods.

One reason people argue that mechanisms are important is that they can help us to think about whether a causal relationship will hold in multiple contexts or populations of interest. Some philosophers of science talk about this in terms of the “stability” of a causal relationship. But depending on what we mean by “mechanistic evidence,” mechanisms can actually mislead us about stability, because they can lead us to both over- and underestimate how stable a causal relationship will be. There are two main reasons for this:

  1. We find one mechanism and think we’ve seen them all. But the world is complex, and it often contains multiple pathways from a cause to an effect of interest. When this is the case, looking at just one mechanism can lead us to underestimate how stable a causal relationship will be. For instance, philosopher of science Sean Valles has detailed how fundamental causes like wealth produce health outcomes robustly, by contributing to our access to healthcare, our occupational hazards, our ability to isolate ourselves during a pandemic, our stress, and so forth. Molecular systems also frequently have a robust, many-to-one relationship between multiple causes or pathways and a single effect – gene regulatory networks, for instance.
  2. In practice, mechanistic causal chains are sometimes put together in a piecemeal fashion, which can lead us to overestimate how stable a causal relationship will be. For instance, if I give a plausible mechanism linking neighborhood attributes to cancer incidence, I might appeal to research done at different scales (census tract, block group, tissue, petri dish), in different organisms (humans, rodents), in different disciplines (social epidemiology, psychology, molecular biology), and so on. Looking at an explanation “filled in” or glassboxed by drawing from several different research contexts makes it harder to know whether each link in a causal chain is stable in the same contexts, because the researchers investigating the various links do things differently.

Say we’re interested in a black-boxed relationship between racism and cancer incidence. A researcher trying to glassbox this explanation might draw on neighborhood effects, like measures of social isolation, and link these with psychosocial stress, HPA axis gene regulation, tissue inflammation, and tumorigenesis – all studied in different contexts. This is a plausible mechanism, but it’s hard to assess whether each of these links will be stable under the same conditions if some of them are measured in a neighborhood and others are measured only in a model organism, or if “neighborhood” doesn’t have the same spatiotemporal scale in different links. It’s still possible to learn about stability from piecemeal causal inference, but we need to be specific about what that looks like, beyond having good evidence for each individual link in the chain.

In situations like these, we might be better off using a black box to admit we don’t really know much about stability. It can be tempting to use mechanisms to make a causal inference “intelligible,” but sometimes we’re deluding ourselves when we think this. I call this “wishful intelligibility,” because I’m skeptical that mechanistic detail always makes our black box explanations better for what we want to do. Black-boxing can help us preserve a modicum of epistemic modesty, and it can help us make sure we have good reasons for our assessments of stability. Precisely in virtue of representing our ignorance and uncertainty, it can help us to keep track of the limits of our causal knowledge.

Black-boxing can also help avoid some of the risks of wishful intelligibility: interventions that don’t work as well as expected, and explanations that distract us from the social structural causes of health by centering molecular mechanisms, among others. To be clear, I’m not against searching for mechanisms! I’m just pointing out that we often move too quickly from “evidence of a mechanism” or “evidence for each link in a causal chain” to thinking we know about the stability of a causal relationship overall. Both of these things can help us to think about stability, and stability needn’t be the only feature of an explanation that’s important to us. But it’s not helpful to take a universal “more detail is always better” approach. We need to take background knowledge, and values, into account when we’re thinking about what kind of mechanistic detail – and what kind of evidence – actually improves our epistemic situation.

More broadly, we often think that glassboxing, “filling in” black boxes, or increasing the transparency of an explanation – whether it’s epidemiology or AI – is always an improvement. But, as philosophers of science Katie Creel and Kevin Elliott have each pointed out, we need to be thoughtful and specific about what we mean by transparency in various contexts. The feeling of understanding that we get from some forms of transparency, including mechanisms and “explainability,” can be misleading. I’m optimistic that reevaluating black boxes – specifically, taking seriously the idea that they might be better than the available alternatives – might be one way to avoid this. Black-boxing is (sometimes) good. How good depends not on some universal standard but on what else we know about our epistemic situation.

Of course, there’s another way to deal with this, which is to have super high standards for “mechanistic evidence,” or to require that a black box be completely filled in. But I think it’s unfair to expect epidemiologists to have this kind of evidence, and in fact, in some cases, we definitely don’t want them to pursue it (e.g. by randomizing which neighborhoods we live in). Evidence in biology, epidemiology, and medicine is always going to be incomplete, so our tools for evaluating explanations should take this seriously. Our optimism about these explanations should be grounded in the work that they can do for us in shaping effective policies, clinical interventions, and understandings of the world. Sometimes, they can do that work better with a black box. 

Many thanks to Jeff for the invitation to discuss this with social scientists and philosophers of social science. I’d love to hear whether folks think these worries extend to other social sciences, and if so, what we ought to do about them.

Marina DiMarco is a Ph.D. candidate in the Department of History and Philosophy of Science at the University of Pittsburgh. Her primary research areas are feminist philosophy of science, philosophy of biology, and philosophy of medicine. She is also a member of the GenderSci Lab at Harvard. More research at www.mdimarco.com and @MarinaRDiMarco.

Author: Jeffrey Lockhart

Jeff is a postdoctoral fellow at the University of Chicago. He tweets @jw_lockhart.
