Yoda causality: Do, or do not. There is no why.

A few weeks ago there was a minor flamewar on Twitter about the alleged abuse of the term causality in various social-science fields, and after the destruction of a small galaxy or two, it ran out of fuel. But this semester I directed an undergraduate honors thesis that used a difference-in-differences approach to a policy question, and I want to put a stake in the ground about the term causality. This is to avoid the future destruction of small galaxies and to promote more peaceful gatherings of social scientists.

Here’s the gist: econometric techniques that clearly identify causal relationships target a very specific type of causality, what we might term sufficient-conditions effects, in the sense that they identify sufficient causes (if you do X, you will see Y). These sufficient-conditions effects are different from necessary-and-sufficient causes (only doing X will result in Y; nothing else will budge Y). But because sufficient-conditions effects is an awful phrase, I propose the following:

Yoda causality: do, or do not. There is no why.

And now the gory details:

For those who don’t follow the econometrician and statistical herds on their annual migrations, Donald Rubin and others have crafted both a language and a set of techniques to tease out sufficient-conditions effects. Each study in this tradition probes whether X is a sufficient condition for an observed (alleged) effect Y, and the magnitude and direction of that effect. Apart from truly randomized experiments, there are instrumental variables, difference-in-differences approaches, regression discontinuity, and others. You can read more about these in Angrist and Pischke’s Mastering ’Metrics (2014) or, for those who prefer a more technical version, Mostly Harmless Econometrics (2009). These are clever approaches to isolating potential causes from selection biases and other issues that could mask underlying relationships. I read and admire much of this work in education policy research.
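To make the do-X-versus-do-not arithmetic concrete, here is a minimal difference-in-differences sketch. The groups, periods, and numbers are all invented for illustration; they come from no study mentioned here.

```python
# Toy difference-in-differences: mean outcomes for a treated group and a
# comparison group, before and after a policy change. All numbers invented.
treated_pre, treated_post = 50.0, 58.0
control_pre, control_post = 49.0, 52.0

# Each group's change over time.
treated_change = treated_post - treated_pre   # 8.0
control_change = control_post - control_pre   # 3.0

# The DiD estimate: the treated group's change beyond the comparison
# group's change. Under the parallel-trends assumption, this is the
# effect of doing X versus not doing X.
did_estimate = treated_change - control_change
print(f"Difference-in-differences estimate: {did_estimate:+.1f}")  # +5.0
```

Notice what the arithmetic delivers: a magnitude and a direction for doing X, and not one word about why X moved Y.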

Because of the cleverness of these approaches, many who have mastered these techniques describe the resulting knowledge as information about causality in general. Moreover, some try to cast shame and particularly ugly shades of glitter on anyone who uses the term cause when there are only cross-sectional regressions or raw two-group posttest comparisons. Meanwhile, other social scientists dispute this attempt to claim causality for this set of analytical tools. Thus the blow-up on Twitter I witnessed.

As an historian, I am more amused than annoyed by the attempt to monopolize a common word such as cause. Good luck trying to co-opt a word that general.

The small bit of annoyance is because the attempt to call this toolkit the sum of causality elides the fact that these studies generally use a powerful but narrow frame on research. A particular study can say whether X will cause Y, but it doesn’t explain the mechanism inside that relationship, or what might moderate it. By itself, a single study doesn’t compare X with other reasonable, competing interventions. It says what happens if you do X, but not why, or what else you could do. In itself, this is often very impressive. At the same time, it is not everything.

This annoyance, which I share with some others of a social-science bent, is only mild. Trained as an historian, I worry about causal understanding on the scale of, Do you understand that slavery caused the American Civil War? That’s causal in a way akin to entanglement, in the sense of slavery’s past and future being deeply entangled with everything that led up to, shaped, and came out of the war. But it’s a clear professional use of the word by historians, distinct from econometrics, and equally valid.[1]

And as far as I’m aware, economists aren’t debating that. None of the economists I know marched in Charlottesville last year with tiki torches in hand. Of all the groups of people who understand causality differently from how I was trained, economists are pretty far from my #1 concern on the topic. To put it bluntly, I am more worried about people who misunderstand historical causality in a way that’s dangerous to the Republic. So, I worry less about attempts by some social-science colleagues to pretend they have purchased the word causality. That’s small potatoes.

Furthermore, there’s a way for everyone to be happy: create a specialized term for the technical meaning that econometricians and their ilk currently use cause for. I’d be tempted to call it Rubin causality or, better yet, Rubinesque causality, but that’s been tried with the Rubin causal model, and that term lost out to Refrigerator Perry in Super Bowl XX. Okay, that’s not true, but Rubin causal model never caught on, so we need something else, something snappy.

So, back to the core of econometricians’ causal inference: a study can say what happens if you do X versus not doing X. It generally treats the path from X to Y as a black box, or at most confirms a hypothesis about that path. Do, or do not. Not why.
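In the potential-outcomes notation of the Rubin tradition (standard textbook symbols, and my gloss rather than anyone else’s), the whole estimand fits on one line:

\[ \tau = \mathbb{E}[\,Y_i(1)\,] - \mathbb{E}[\,Y_i(0)\,] \]

That is, the average outcome when unit i does X, minus the average outcome when it does not. There is no term for why.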

Do, or do not. There is no why.

And that’s how we arrive at my proposed specialized term for econometricians, their ilk, and the rest of us: Yoda causality. Do, or do not. There is no why.

Notes

  1. Historians are comfortable with the messiness of causal relationships, or multicausal patterns, and are far twitchier about the contingency of events than about the word cause.