Three books on heterodox quant: Bayes, econometrics, and demography

One of my resolutions this year is to engage in a long-term effort to upgrade my quantitative skills; as a mid-career historian, my skills are of the type that inherently grow stale (along with my patience with SAS). I thought I’d share my thoughts about a few books, two of which I’m in the middle of reading. Each takes a different approach to quantitative methods, and each might best fall under the “heterodox” label because it differs from the most common training in statistics.

Bayesian methods: John Kruschke’s Doing Bayesian Data Analysis (2010). Kruschke’s text is the friendliest I’ve found for explaining modern Bayesian methods, at least for someone with my background (comfortable reading stats material at a practical level, not primarily focused on proofs). Chapters 7 and 8 are almost worth the price of the book in themselves for justifying and explaining Markov chain Monte Carlo (MCMC) methods, the engine of analysis for Bayesian statistics today. And as a text, it’s remarkably reasonable in price (though it is sad that an $80 stats text is considered low-budget).1 One caveat: Chapter 11 argues much too aggressively against the frequentist approach (the standard that most students learn in stats classes, with hypothesis testing and p-values), with two exceptions.2 But except for Chapter 11 and a few practical glitches (fewer than in most texts), I am finding this text very useful as an introduction to Bayesian analysis.
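
For readers who haven’t met MCMC, here is a minimal Metropolis sampler in Python for a coin’s bias under a uniform prior, with a made-up dataset of 6 heads in 9 flips. This is an illustrative sketch of the idea only, not code from the book (which builds its examples in R and BUGS).

```python
import math
import random

# Toy Metropolis sampler: estimate the posterior of a coin's bias theta
# given 6 heads in 9 flips (made-up data) and a uniform Beta(1, 1) prior.
HEADS, FLIPS = 6, 9

def log_posterior(theta):
    """Log of likelihood * prior, up to an additive constant."""
    if not 0.0 < theta < 1.0:
        return float("-inf")  # zero prior mass outside (0, 1)
    return HEADS * math.log(theta) + (FLIPS - HEADS) * math.log(1.0 - theta)

def metropolis(n_steps=50_000, step_sd=0.1, burn_in=5_000, seed=1):
    random.seed(seed)
    theta, samples = 0.5, []
    for _ in range(n_steps):
        proposal = theta + random.gauss(0.0, step_sd)
        log_ratio = log_posterior(proposal) - log_posterior(theta)
        # Accept with probability min(1, posterior ratio); otherwise stay put.
        if random.random() < math.exp(min(0.0, log_ratio)):
            theta = proposal
        samples.append(theta)
    return samples[burn_in:]  # discard the burn-in steps

draws = metropolis()
print(f"posterior mean of theta ~ {sum(draws) / len(draws):.3f}")  # ~ 7/11 = 0.636
```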

Econometrics: Joshua Angrist and Jörn-Steffen Pischke’s Mostly Harmless Econometrics (2008). This book focuses on some of the more common causally oriented methods of empirical microeconomists, including instrumental-variables analysis, propensity-score matching, and regression-discontinuity designs. It does not include discussions of panel data, time series, and several other pieces of the econometrician’s toolbox,3 but it is an extraordinarily useful (if occasionally dense) peek into the minds of econometric cowboys, the sort of people who think of birth month, vending machine locations, and nursing home densities as instrumental variables for something.4 Angrist and Pischke are unabashed advocates of simple regression, and if you can wade through the expected-value semi-proofs sprinkled through each chapter, you will understand clearly why they take the positions they do. See Andrew Gelman for a statistician’s review of the book.
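
To make the instrumental-variables logic concrete, here is a toy simulation in Python, with all variables and coefficients invented for illustration: an unobserved confounder u biases the naive regression slope, while an instrument z, which affects x but has no direct path to y, recovers the true effect.

```python
import numpy as np

# Simulated illustration of instrumental variables (all numbers invented).
rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                       # instrument: affects x only
u = rng.normal(size=n)                       # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)         # endogenous regressor
y = 2.0 * x + 3.0 * u + rng.normal(size=n)   # true causal effect of x on y is 2.0

# Naive OLS slope, cov(x, y) / var(x): biased upward because u drives both x and y.
ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# IV (Wald) slope, cov(z, y) / cov(z, x): consistent when z is a valid instrument.
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"OLS slope: {ols:.2f} (biased)   IV slope: {iv:.2f} (near the true 2.0)")
```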

Demography: Sam Preston, Patrick Heuveline, and Michel Guillot’s Demography: Measuring and Modeling Population Processes (2000). Demography is an applied slice of the social sciences whose fundamental analytical tool is the transition event: birth, death, and so on. Each event happens in a population exposed to the risk of that event, a concept at the root of the demographer’s perspective. That focus on exposure as a fundamental unit makes demographic analysis much closer to epidemiology and to engineering’s time-to-failure analysis than to the rest of the social sciences. If you’ve ever seen statistical survival analysis, that’s the type of thinking involved in demographic analysis. The Preston, Heuveline, and Guillot text is a general introduction to the concepts used in demography, including life tables, indirect methods of estimation, and some more abstract models that demographers use to think about population change. Disclosure: I had Sam for several courses in grad school, and he roped me into the master’s program at Penn; I saw a few draft chapters of this text in his courses years before it was published.
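
As a small taste of the life-table machinery at the heart of the book, here is a toy period life table in Python. The death rates are invented, and person-years lived by those who die within an interval use the crude n/2 approximation rather than the refinements the text develops.

```python
# Toy period life table (invented death rates, crude n/2 approximation).
# Each closed interval is (starting age, width n, death rate nMx).
closed = [(0, 1, 0.0070), (1, 4, 0.0005), (5, 45, 0.0020), (50, 30, 0.0150)]
open_mx = 0.10        # death rate in the open-ended 80+ interval

l = 100_000.0         # lx: survivors reaching the start of each interval
T = 0.0               # total person-years lived above age 0

for age, n, m in closed:
    q = n * m / (1.0 + (n / 2.0) * m)   # nqx: probability of dying in the interval
    d = l * q                           # deaths in the interval
    T += n * (l - d) + (n / 2.0) * d    # nLx: person-years lived in the interval
    l -= d

T += l / open_mx      # open interval: person-years = l(80) / M(80+)

print(f"life expectancy at birth ~ {T / 100_000:.1f} years")
```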

There are many other practical quantitative methods not included here, and I suppose that’s what makes life interesting. Nominations for other stuff to read in quantitative methods?


Notes

  1. Since I’ve wanted to learn R to wean myself off SAS, the text’s use of R fit one of my other goals, but that’s just a bonus from my perspective.
  2. One exception in Chapter 11 is multiple comparisons. A common error among graduate students trained in frequentist approaches is to make multiple comparisons and forget to use a Bonferroni or similar adjustment, which accounts for the fact that random error alone will generate statistically significant results if you make enough comparisons (see the quick simulation after these notes). With a Bayesian approach, that is not a conceptual error. Similarly, a Bayesian approach allows post-facto investigation without guilt.
  3. I was wrong on the omission of panel data; it’s not a chapter in itself but does appear in a few sections.
  4. Disclosure: at no point in the book did Angrist and Pischke use vending machines or nursing homes as instrumental variables, but I’m sure they could do something with them.
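
A quick simulation (hypothetical numbers, not from Kruschke’s book) of the multiple-comparisons problem in note 2: with 20 true-null tests at α = .05, some naive “discovery” shows up most of the time, while the Bonferroni threshold of α/20 restores the intended family-wise error rate.

```python
import random

# Under a true null hypothesis, a test's p-value is Uniform(0, 1).
random.seed(42)
alpha, n_tests, n_trials = 0.05, 20, 10_000
naive = bonferroni = 0

for _ in range(n_trials):
    p_values = [random.random() for _ in range(n_tests)]
    if min(p_values) < alpha:             # any naive "significant" result?
        naive += 1
    if min(p_values) < alpha / n_tests:   # any Bonferroni-adjusted one?
        bonferroni += 1

print(f"chance of a false positive, naive:      {naive / n_trials:.2f}")      # ~0.64
print(f"chance of a false positive, Bonferroni: {bonferroni / n_trials:.2f}") # ~0.05
```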

2 responses to “Three books on heterodox quant: Bayes, econometrics, and demography”

  1. Glen S. McGhee

    Is this where Angrist lays out his Vietnam veteran study that is making the rounds?