Single studies are good; literature is amazing

We can learn quite a bit from the surge of amateur epidemiology: It’s hard to be a good reader of a single study, and you don’t have to do that to learn from research. For almost half a year, I’ve repeatedly seen many well-educated, well-read people try to learn The Secret of Covid from individual studies in fields they have no training in.

This is understandable, but not generally a good use of someone’s time. Nor is it necessary.

My relevant expertise is epidemiology-adjacent: a master of arts in demography, taught by some of the leading researchers in mortality and fertility. A graduate degree in counting death, in part. This is not the same as epidemiology, and I earned the degree 28 years ago. In terms of understanding papers coming out today, I have a fighting chance of grasping the gist of an article on epidemiological modeling, but no experience in original research in the area.

This matters. It means I can follow an argument, but not evaluate it anywhere near as well as an expert in the field.

So when multiple relatives/friends started citing a certain PNAS paper to say it “proved” the value of face masks, I said something like, “Let’s pause and wait a few days for epidemiologists to weigh in.” That process took less than a day — most prominently, in this Twitter thread from Kate Grabowski: https://twitter.com/KateGrabowski/status/1271542361244352514

I’m claiming no epidemiological spidey-sense here. Instead, I’m drawing on considerable experience in not putting too much weight on a single study.

At my college, we teach masters students how to read individual studies in empirical social-science traditions. It’s a good course! And it’s a more formalized version of what I had to learn when I moved into colleges of education — I mean, what’s the fun in trying to silo oneself off as a historian of education when there’s all this brain candy going on around you? My postdoc was a bit of a drinking-from-a-firehose experience.

At this point I have more experience and facility with reading empirical education research than I ever had in demography. I’m pretty good at spotting strengths/weaknesses, and will probably catch a majority of major issues in manuscripts regardless of field. Probably.

And unless you have more experience than I do in a specialized field, or are sharper than I am (and you probably are! But still), you’d need to spend a lot of time with each article or white paper to evaluate it.

There are hundreds of papers on Covid coming out each day, and many of them are in languages you don’t read or speak.

So it may be intellectually stimulating to ask, “What can I learn about X (Covid, whatever) from this individual paper?” But outside your expertise, you’re going to drown before too long. A question you can answer, with a little more efficiency, is, “What can I learn about X from a chunk of studies?”

It is one thing for a single study to claim that putting on sunblock will help prevent the next pandemic. It’s when a barrel of studies does so that it matters. No, don’t put on sunblock to stop a disease! That was a joke! Yeesh…. I hope you get the point.

Even when you see a review or synthesis of research, there is still the question of study quality. But the nice thing about looking for multiple studies is that, in general, you don’t have to be an expert. In an area of active study, the researchers in the field critique each other, and you can see much of that critique. Often that debate takes time. Many years ago, John Hattie started to gather every single meta-analysis (systematic synthesis) in a whole bunch of areas of education research: the visible learning project. What we know now is that he collected everything he could, good and bad.

Collecting every study you can find gives you the rough average of hundreds upon hundreds of studies. It’s like the average taste you get when you mix champagne with swamp water.

We know now that you probably want to filter out the swamp water (or its equivalent in research) to keep the champagne (or at least decent wine) — that is, set up some good inclusion/exclusion rules in a meta-analysis.

That internal discussion takes time, and I know it’s frustrating. But with Covid, we’re seeing all of it compressed. The public critique of a PNAS article in less than 24 hours? Amazing.

So… hold off helping that single study go viral outside your area of expertise. Instead, follow people who explain what they learn from multiple studies. Right now, in Covid, it’s going to be pretty rough, and people are going to make errors. As long as you can see the discussion as people wrestle with it, you’re at least following what experts say.