You just read a fascinating article suggesting that drinking a glass of red wine is equivalent to spending an hour at the gym, that morning people are better positioned for success, or that gun control reduces police shootings. Let’s pretend that instead of immediately posting the article on your favorite social media website (which I’ll admit I’m sometimes guilty of myself), you instead wonder whether the scientific methods behind the study are sound and whether you can draw conclusions about cause and effect. How do you figure that out?

Unsurprisingly, it can be really hard. Alex Edmans, a Professor of Finance, has an excellent recent blog post about separating causation and correlation. After seeing lots of (often subtly) flawed research shared on social media, I’ve also been planning to write a guide to separating solid findings from not-so-convincing ones. It was going to be a cool flowchart that you can make your way through, with explanations along the way about why each step matters. But after having it on my “fun” to-do list for months, I realized that the only way this flowchart will ever see the light of day is if I write it as a series of blog posts and then summarize things in a flowchart. This is part one.

The first question to ask when evaluating a study is whether it is based on an experiment (where researchers manipulated something, either in a laboratory or in the “field”) or is observational (where researchers collected and analyzed data without intervening). Experiments may be more reliable if done correctly, but they are not panaceas: there are many ways experiments can go wrong, and a big issue is whether experimental findings translate to the real world. But we do evaluate experiments slightly differently from observational studies, so this is the first fork in our imaginary flowchart.

Let’s start with observational studies (this will repeat Alex’s post a bit, but I think it’s useful repetition). The first question to ask yourself is whether the researchers used any “quasi-experimental” variation to come to their conclusion. In general, studies that do are more credible than studies that don’t. For example, sometimes researchers get lucky and stumble on a seemingly arbitrary rule that separates subjects (firms, individuals, regions) into two or more groups. Certain scholarships are given to individuals who meet a specific cutoff on a standardized test. Because it’s very difficult to control your score down to the last point, people right below and right above the cutoff should be very similar in ability, except that the ones right below the cutoff did not get a scholarship and those above it did. Voilà – you can study the effect of getting a scholarship on, for example, college completion, without worrying whether people without scholarships are fundamentally different from people with scholarships!

In order for this approach – called a “regression discontinuity” – to work well, (a) it must be impossible, or at least very difficult, for entities to manipulate whether they end up right below or right above the cutoff, and (b) researchers must not stray so far from the cutoff that the similarity of subjects below and above it becomes questionable. Ultimately, whether these two conditions hold depends on the context and on how narrow a range around the cutoff the researchers select. For example, it’s hard to control whether your SAT score is 1480 or 1490, but scoring 1300 versus 1400 is unlikely to be mostly due to chance. In other contexts, small manipulations are easy – for example, many firms have enough flexibility in accounting to turn slightly negative earnings into slightly positive earnings, making a regression discontinuity approach not-so-credible in that setting.
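To make the mechanics concrete, here is a minimal sketch in Python of what a regression discontinuity estimate looks like. Everything in it – the simulated test scores, the 1480 cutoff, the assumed 10-percentage-point scholarship effect, the bandwidths – is made up purely for illustration and does not come from any actual study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, simulated data: SAT-like scores and whether each student
# finished college. None of this comes from a real dataset.
n = 5000
score = rng.integers(1000, 1601, size=n).astype(float)
cutoff = 1480.0
scholarship = (score >= cutoff).astype(float)  # assigned purely by the cutoff

# Made-up data-generating process: completion rises smoothly with score,
# plus a 10-percentage-point jump caused by the scholarship itself.
p_complete = 0.3 + 0.0004 * (score - cutoff) + 0.10 * scholarship
completed = (rng.random(n) < p_complete).astype(float)

def rd_estimate(score, outcome, cutoff, bandwidth):
    """Local linear regression discontinuity: fit a line on each side of the
    cutoff using only observations within `bandwidth`, then take the
    difference between the two fitted values at the cutoff."""
    keep = np.abs(score - cutoff) <= bandwidth
    x, y = score[keep] - cutoff, outcome[keep]
    below = np.polyfit(x[x < 0], y[x < 0], 1)    # [slope, intercept] below cutoff
    above = np.polyfit(x[x >= 0], y[x >= 0], 1)  # [slope, intercept] at/above cutoff
    return above[1] - below[1]                   # jump in the outcome at the cutoff

for bw in (20, 50, 100, 300):
    effect = rd_estimate(score, completed, cutoff, bw)
    print(f"bandwidth ±{bw:>3} points: estimated scholarship effect = {effect:.3f}")
```

Notice what the bandwidth does: a narrow window keeps the comparison apples-to-apples but leaves you with fewer, noisier observations, while a wide window buys precision at the cost of comparing people who may no longer be similar – exactly the trade-off in condition (b).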

In the next post in this series (which may or may not be the next post chronologically), we’ll talk about other kinds of quasi-experimental variation. Bonus points to people who email me an article about a study they want scrutinized!
