Something a little bit different today. I’ll tie it in to progress, I promise.

I keep noticing a particular epistemic pitfall (not exactly a “fallacy”), and a corresponding epistemic virtue that avoids it. I want to call this out and give it a name.

The virtue is: identifying the correct scope for a phenomenon you are trying to explain, and checking that the scope of any proposed cause matches the scope of the effect.

Let me illustrate this virtue with some examples of the pitfall that it avoids.

Geography

A common mistake among Americans is to take a statistical trend in the US, such as the decline in violent crime in the 1990s, and then hypothesize a US-specific cause, without checking to see whether other countries show the same trend. (The crime drop was actually seen in many countries. This is a reason, in my opinion, to be skeptical of US-specific factors, such as Roe v. Wade, as a cause.)

Time

Another common mistake is to look only at a short span of time and to miss the longer-term context. To continue the previous example, if you are theorizing about the 1990s crime drop, you should probably know that it was the reversal of an increase in violent crime that started in the 1960s. Further, you should know that the very long-term trend in violent crime is a gradual decrease, with the late 20th century being a temporary reversal. Any theory should fit these facts.

A classic mistake on this axis is attempting to explain a recent phenomenon by a very longstanding cause (or vice versa). For instance, why is pink associated with girls and blue with boys? If your answer has something to do with the timeless, fundamental nature of masculinity or femininity—whoops! It turns out that less than a century ago, the association was often reversed (one article from 1918 wrote that pink was “more decided and stronger” whereas blue was “delicate and dainty”). This points to something more contingent: a mere cultural convention.

Left: Young Boy with Whip, c. 1840; right: Portrait of a Girl in Blue, 1641. Credit: Wikimedia

The reverse mistake is blaming a longstanding phenomenon on a recent cause, something like trying to blame “kids these days” on the latest technology: radio in the 1920s, TV in the ’40s, video games in the ’80s, social media today. Vannevar Bush was more perceptive, writing in his memoirs simply: “Youth is in rebellion. That is the nature of youth.” (Showing excellent awareness of the epistemic issue at hand, he added that youth rebellion “occurs all over the world, so that one cannot ascribe a cause which applies only in one country.”)

Other examples

If you are trying to explain the failure of Silicon Valley Bank, you should probably at least be aware that one or two other banks failed around the same time. Your explanation is more convincing if it accounts for all of them—but of course it shouldn’t “explain too much”; that is, it shouldn’t apply to banks that didn’t fail, without including some extra factor that accounts for those non-failures.

To understand why depression and anxiety are rising among teenage girls, the first questions I would ask are: which other demographics, if any, is this happening to? And how long has it been going on?

To understand what explains sexual harassment in the tech industry, I would first ask: what other industries have this problem (e.g., Hollywood)? Are there any that don’t?

An excellent example of practicing the virtue I am talking about here is the Scott Alexander post “Black People Less Likely”, in which he points out that blacks are underrepresented in a wide variety of communities, from Buddhism to bird watching. If you want to understand what’s going on here, you need to look for some fairly general causes (Scott suggests several hypotheses).

The Industrial Revolution

To bring it back to the topic of my blog:

An example I have called out is thinking about the Industrial Revolution. If you focus narrowly on mechanization and steam power, you might put a lot of weight on, say, coal. But on a wider view, there were a vast number of advances happening around the same period: in agriculture, in navigation, in health and medicine, even in forms of government. This strongly suggests some deeper cause driving progress across many fields.

Conversely, if you are trying to explain why most human labor wasn’t automated until the Industrial Revolution, you should take into account that some types of labor were automated very early on, via wind and water mills. Oversimplified answers like “no one thought to automate” or “labor was too cheap to automate” explain too much (although these factors are probably part of a more sophisticated explanation).

Note that the problem is often failing to notice how wide a phenomenon is and therefore hypothesizing causes that are too narrow, but you can make the mistake in the opposite direction too: proposing a broad cause for a narrow effect.

Concomitant variations

One advantage of identifying the full range of a phenomenon is that it lets you apply the method of concomitant variations. E.g., if social media is the main cause of depression, then regions or demographics where social media use is more prevalent ought to have higher rates of depression. If high wages drive automation, then regions or industries with the highest wages ought to have the most automation. (Caveat: these correlations may not exist when there are control systems or other negative feedback loops.)
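To make this concrete, here is a minimal sketch in Python of the kind of check the method suggests: line up the hypothesized cause and the effect across regions and see whether they move together. The regions and numbers below are invented purely for illustration; they are not data from any of the examples above.

```python
# Minimal sketch of a concomitant-variations check (made-up numbers for illustration).
# If the hypothesized cause drives the effect, regions with more of the cause
# should tend to show more of the effect.

from math import sqrt

# Hypothetical data: prevalence of the hypothesized cause and rate of the
# effect, by region. These values are invented, not real measurements.
cause_by_region  = {"A": 0.30, "B": 0.45, "C": 0.60, "D": 0.75}
effect_by_region = {"A": 0.08, "B": 0.10, "C": 0.15, "D": 0.16}

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

regions = sorted(cause_by_region)
r = pearson([cause_by_region[k] for k in regions],
            [effect_by_region[k] for k in regions])
print(f"correlation across regions: {r:.2f}")

# A strong positive correlation is consistent with (not proof of) the hypothesis;
# a weak or negative one is a reason to doubt that the cause is dominant —
# subject to the caveat above about feedback loops.
```

This is just the simplest version of the comparison; the same idea applies to demographics, industries, or time periods instead of regions.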

Relatedly, if the hypothesized cause began in different regions/demographics/industries at different times, then you ought to see the effects beginning at different times as well.

These kinds of comparisons are much more natural to make when you know how broadly a trend exists, because just identifying the breadth of a phenomenon induces you to start looking at multiple data points or trend lines.

(Actually, maybe everything I’m saying here is just corollaries of Mill’s methods? I don’t grok them deeply enough to be sure.)

Cowen on lead and crime

I think Tyler Cowen was getting at something related to all of this in his comments on lead and crime. He points out that, across long periods of time and around the world, there are many differences in crime rates to explain (e.g., in different parts of Africa). Lead exposure does not explain most of those differences. So if lead was the main cause of elevated crime rates in the US in the late 20th century, then we’re still left looking for other causes for every other change in crime. That’s not impossible, but it should make us lean away from lead as the main explanation.

This isn’t to say that local causes are never at work. Tyler says that lead could still be, and very probably is, a factor in crime. But the broader the phenomenon, the harder it is to believe that local factors are dominant in any particular case.

Similarly, maybe two banks failed in the same week for totally different reasons—coincidences do happen. But if twenty banks failed in one week and you claim twenty different isolated causes, then you are asking me to believe in a huge coincidence.

Scope matching

I was going to call this virtue “scope sensitivity,” but that term is already taken for something else. For now I will call it “scope matching.”

The first part of this virtue is just making sure you know the scope of the effect in the first place. Practically, this means making a habit of pausing before hypothesizing in order to ask:

  • Is this effect happening in other countries/regions? Which ones?
  • How long has this effect been going on? What is its trend over the long run?
  • Which demographics/industries/fields/etc. show this effect?
  • Are there other effects that are similar to this? Might we be dealing with a conceptually wider phenomenon here?

This awareness is more than half the battle, I think. Once you have it, hypothesizing a properly-scoped cause becomes much more natural, and it becomes more obvious when scopes don’t match.


Thanks to Greg Salmieri and several commenters on LessWrong for feedback on a draft of this essay.

Comments

Great points. In (good) science, scope matching is one of the most important concerns. I've always wondered why it doesn't have a (widely used) name.

Scope matching failures really do come up constantly in modern criticisms of new technologies, whether it's social media or AI. It probably happened centuries ago, too.