Great point. I wish we had more ideas about how to improve this. There are so many places we might try a fix: philanthropists might redirect funding, or we might provide career paths for these institutions' employees that span the space of current problems rather than just the one problem they work on.
Related factors: it appears that harms due to technological change are much smaller than benefits due to technological change, and also much smaller than harms that we already suffer on an ongoing basis (like deaths due to disease).
Great article! I think you expressed The Argument well and similarly to how I see it expressed by those who believe it.
I’m always surprised by how many tools are available to evaluate the argument…and that its fans rarely use any of them. It’s great to see you use some of these tools to critique it!
By way of comment: at the same time, your article leaves the argument looking more plausible (to me) than it probably is, just because your critiques don’t include as many angles as they might from progress studies (especially the scientific method and the history of technology). My attempted survey of the possible angles, some but not all of which you tackle:
Most catastrophic risks have a lot of evidence to tell us how much we should worry about them (the history of infectious disease outbreaks, nuclear accidents and near-accidents, etc). The argument never comes with any evidence. Worse yet, it’s rarely presented as a hypothesis to be falsified, but instead as speculation. This is especially surprising because their main catastrophic scenario is an accident, and accidents are one of the most common and well-studied kinds of risk (auto accidents, the Tacoma Narrows Bridge, airplane accidents, policies for canceling ferries in dangerously bad weather, nuclear power plant accidents, accidents involving Covid in a Wuhan laboratory vs. Wuhan seafood market, etc). Accidents are studied by all sorts of people including actuaries, government technocrats, and popular authors. Successful predictions of catastrophe (or anything) are almost always based on evidence.
More generally, the argument is usually presented without any scholarship or context outside of speculative philosophy. But there is lots of scholarship to know (beyond the above) from the histories of technology, human well-being, and predictions of apocalypse, and probably many other domains.
A cost-benefit analysis would be needed if the argument were to be made credible. Lifespans are about 35 years shorter in poor countries than they are for Japanese and Swiss women, and about 15 years longer for the richest US females than the poorest US males, so it’s a good estimate that 25+ years of life are lost by the average person due to risks that can be attacked by anti-poverty, public-health, and economic growth measures alone. Peter Attia is probably right that exercise, sleep, and food account for another 10 years. As you say, the argument glibly assumes that AI will solve pretty much any problem it needs to solve to kill us all. We have no reason to believe that, but those who do surely should also believe that the AI will solve any problem it needs to in order to gain those 35+ years of life (the 25+ above plus Attia’s 10) for the average person among the 8 billion of us. At this rate, even Scott’s estimated 2% risk of an AI apocalypse looks like a bargain. The context provided by cost-benefit analysis also reminds us of where we ourselves should focus our attention. And of course the likely upside of AI doesn’t just depend on a glib assumption of AI capabilities — AI is a general purpose technology, so progress studies tells us something about what upside to expect.
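The expected-value arithmetic behind that "looks like a bargain" claim can be sketched in a few lines. This is a minimal back-of-envelope, not a real cost-benefit analysis: the 35 life-years gained and Scott's 2% risk come from the comment above, while the population figure and the 40 remaining life-years lost per person in an apocalypse are my own rough assumptions.

```python
# Back-of-envelope expected-value comparison, under the assumptions stated above.
population = 8e9                 # roughly 8 billion people (assumption)
years_gained_per_person = 35     # lifespan gap recoverable per person (from the comment)
p_doom = 0.02                    # Scott's estimated apocalypse risk (from the comment)
years_lost_per_person = 40       # guessed average remaining lifespan lost in an apocalypse

expected_gain = population * years_gained_per_person * (1 - p_doom)
expected_loss = population * years_lost_per_person * p_doom

print(f"expected life-years gained: {expected_gain:.2e}")
print(f"expected life-years lost:   {expected_loss:.2e}")
print(f"ratio of gain to loss:      {expected_gain / expected_loss:.0f}x")
```

Even with the loss figure doubled or tripled, the expected gain dominates by more than an order of magnitude, which is the sense in which a 2% risk "looks like a bargain" here.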
Finally, the argument is rarely presented with a plausible mechanism.
Agreed. And we already have fake empathy on tap in novels, TV shows, and movies. It does have its pleasures, but it hasn't replaced us, and neither will fake empathy from bots.
What is the problem of missing novelty in drug development, and what can be done to fix it?
Great points. In (good) science, scope matching is one of the most important concerns. I've always wondered why it doesn't have a (widely used) name.
Scope matching failures really do come up constantly in modern criticisms of new technologies, whether it's social media or AI. They probably happened centuries ago too.
Great post. I would add that when we talk about lifting people out of poverty we're literally talking about increasing their consumption. Consumption is also a synonym for the economic part of well-being.
I'd venture to speculate that the main reason we aren't better at reducing poverty, increasing middle-class well-being, making life better for families, fighting disease, and other important goals, is because we don't pay enough attention to increasing median consumption.
What are your 3 favorite things that are coming soon in materials?
I love your essay's attempt to draw out the domains of infectious disease and cancer and compare them with regard to progress via broad measures and more specific measures.
As someone who reads a lot of the news about new papers in both domains, I see the two domains are very similar in this regard. The floor upon which cancer research rests is the commonalities between cancers. For decades there has been a War on Cancer, and lots of things are unified across cancer, including public health efforts to avoid carcinogens.
Most striking, look at the pipeline page for any pharma company with a major cancer effort -- like https://www.pfizer.com/science/oncology-cancer/pipeline -- and you'll find individual drugs being tested on cancers in multiple organs. For example, Pfizer calls one of its compounds Braftovi, and they have Phase 2 or 3 trials using it for melanoma, colorectal cancer, and lung cancer. The same goes for marketed cancer drugs, like Gleevec, which is used for leukemia and other blood cancers, gastrointestinal stromal tumors, and skin tumors.
So the pharma industry is clearly fighting a war against cancer as a whole. Sam's comment is perhaps a bit of an exaggeration -- I've never heard a cancer researcher say that cancers are "mostly unrelated to each other" -- but, more important, such comments come with the context that cancers are a bundle of closely related diseases, and everyone either knows this or thinks of them as a single disease.
I recommend https://en.wikipedia.org/wiki/The_Hallmarks_of_Cancer as the major framework that pulls together cancer into the major things that make a cancer succeed, including a somatic-mutation-rich environment, chronic inflammation, ability to replicate endlessly, evade the immune system, avoid programmed cell death, grow new blood vessels, and metastasize.
Another key to understanding cancer: knowing about the Oncogenic signaling pathways (p325 here: https://www.cell.com/cell/pdf/S0092-8674(18)30359-3.pdf). As far as I can tell these are much closer to being the actual "different diseases" in play than the various cancers named by organ. I don't know how these pathways work, or even how exactly cancer breaks them -- that's a lot of molecular biology! -- but just knowing their names I find the daily news about the cancer literature more legible.
Looping back to the comparison made in your essay between infectious diseases and cancers, I'm struck that we started fighting infectious disease a lot earlier. For example John Snow figured out that you could avert cholera by keeping sewage out of the water supply in about 1850, and if I recall Steven Johnson's book about public health, this was the beginning of a whole series of public health interventions against infectious disease. The idea of wearing sunscreen to avoid skin cancer seems to have come in during my childhood, over 100 years later. Same for the idea of averting lung cancer by not smoking.
I would list 3 major advances that have reduced infectious disease: public health measures, vaccines, and antibiotics. For cancer, the most effective measures are probably public health measures, surgery, chemo, and now immuno-oncology. Those best tools are all less effective than the best tools we use to fight infectious disease. That probably has a lot to do with how late the groundbreaking research started. Immuno-oncology only made its transition from minor theory to major treatment in about 2010. At a research level, treating cancer requires understanding molecular biology, and that discipline only really exploded in about 2000, when sequencing became cheap enough that we first sequenced a single human genome. There are obviously more big drug discoveries in the works, like CAR-T and cancer vaccines -- let's hope those scale, and let's hope there are more new discoveries on the way.
So well put, the "succeed by its own lights" thing is such an important idea, and probably not articulated enough.