Should we seek to make our scientific institutions more effective? On the one hand, rising material prosperity has so far been largely attributable to scientific and technological progress. On the other hand, new scientific capabilities also expand our powers to cause harm. Last year I wrote a report on this issue, “The Returns to Science in the Presence of Technological Risks.” The report focuses specifically on the net social impact of science when we take into account the potential abuses of new biotechnology capabilities, in addition to benefits to health and income.
The main idea of the report is to develop an economic modeling framework that lets us tally up the benefits of science and weigh them against future costs. To model costs, I start with the assumption that, at some future point, a “time of perils” commences, wherein new scientific capabilities can be abused and lead to an increase in human mortality (possibly even human extinction). In this modeling framework, we can ask if we would like to have an extra year of science, with all the benefits it brings, or an extra year’s delay to the onset of this time of perils. Delay is good in this model, because there is some chance we won’t end up having to go through the time of perils at all.
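To make the structure of that comparison concrete, here is a deliberately stripped-down sketch in Python. The utility function, parameter values, and the assumption that the perils arrive for certain are toy choices made for this post, not the report's calibration (the report, for instance, allows for the chance that the perils never materialize).

```python
# A crude toy model, not the report's: log utility of income, a small permanent
# income bump from an extra year of science, and a "time of perils" that begins
# at a fixed year and adds a constant amount of annual mortality risk thereafter.
import math

def expected_welfare(perils_start, science_bump, horizon=300, discount=0.02,
                     base_income=50_000, growth=0.02, peril_mortality=0.002):
    """Expected discounted log-income welfare per capita over a finite horizon."""
    alive = 1.0    # survival probability through the time of perils so far
    total = 0.0
    for t in range(horizon):
        income = base_income * (1 + growth) ** t * (1 + science_bump)
        if t >= perils_start:
            alive *= 1 - peril_mortality      # extra mortality once perils begin
        total += alive * math.log(income) / (1 + discount) ** t
    return total

# An extra year of science: benefits arrive, but the perils also start a year sooner.
accelerate = expected_welfare(perils_start=29, science_bump=0.001)
# An extra year of delay: forgo the bump, and the perils start a year later.
delay = expected_welfare(perils_start=31, science_bump=0.0)
print("accelerate:", round(accelerate, 2), " delay:", round(delay, 2))
```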
I rely on historical trends to estimate the plausible benefits to science. To calibrate the risks, I use various forecasts made in the Existential Risk Persuasion tournament, which asked a large number of superforecasters and domain experts several questions closely related to the concerns of this report. So you can think of the model as helping assess whether the historical benefits of science outweigh one set of reasonable (in my view) forecasts of risks.
What’s the upshot? From the report’s executive summary:
A variety of forecasts about the potential harms from advanced biotechnology suggest the crux of the issue revolves around civilization-ending catastrophes. Forecasts of other kinds of problems arising from advanced biotechnology are too small to outweigh the historic benefits of science. For example, if the expected increase in annual mortality due to new scientific perils is less than 0.2-0.5% per year (and there is no risk of civilization-ending catastrophes from science), then in this report’s model, the benefits of science will outweigh the costs. I argue the best available forecasts of this parameter, from a large number of superforecasters and domain experts in dialogue with each other during the recent existential risk persuasion tournament, are much smaller than these break-even levels. I show this result is robust to various assumptions about the future course of population growth and the health effects of science, the timing of the new scientific dangers, and the potential for better science to reduce risks (despite accelerating them).
On the other hand, once we consider the more remote but much more serious possibility that faster science could derail advanced civilization, the case for science becomes considerably murkier. In this case, the desirability of accelerating science likely depends on the expected value of the long-run future, as well as whether we think the forecasts of superforecasters or domain experts in the existential risk persuasion tournament are preferred. These forecasts differ substantially: I estimate domain expert forecasts for annual mortality risk are 20x superforecaster estimates, and domain expert forecasts for annual extinction risk are 140x superforecaster estimates. The domain expert forecasts are high enough, for example, that if we think the future is “worth” more than 400 years of current social welfare, in one version of my model we would not want to accelerate science, because the health and income benefits would be outweighed by the increases in the remote but extremely bad possibility that new technology leads to the end of human civilization. However, if we accept the much lower forecasts of extinction risks from the superforecasters, then we would need to put very very high values on the long-run future of humanity to be averse to risking it.
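To see the shape of that trade-off, here is a stylized back-of-the-envelope in Python. The annual extinction probability and the benefit figure are invented for illustration and will not reproduce the report's numbers (including the 400-year threshold); only the roughly 140x ratio between the two groups of forecasters is taken from the comparison above.

```python
# Invented numbers for illustration; these will not reproduce the report's figures.
benefit_per_year = 1.0            # benefit of one extra year of science, measured
                                  # in years of current social welfare (assumed)
p_superforecaster = 0.0001        # illustrative annual extinction risk
p_domain_expert = 140 * p_superforecaster   # the ~140x gap from the tournament

for label, p in [("superforecasters", p_superforecaster),
                 ("domain experts", p_domain_expert)]:
    # Accelerating pays off only if benefit_per_year > p * value_of_future, so the
    # break-even value of the future (in years of current welfare) is benefit / p.
    print(f"{label}: acceleration pays off only if the future is worth "
          f"< {benefit_per_year / p:,.0f} years of current welfare")
```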
Throughout the report I try to neutrally cover different sets of assumptions, but the report’s closing section details my personal views on how we should think about all this, and I thought I would end the post with those views (the following are my views, not necessarily Open Philanthropy’s).
My Take
I end up thinking that better/faster science is very unlikely to be bad on net. As explained in the final section of the report, this is mostly on the back of three rationales. First, for a few reasons I think lower estimates of existential risk from new biotechnology are probably closer to the mark than more pessimistic ones. Second, I think it’s plausible that dangerous biotech capabilities will be unlocked at some point in the future regardless of what happens to our scientific institutions (for example because they have already been discovered or because advances in AI from outside mainstream scientific institutions will enable them). Third, I think there are reasonable chances that better/faster science will reduce risks from new biotechnology in the long run, by discovering effective countermeasures faster.
In my preferred model, investing in science has a social impact of 220x, as measured in Open Philanthropy’s framework. In other terms, investing a dollar in science has the same impact on aggregate utility as giving a dollar each to 220 different people earning $50,000/yr. With science, this benefit is realized by increasing a much larger set of people’s incomes by a very small but persistent amount, potentially for generations to come.
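For some intuition about what that unit means, here is a rough sketch assuming logarithmic utility of income, which is how I read Open Philanthropy's framework; the scenario magnitudes are invented, and this is not the report's actual calculation.

```python
import math

# Benchmark unit: the utility gain from giving $1 to someone earning $50,000/yr,
# under log utility of income (my assumption here).
benchmark = math.log1p(1 / 50_000)        # ~= 1/50,000

# Hypothetical science scenario, with invented magnitudes: $1 of research funding
# eventually raises many people's incomes by a tiny but persistent fraction.
people = 10_000_000
income_boost = 1e-11       # proportional income gain per person per year
years = 50                 # how long the gain persists (no discounting here)

science_utility = people * math.log1p(income_boost) * years
print("multiplier vs. the $50,000/yr benchmark:", round(science_utility / benchmark))
```

With these made-up magnitudes the multiplier happens to land in the low hundreds; the report's 220x figure comes from its full calibration of health and income benefits, not from numbers like these.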
That said, while I think it is very unlikely that science is bad on net, I do not think it is so unlikely that these concerns can be dismissed. Moreover, even if the link between better/faster science and increased peril is weak and uncertain, the risks from increased peril are large enough to warrant their own independent concern. My preferred policy stance, in light of this, is to separately and in parallel pursue reforms that accelerate science and reforms that reduce risks from new technologies, without worrying too much about their interaction (with some likely rare exceptions).
It’s a big report (74 pages in the main report, 119 pages with appendices) and there’s a lot more in it that might be of interest to some people. For a more detailed synopsis, check out the executive summary, the table of contents, and the summary at the beginning of section 11. For some intuition about the quantitative magnitudes the model arrives at, section 3.0 has a useful parable. You can read the whole thing on arXiv.
The thing is, we have many options that aren't just accelerating or decelerating the whole thing. For example, we can single out gain-of-function research and cutting-edge AI capabilities, and accelerate everything except those.
Science is lots of different pieces; this is the idea of differential technological development.
"25% probability that the domain experts are right x 50% chance that it’s not too late for science to affect the onset of the
time of perils x 50% chance that science cannot accelerate us to safety = 6.25%"
This smells of the "multistage fallacy"
You think of something. List a long list of "nesscessary steps". Estimate middling probabilities for each step. And multiply them together for a small end stage probability.
The problem is that some of the steps, or all of them, often turn out not to be that necessary. And if a step did actually happen, it would often do so in a way that gave you strong new information about the likelihood of the other steps.
E.g. suppose a new device needs 100 new components to be invented, and you naively assume the probability is 50/50 for each component, which makes the device look essentially impossible. But then a massive load of R&D money gets directed towards making the device, and all 100 components are made.
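A quick simulation makes the point (all numbers made up):

```python
# Made-up numbers. Each of the 100 components looks like a coin flip on its own
# (0.5 * 0.99 + 0.5 * 0.01 = 0.5), but they all hinge on one shared factor:
# whether the big R&D push happens at all.
import random

def device_is_built():
    rd_push = random.random() < 0.5           # the shared factor
    p_component = 0.99 if rd_push else 0.01   # components are correlated through it
    return all(random.random() < p_component for _ in range(100))

trials = 100_000
hits = sum(device_is_built() for _ in range(trials))
print("naive independent estimate:", 0.5 ** 100)     # ~8e-31
print("simulated probability:     ", hits / trials)  # ~0.18, astronomically larger
```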
In this particular case, you are assuming a 25% chance that the domain experts are right about the level of X-risk, and in the remaining 75%, apparently X-risk is negligible. There is no possibility of "actually it's way, way worse than the domain experts predicted".
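Concretely, with invented numbers, truncating the scenario set like that can pull the expected risk well below what a distribution with a bad tail would give:

```python
# Invented numbers: a two-scenario split vs. one that allows a worse-than-expected tail.
p_right, expert_risk = 0.25, 0.01

truncated = p_right * expert_risk + 0.75 * 0.0
with_tail = 0.20 * expert_risk + 0.70 * 0.0 + 0.10 * (10 * expert_risk)
print(truncated, with_tail)   # 0.0025 vs 0.012 -- the ignored tail dominates
```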
"x 50% chance that it’s not too late for science to affect the onset of the
time of perils x 50% chance that science cannot accelerate us to safety "
If the peril takes the form of a single step, say the moment when the first ASI is turned on, then "accelerating to safety" is meaningless. You can't make the process less risky by rushing through the risky period faster. You can't make Russian roulette safer by playing it really fast, so that you're only at risk for a short time.
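A toy contrast (my numbers, nothing to do with the report's model):

```python
# Per-year exposure risk vs. a one-shot event: speed only helps in the first case.
annual_risk = 0.01
for years_exposed in (50, 25):
    p_catastrophe = 1 - (1 - annual_risk) ** years_exposed
    print(f"per-year peril, {years_exposed} years of exposure: "
          f"P(catastrophe) = {p_catastrophe:.1%}")   # 39.5% vs 22.2% -- speed helps

one_shot_risk = 1 / 6   # one chamber in six, however fast you pull the trigger
print(f"one-shot peril: P(catastrophe) = {one_shot_risk:.1%} regardless of speed")
```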