A while ago I did a long interview with Fin Moorhouse and Luca Righetti on their podcast Hear This Idea. Multiple people commented to me that they found our discussion of safety particularly interesting. So, I’ve excerpted that part of the transcript and cleaned it up for better readability. See the full interview and transcript here.


LUCA: I think there’s one thing here about breaking progress, which is this incredibly broad term, down into: well, literally, what does this mean? And thinking harder about the social consequences of certain technologies. There’s one way to draw a false dichotomy here: some technologies are good for human progress, and some are bad; we should do the good ones, and hold off on the bad ones. And that probably doesn’t work, because a lot of technologies are dual-use. You mentioned World War Two before… On the one hand, nuclear technologies are clearly incredibly destructive, and awful, and could have really bad consequences—and on the other hand, they’re phenomenal, and really good, and can provide a lot of energy. And we might think the same about bio and AI. But we should think about this stuff harder before we just go for it, or have more processes in place to have these conversations and discussions; processes to navigate this stuff.

JASON: Yeah, definitely. Look, I think we should be smart about how we pursue progress, and we should be wise about it as well.

Let’s take bio, because that’s one of the clearest examples and one that actually has a history. Over the decades, as we’ve gotten better and better at genetic engineering, there have been a number of points where people have proposed, and actually gone ahead and done, a pause on research, and tried to work out better safety procedures.

Maybe one of the most famous is the Asilomar Conference in the 1970s. Right after recombinant DNA was invented, some people realized: “Whoa, we could end up creating some dangerous pathogens here.” There was a particular simian virus that causes cancer, which got people thinking: “what if this gets modified and can infect humans?” And just more broadly, there was a clear risk. So they actually put a moratorium on certain types of experiments, got together about eight months later, had a conference, and worked out certain safety procedures. I haven’t researched this deeply, but my understanding is that it went pretty well in the end. We didn’t have to ban genetic engineering, or cut off a whole line of research. But also, we didn’t just run straight ahead without thinking about it, or without being careful. In particular, they matched the level of caution to the level of risk that each experiment seemed to pose.

This has happened a couple of times since—I think there was a similar thing with CRISPR, where a number of people called out “hey, what are we going to do, especially about human germline editing?” NIH had a pause on gain-of-function research funding for a few years, although then they unpaused it. I don’t know what happened there.

So, there’s no sense in barreling ahead heedlessly. I think part of the history of progress is actually progress in safety. In many ways, at least at a day-to-day level, we’ve gotten a lot safer, both from the hazards of nature and from the hazards of the technology that we create. We’ve come up with better processes and procedures, both in terms of operations—think about how safe airline travel is today; there are a lot of operational procedures that lead to safety—but also, I think, in research. And these bio-lab safety procedures are an example.

Now, I’m not saying it’s a solved problem; from what I hear, there’s still a lot of unnecessary or unjustified risk in the way we run bio labs today. Maybe there’s some important reform that needs to happen there. I think that sort of thing should be done. And ultimately, like I said, I see all of that as part of the story of progress. Because safety is a problem too, and we attack it with intelligence, just like we attack every other problem.

FIN: Totally. You mentioned airplanes, which makes me think… you can imagine getting overcautious with these crazy inventors who have built these flying machines. “We don’t want them to get reckless and potentially crash them, maybe they’ll cause property damage—let’s place a moratorium on building new aircraft, let’s make it very difficult to innovate.” Yet now air travel is, on some measures, the safest way to travel anywhere.

How does this carry over to the risks from, for instance, engineered pandemics? Presumably, the moratoria/regulation/foresight thing is important. But in the very long run, it seems we’ll reach some sustainable point of security against risks from biotechnology not from these fragile arrangements of trying to slow everything down and pause stuff (as important as that is in the short term), but from barreling ahead with defensive capabilities, like an enormous distributed system for picking up pathogens super early on. This fits better in my head with the progress vibe, because this is a clear problem that we can just funnel a bunch of people into solving.

I anticipate you’ll just agree with this. But suppose you’re faced with a choice between, on the one hand, “let’s get across-the-board progress in biotechnology, let’s invest in the full portfolio,” and on the other hand, “the safety stuff seems better than the risky stuff, let’s go all in on that, and make a bunch of differential progress there.” It seems like that second thing is not only better, but maybe an order of magnitude better, right?

JASON: Yeah. I don’t know how to quantify it, but it certainly seems better. So, one of the good things that this points to is that… different technologies clearly have different risk/benefit profiles. Something like a wastewater monitoring system that will pick up on any new pathogen seems like a clear win. Then on the other hand, I don’t have a strong opinion on this, but maybe gain-of-function research is a clear loss, or just clearly one of those things where the risk outweighs the benefit. So yeah, we should be smart about this stuff.

The good news is, the right general-purpose technologies can add layers of safety, because general capabilities can protect us against general risks that we can’t completely foresee. The wastewater monitoring thing is one, but here’s another example. What if we had broad-spectrum antivirals that were as effective against viruses as our broad-spectrum antibiotics are against bacteria? That would significantly reduce the risk of the next pandemic. Right now, dangerous pandemics are pretty much all viral, because if they were bacterial, we’d have some antibiotic that works against them (probably; there’s always a risk of resistance and so forth). But in general, the dangerous stuff recently has been viruses for exactly this reason. A similar thing: if we had some highly advanced kind of nanotechnology that gave us essentially terraforming capacity, climate change would be a non-issue. We would just be in control of the climate.

FIN: Nanotech seems like a worse example to me. For reasons which should be obvious.

JASON: OK, sure. The point was, if we had the ability to just control the climate, then we wouldn’t have to worry about runaway climate effects, and what might happen if the climate gets out of control. So general technologies can prevent or protect against general classes of risk. And I do think that also, some technologies have very clear risk/benefit trade-offs in one direction or the other, and that should guide us.

LUCA: I want to make two points. One is, just listening to this, it strikes me that a lot of what we were just saying on the bio stuff was analogous to what we were saying before about climate stuff: There are two reactions you can have to the problem. One is to stop growth or progress across the board, and just hold off. And that is clearly silly or has bad consequences. Or, you can take the more nuanced approach where you want to double down on progress in certain areas, such as detection systems, and maybe selectively hold off on others, like gain-of-function. This is a case for progress, not against it, in order to solve these problems that we’re incurring.

The thing I wanted to pick up on there… is that all these really powerful capabilities seem really hard. I think when we’re talking about general-purpose things, we’re implicitly having a discussion about AI. But to use the geoengineering example, there is a big problem in having things that are that powerful. Like, let’s say we can choose whatever climate we want… yeah, we can definitely solve climate change, or control the overshoot. But if the wrong person gets their hands on it, or if it’s a super-decentralized technology where anybody can do anything and the offense/defense balance isn’t clear, then you can really screw things up. I think that’s why it becomes a harder issue. It becomes even harder when these technologies are super general-purpose, which makes them really difficult to stop, or to keep from getting distributed and embedded. If you think of all the potential upsides you could have from AI, but also all the potential downsides you could have if just one person uses it for a really bad thing—that seems really difficult.

JASON: I don’t want to downplay any of the problems. Problems are real. Technology is not automatically good. It can be used for good or evil, it can be used wisely or foolishly. We should be super-aware of that.

FIN: The point that seems important to me is: there’s a cartoon version of progress studies, which is something like: “there’s this one number we care about, it’s the scorecard—gross world product, or whatever—and we should just drive that up, and that’s all that matters.” There’s also a nuanced and sophisticated version, which says: “let’s think more carefully about what things stand to be best over longer timescales, understanding that there are risks from novel technologies, which we can foresee and describe the contours of.” And that tells us to focus more on speeding up the defensive capabilities, putting a bunch of smart people into thinking about what kinds of technologies can address those risks, and not just throwing everyone at the entire portfolio and hoping things go well. And maybe if there is some difference between the longtermist crowd and the progress studies crowd, it might not be a difference in ultimate worldview, but: What are the parameters? What numbers are you plugging in? And what are you getting out?

JASON: It could be—or it might actually be the opposite. It might be that it’s a difference in temperament and how people talk about stuff when we’re not quantifying. If we actually sat down to allocate resources, and agree on safety procedures, we might actually find out that we agree on a lot. It’s like the Scott Alexander line about AI safety: “On the one hand, some people say we shouldn’t freak out and ban AI or anything, but we should at least get a few smart people starting to work on the problem. And other people say, maybe we should at least get a few smart people working on the problem, but we shouldn’t freak out or ban AI or anything.” It’s the exact same thing, but with a difference in emphasis. Some of that might be going on here. And that’s why I keep wanting to bring this back to: what are you actually proposing? Let’s come up with which projects we think should be done, which investments should be made. And we might actually end up agreeing.

FIN: In terms of temperamental differences and similarities, there’s a ton of overlap. One bit of overlap is appreciating how much better things can get. And being bold enough to spell that out—there’s something taboo about noticing we could just have a ton of wild shit in the future. And it’s up to us whether we get that or not. That seems like an important overlap.

LUCA: Yeah. You mentioned before, the agency mindset.

FIN: Yeah. As in, we can make the difference here.

JASON: I totally agree. I think if there’s a way to reconcile these, it is understanding: Safety is a part of progress. It is a goal. It is something we should all want. And it is something that we ultimately have to achieve through applied intelligence, just like we achieve all of our other goals. Just like we achieved the goals of food, clothing, and shelter, and even transportation and entertainment, and all of the other obvious goods that progress has gotten us. Safety is also one of these things: we have to understand what it is, agree that we want it, define it, set our sights on it, and go after it. And ultimately, I think we can achieve it.
