This is a draft essay to be published on The Roots of Progress; I wanted to share it here first for feedback. Please comment! [UPDATE: this is now revised and published]


We live in a dangerous world. Many hazards come from nature: fire, flood, storm, famine, disease. Technological and industrial progress has made us safer from these dangers. But technology also creates its own hazards: industrial accidents, car crashes, toxic chemicals, radiation. And future technologies, such as genetic engineering or AI, may present existential threats to the human race. These risks are the best argument against a naive or heedless approach to progress.

So, to fully understand progress, we have to understand risk and safety. I’ve only begun my research here, but what follows are some things I’m coming to believe about safety. Consider this a preliminary sketch of a philosophy of safety:

Safety is a part of progress, not something opposed to it

Safety is properly a goal of progress: safer lives are better lives. Safety is not automatic, in any context: it is a goal we must actively seek and engineer for. This applies both to the hazards of nature and to the hazards of technology.

Historically, safety has been a part of progress across all domains. Technology, industry, and wealth help guard against natural risks and disasters. And improved safety has also been a key dimension of progress for technologies from cars to surgery to factories.

We have even made progress itself safer: today, new technologies are subject to much higher levels of testing and analysis before being put on the market. For instance, a century ago, little to no testing was performed on new drugs, sometimes not even animal testing for toxicity; today they go through extensive, multi-stage trials. This incurs cost and overhead, and it certainly reduces the rate at which new drugs are released to consumers, but it would be wrong to describe drug testing as being opposed to pharmaceutical progress—improved testing is a part of pharmaceutical progress.

Safety is a human problem, and requires human solutions

Inventions such as pressure valves, seat belts, or smoke alarms can help with safety. But ultimately, safety requires processes, standards, and protocols. It requires education and training. It requires law. There is no silver bullet: any one mechanism can fail; defense in depth is required.

Improving safety requires feedback loops, including reporting systems. It greatly benefits from openness: for instance, the FAA encourages anonymous reporting of safety incidents, and it is more lenient in penalizing safety violations that are self-reported.

Safety requires aligned incentives: Workers’ compensation laws, for instance, aligned the incentives of factories and workers and led to improved factory safety. Insurance helps by aligning safety procedures with profit motives.

Safety benefits from public awareness: The workers’ comp laws came after reports by journalists such as Crystal Eastman and William Hard. In the same era, a magazine series exposing the shams and fraud of the patent medicine industry led to reforms such as stricter truth-in-advertising laws.

Safety requires leadership. It requires thinking statistically, and this does not come naturally to most people. Factory workers, for instance, did not want to use safety measures that were inconvenient or slowed them down, such as goggles, hard hats, or guards on equipment; it took sustained pressure from management to make such measures routine.

We need more safety

When we hope for progress and look forward to a better future, part of what we should be looking forward to is a safer future.

We need more safety from existing dangers: auto accidents, pandemics, wildfires, etc. We’ve made a lot of progress on these already, but there’s no reason to stop improving as long as the risk is greater than zero.

And we need to continue to raise the bar for making progress safely. That means safer ways of experimenting, exploring, researching, inventing.

We need to get more proactive about safety

Historically, a lot of progress in safety has been reactive: accidents happen, people die, and then we figure out what went wrong and how to prevent it from recurring.

The more we go forward, the more we need to anticipate risks in advance. Partly this is because, as the general background level of risk decreases, it makes sense to lower our tolerance for risks of all kinds, and that includes the risks of new technology.

Further, the more our technology develops, the more we increase our power and capabilities, and the more potential damage we can do. The danger of total war became much greater after nuclear weapons; the danger of bioengineered pandemics or rogue AI may be far greater still in the near future.

There are signs that this shift towards more proactive safety efforts has already begun. The field of bioengineering has proactively addressed risks on multiple occasions over the decades, from recombinant DNA to human germline editing. The fact that the field of AI has been seriously discussing risks from highly advanced AI well before it is created is a departure from historical norms of heedlessness. And compare the lack of safety features on the first cars to the extensive testing (much of it in simulation) being done for self-driving cars. This shift may not be enough, or fast enough—I am not advocating complacency—but it is in the right direction.

This is going to be difficult

It’s hard to anticipate risks—especially from unknown unknowns. No one guessed at first that X-rays, which could be neither seen nor felt, were a potential health hazard.

Being proactive about safety means identifying risks via theory, ahead of experience, and there are inherent epistemic limits to this. Beyond a certain point, the task is impossible, and the attempt becomes “prophecy” (in the Deutsch/Popper sense). But within those limits, we should try, to the best of our knowledge and ability.

Even when risks are predicted, people don’t always heed them. Alexander Fleming, who discovered the antibiotic properties of penicillin, predicted the potential for the evolution of antibiotic resistance early on, but that didn’t stop doctors from massively overprescribing antibiotics when they were first introduced. We need to get better at listening to the right warnings, and better at taking rational action in the face of uncertainty.

Safety depends on technologists

Much of safety is domain-specific: the types of risks, and what can guard against them, are quite different when considering air travel vs. radiation vs. new drugs vs. genetic engineering.

Therefore, much of safety depends on the scientists and engineers who are actually developing the technologies that might create or reduce risk. As the domain experts, they are closest to the risk and understand it best. They are the first ones who will be able to spot it—and they are also the ones holding the key to Pandora’s box.

A positive example here comes from Kevin Esvelt. After coming up with the idea for a CRISPR-based gene drive, he says, “I spent quite some time thinking, well, what are the implications of this? And in particular, could it be misused? What if someone wanted to engineer an organism for malevolent purposes? What could we do about it? … I was a technology development fellow, not running my own lab, but I worked mostly with George Church. And before I even told George, I sat down and thought about it in as many permutations as I could.”

Technologists need to be educated both in how to spot risks, and how to respond to them. This should be the job of professional “ethics” fields: instead of problematizing research or technology, applied ethics should teach technologists how to respond constructively to risk and how to maximize safety while still moving forward with their careers. It should instill a deep sense of responsibility in a way that inspires them to hold themselves to the highest standards.

Progress helps guard against unknown risks

General capabilities help guard against general classes of risk, even ones we can’t anticipate. Science helps us understand risk and what could mitigate it; technology gives us tools; wealth and infrastructure create a buffer against shocks. Industrial energy usage and high-strength materials guard against storms and other weather events. Agricultural abundance guards against famine. If we had a cure for cancer, it would guard against the accidental introduction of new carcinogens. If we had broad-spectrum antivirals, they would guard against the risk of new pandemics.

Safety doesn’t require sacrificing progress

The path to safety is not through banning broad areas of R&D, nor through a general, across-the-board slowdown of progress. The path to safety is largely domain-specific. It needs the best-informed threat models we can produce, and specific tools, techniques, protocols, and standards to counter them.

If and when it makes sense to halt or ban R&D, the ban should be either narrow or temporary. An example of a narrow ban would be one on specific types of experiments that try to engineer more dangerous versions of pathogens, where the benefits are minor (it’s not as if these experiments are necessary to advance biology at a fundamental level) and the risks are large and obvious. A temporary ban can make sense until a particular goal is reached in terms of working out safety procedures, as at the 1975 Asilomar conference.

Bottom line: we can—we must—have both safety and progress.


12 comments

Jason: 

If you haven’t already read the work of the late Aaron Wildavsky, I would highly recommend it because he devoted much of his life’s work to the exact issue you tee up here. I’d recommend two of his books to start. The first is Risk and Culture co-authored with Mary Douglas, and the second is his absolutely remarkable Searching for Safety, which served as the inspiration for my book on Permissionless Innovation.

Here are a few choice quotes from Risk and Culture:

  • “Relative safety is not a static but rather a dynamic product of learning from error over time. . . . The fewer the trials and the fewer the mistakes to learn from, the more error remains uncorrected.” (p. 195)
  • “The ability to learn from errors and gain experience in coping with a wide variety of difficulties, has proved a greater aid to preservation of the species than efforts to create a narrow band of controlled conditions within which they would flourish for a time… “ (p. 196)
  • “If some degree of risk is inevitable, suppressing it in one place often merely moves it to another. Shifting risks may be more dangerous than tolerating them, both because those who face new risks may be unaccustomed to them and because those who no longer face old ones may be more vulnerable when conditions change.” (p. 197)

And then in Searching for Safety, Wildavsky went on to build on that logic as he warned of the dangers of “trial without error” reasoning, and contrasted it with the trial-and-error method of evaluating risk and seeking wise solutions to it. Wildavsky argued that wisdom and safety are born of experience and that we can learn how to be wealthier and healthier as individuals and a society only by first being willing to embrace uncertainty and even occasional failure. I’ve probably quoted this passage from that book in more of my work than anything else I can think of:

  • “The direct implication of trial without error is obvious: If you can do nothing without knowing first how it will turn out, you cannot do anything at all. An indirect implication of trial without error is that if trying new things is made more costly, there will be fewer departures from past practice; this very lack of change may itself be dangerous in forgoing chances to reduce existing hazards. . . . . Existing hazards will continue to cause harm if we fail to reduce them by taking advantage of the opportunity to benefit from repeated trials.” 

In my next book on AI governance, I extend this framework to AI risk. 

Ah, thanks, I have read a little bit of Searching for Safety in the past, but had forgotten about this.

I largely agree with this approach. The one problem is that when dealing with catastrophic risks, you can't afford to have an error. In the case of existential risk, there is literally no way to learn or recover from mistakes. In general, the worse the risk, the more you need careful analysis and planning up front.

Agreed, Jason. I’ll add that it’s trendy among the longtermists to speak of biosecurity, but it seems obvious to me that the FDA, not the legality of admittedly dangerous research, is the biggest obstacle to genuine biosecurity. We could have had vaccines for Covid by spring 2020, and without Eroom’s Law we might have had them by January 2020. And we could have had strain updates in real time. So an agency that was designed to make us safe made us less safe, and many people focused on safety in this domain continue to miss the forest for the trees. Often arguments about safety are problematic because of these kinds of failures, not because safety isn’t a valuable form of progress (it is).

This is good. I would like (and was expecting to read) some more explicit discussion of exaggerated safety demands, which is sometimes called "safetyism." Clearly the idea that demands for safety shouldn't hamper progress and quality of life too much is present in this essay (and in much of progress studies in general), but it feels weirdly unacknowledged right now.

I found myself nodding along to most of this and really appreciate the positive vision and integration of safety and progress. Two critical comments:

In the last section you basically assert an alternative to the supposed progress/safety tradeoff, one which I prefer. But it left unanswered (and even unasked) a lot of the live questions I have about this topic. It seems like there are broader cultural patterns of (a) overblowing or even completely manufacturing risks, and (b) decreasing our tolerance for risk in a way that is less than intentional. And these often seem like the major source of objections to a pro-progress approach! 

Second: there's also a moral dimension to safety. I doubt there are processes that completely safeguard us from nuclear war, absent enough good people to maintain and man those processes. Our ongoing mastery of the natural world demands that we stop lagging in our moral mastery of ourselves; insofar as that lag is getting worse, that's a risk that's probably irreducible.

Hi Jason,

A few comments. I like the basic idea, but the article seems too fawning and does not give enough of a sense of the Scylla and Charybdis of where "safety" goes right and where it can go wrong. The hidden context, I believe, is the high-profile catalyzing exposure of x-risk and longtermist ideas to the broader public.

Here are a few thoughts on some of your statements.

"Safety is properly a goal of progress."

Certainly safety is not properly a goal of progress, any more than a seatbelt is a goal of fast transportation. Safety is one method of achieving progress by reducing risks, costs, or "the error rate."

"We’ve made a lot of progress on these already, but there’s no reason to stop improving as long as the risk is greater than zero."

The law of diminishing returns applies to safety as to everything else. It's precisely when people talk about safety as though "every little bit helps" that we get nonsensical regulation, unnecessarily high costs, disastrous environmental review, and IRBs that kill social science. There must be a reasonable way of deciding which risks must continually be decreased and which we can and should live with. Safety can be a cudgel against progress, even though it can also be a helpmate to it.

"Being proactive about safety means identifying risks via theory, ahead of experience, and there are inherent epistemic limits to this."

This point is good and could use expansion. What are the limits? When are the epistemic limitations greater, and when are they lesser?

"This should be the job of professional “ethics” fields: instead of problematizing research or technology, applied ethics should teach technologists how to respond constructively to risk and how to maximize safety while still moving forward with their careers."

I don't know what it means to "problematize research"; research seems problem-ridden already. But this comment also seems to contradict an earlier point where you stated that engineers are best situated to work on the safety of the systems they build. Which is it? The engineers or the ethicists?

I have a bioethicist on my team, and I think he's invaluable because he offers a coherent method for thinking through ethical problems (especially end-of-life issues and informed consent issues). But it's important to recognize that his particular method is dependent, as all ethical systems are, upon a particular metaphysics, to use a dirty term loathed by most ethicists. Not that we have to wait for everyone to have the same metaphysics to work on big safety or big progress - we could never do anything in that case. For in that case, we'd be stuck like Russ Roberts, in his articles against utilitarianism, unable to judge whether free trade is worth the cost of one person's workforce participation. But metaphysics does offer some guidance about tradeoffs we should and shouldn't make by providing useful, if sometimes vague, definitions of human life, human flourishing, and human moral responsibility. There are real differences between people on these definitions, which lead to very different ethical conclusions about which tradeoffs we should and should not make. Indeed, how much safety we should invest in is informed by our metaphysical and meta-ethical assumptions.

I think trained ethicists with an engineering or medical degree are extremely helpful. In our space that's a minority opinion. But, like having a lawyer well-versed in case law, the excellent ethicist can quickly see different implications and applications of a process, and can provide advice on potential pitfalls, low-cost safety features, and misapplications whose consequences were not immediately obvious.

This is now revised and published, thanks all for your comments! Some key revisions:

  • Calling safety a *dimension* of progress instead of a “part”
  • Discussion of tradeoffs between the dimensions
  • Discussion of sequencing in general and DTD in particular

Very well written. 

An exception to the theory that "Safety is properly a goal of progress" is research and development for the military. One could argue that guns, tanks, and drones are being developed to protect the citizens of the country, but by causing harm to the opposing faction. I guess it could be rephrased as "Safety of the consumer/user/beneficiary is a goal of progress." It is pedantic at best. The development of nuclear weapons opens a whole new can of worms, where one can argue that each country must develop the weapons for its own security. In either case, as you mentioned, the risks of these technologies must be determined and mutually agreed upon as policy in a democratic setup (which includes technocrats, politicians, and civil society) before proceeding with development.

"If and when it makes sense to halt or ban R&D, the ban should be either narrow or temporary."

I think the counterexample of nuclear engineering is instructive. It was not halted, but remains heavily regulated internationally, and for that reason, progress in nuclear power has been greatly slowed - but so has proliferation.

It was halted de facto if not de jure, at least in the US.

I think if it had not been stunted, we'd have lots of cheap, reliable, clean nuclear power, and I doubt that nuclear proliferation would have been significantly accelerated—do you think it would have been?

There's a reasonable argument that we all lose out by calling programmers software engineers, given that they lack the training in forensic analysis, risk assessment, safety factors, and failure-mode identification that other engineering disciplines (e.g. civil, mechanical, materials, electrical) require.

https://www.theatlantic.com/technology/archive/2015/11/programmers-should-not-call-themselves-engineers/414271/

A lot of energy in the AI safety conversation goes into philosophizing and trying to reinvent from scratch systems of safety engineering that have been built and refined for centuries around everything from bridges to rockets to industrial processes and nuclear reactors, and we are worse off for it.

I am curious how we can use the theory of AI safety (although it is still in its formative stages) to address safety and progress. Progress safety seems very important; it could be approached from the policy level or through exploratory engineering. Perhaps there is an overlap between AI safety and progress safety. Overall, a very intriguing idea to think about; looking forward to the full essay.

It seems to be related to moral uncertainty: that is, when trying to decide whether to pursue a technology that carries a possibility of existential risk but has positive externalities that may outweigh the hazards, do you choose to pursue it or not?