Movement alone isn’t progress, and there are dangers in ignoring dimensions and directions of acceleration.

Note: This is cross-posted from my Substack on Exploring Cooperation, responding to a recent post by Helen Toner. I think that post, and hopefully my long-form reply, will be of interest to the progress studies community.

Progress, Acceleration, and the Fragility of Civilizational Defaults

In debates about progress, societies seem trapped between two crude narratives. On one side, all progress is framed as inherently heroic: proof of human ingenuity, resilience, and ambition. On the other, any concern about technologies, much less calls for regulation or opposition, is cast as luddism and bureaucratic overreach fighting the engines of prosperity. But the simplistic counter-narrative, that technology leads to ruin while slow and natural changes created an imagined beautiful past, is also deeply confused.

We need to reject the simplistic choice between supposedly fragile dynamism leading to ruin and the imagined pastoral beauty of allowing only slow change.

It is true, as many in progress studies claim, that building things, deploying infrastructure, improving logistics, and expanding capacity are an almost unalloyed good, too often suppressed by governance choke points. Additionally, but differently, many argue that the solution is pushing the frontiers of AI, synthetic biology, or other civilization-scale technologies to accelerate progress.

But once the two claims are stated side by side, it becomes clear that conflating material progress with technological acceleration fails to distinguish between building things, on one hand, and pushing intentionally disruptive technologies on the other. That means both accelerationism and luddism are not just wrong but category errors, each lumping two very different things together. Yet too many technologists celebrate acceleration as an unmitigated virtue, even though the governance systems which make progress beneficial were built for a slower, more predictable world, and default to clumsy attempts at control.

This essay tries to explain how the frames are confused and what a better mental model looks like, and then, finally, in light of the risks from future artificial intelligence, gestures towards potential solutions that hopefully push away from pure technocracy.

Cooperation, Stability, and the Problem of Changing Games

As this series has repeatedly noted, cooperation is achievable in some games but not others, and is fragile in some games but not others. For example, prisoner’s dilemmas are fragile, since defection is myopically beneficial; stag hunts are marginally less fragile, since smart players can coordinate; and repeated prisoner’s dilemmas are more robust still, since even “dumb” strategies can sustain cooperation over the long term. But fundamentally, changing the rules changes the game, which changes the best strategies. And players can engineer these rule changes by reframing the game or adding features. Abram Demski pointed out that most prisoner’s dilemmas “are” stag hunts, “because there are potential enforcement mechanisms available.” That is, the players can change the game. But the games can also change depending on the time frame, or scale.
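To make the fragility contrast concrete, here is a minimal sketch (my own toy illustration, using standard textbook payoff values rather than anything from the post): in a one-shot prisoner’s dilemma defection dominates, but in the repeated game even a simple strategy like tit-for-tat sustains cooperation.

```python
# Toy illustration, not from the post: standard prisoner's dilemma payoffs,
# one-shot versus repeated play.

# Payoffs for (my_move, their_move); C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # mutual defection
}

def one_shot_best_response(their_move):
    """In a single round, defection is the best reply to anything."""
    return max("CD", key=lambda my: PAYOFF[(my, their_move)])

def play_repeated(strategy_a, strategy_b, rounds=100):
    """Iterated game; each strategy sees only the opponent's previous move (or None)."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

def tit_for_tat(opp_last):
    return "C" if opp_last in (None, "C") else "D"

def always_defect(opp_last):
    return "D"

print(one_shot_best_response("C"))                # "D": defection dominates a single round
print(play_repeated(tit_for_tat, tit_for_tat))    # (300, 300): cooperation sustained
print(play_repeated(tit_for_tat, always_defect))  # (99, 104): the defector edges this match, far below mutual cooperation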

However, predictable, cooperative systems require stable rules and shared priors, not constantly changing games. In game theory, what I will refer to as probability involves known unknowns; the dice may roll a six or a one, but the distribution is known, and players can optimize their strategies accordingly. This predictability alongside probability allows mixed strategies to work in games like Rock-Paper-Scissors, where the best move is to randomize predictably. The mutual knowledge of this predictable randomization allows for stability. Even in more complex games, probabilistic reasoning lets players balance risk and reward within well-defined boundaries. This means that even if outcomes are not deterministic, they are calculable, and players can settle into stable, if sometimes messy, equilibria.
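A small worked example of that predictability (again my own sketch, not from the post): against the uniform Rock-Paper-Scissors mix, every possible reply earns the same expected payoff, which is exactly why openly committing to the randomization costs nothing.

```python
# Sketch: the uniform mixed strategy in Rock-Paper-Scissors cannot be exploited.
from fractions import Fraction

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(mine, theirs):
    """+1 for a win, -1 for a loss, 0 for a tie."""
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

uniform = {move: Fraction(1, 3) for move in MOVES}  # the known, predictable randomization

for reply in MOVES:
    expected = sum(prob * payoff(reply, move) for move, prob in uniform.items())
    print(reply, expected)  # every reply has expected payoff 0
```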

Even structural factors can be probabilistic; games that repeat an indeterminate number of times allow more stable cooperation than those with a set finite number of rounds. And Robin Hanson pointed out that Aumann convergence can occur despite differing priors, i.e. unknown knowns, if the dispute rests on agreement about “pre-priors”: the players differ in their assessed probabilities, but agree about where those probabilities come from.

Technological change can be a one-time or one-dimensional event that is probabilistic, or it can be something else: destabilizing and uncertain. We’ve seen both, historically. At different points in history, the international situation changed as a function of the technologies being used - the game changed, so the players did different things. I previously argued that the failures to cooperate were contingent, and that the successes were perhaps more due to inevitable technological changes. But the technologies shifted in different ways at different times. The types of cooperation, and the slow and expensive wars, that kept things more or less stable before World War I gave way to alliance systems; and when the lethality of weapons changed, the fragility of those alliances and contingent events dragged the world, blundering and unaware, into structural uncertainty and a war that escalated to a previously unseen scale. The failure of economic and international cooperation, among other things, was destabilizing, and this led to WWII.

But this was around the point where the weapons got deadly enough that humanity could no longer survive escalation and total war, so the current international system was built to stabilize the international order, transforming chaos into economic competition. That is, increased international trade and economic interdependence built the modern world, redirecting human ambition from negative-sum battles to positive-sum competition.

This itself was a technological change, bolstered by a series of technological changes - but the changes were largely along understood dimensions. The social technology of capitalism and markets drove logistical technologies like containerized shipping and kanban, and allowed for a blossoming of telecommunication and computer technologies, culminating in the internet, big data, machine learning, and AI. For most of this time, the system adapted and stayed stable - the largest shift was the collapse of communism, which in many ways reinforced the trajectory of international cooperation, rather than destabilizing it.

Navigating the uncertain world requires a shared map, and the hope that the landscape isn’t shifting too rapidly to move safely.

What I’m calling uncertainty, in contrast to what I called probability, is the realm of unknown unknowns: of not knowing the payoff matrix, or not knowing the pre-prior probabilities that the other player is using. This is where cooperation won’t emerge, or if it does, will quickly fail, because strategies that rely on trust, enforcement, or even rational prediction are brittle to unexpected or rapid changes in payoffs. And that happens when the underlying assumptions are unstable or opaque. In deterministic or probabilistic iterated games with stable rules, stable solutions can emerge. Given time and enough stability for the players to figure out the game, there is a chance for cooperation. But with rapid changes and uncertainty, the issue isn’t merely that a player might lose a round - it’s that players can’t even tell whether they’re in the same game.

Change Doesn’t Always Destabilize

Rapid technological change can shift systems from the probabilistic to the uncertain, eroding the foundation on which cooperative systems are built: the expectation that, given enough time, the rules will hold still long enough for cooperation to pay off.

We have seen this throughout history. Noah Smith recently pointed out that "An armed society is a polite society," based on a study of “The Effects of Extra-Somatic Weapons on the Evolution of Human Cooperation toward Non-Kin”. The setup for that study is an evolutionary iterated prisoner’s dilemma played with or without more deadly weapons, and it proposes that as weapons got deadlier, they made intra-group cooperation more beneficial. Historically, societies with better weapons scaled, grew, and invaded their neighbors. But this was a single shift, along a single dimension.
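As a rough intuition pump for that mechanism (my own toy sketch, not the cited study’s actual model), consider how making mutual defection costlier, a crude stand-in for deadlier weapons, shrinks the share of cooperators needed before cooperation out-earns unconditional defection in a repeated prisoner’s dilemma.

```python
# Toy sketch (not the cited study's model): in an iterated prisoner's dilemma,
# lowering the mutual-defection payoff P - a crude stand-in for deadlier weapons -
# lowers the share of cooperators needed for tit-for-tat to out-earn always-defect.

ROUNDS = 10          # length of each pairwise interaction
R, S, T = 3, 0, 5    # reward, sucker, and temptation payoffs (illustrative values)

def cooperator_threshold(P):
    """Smallest tit-for-tat share x at which tit-for-tat's average payoff
    matches always-defect's, given mutual-defection payoff P."""
    tft_vs_tft = ROUNDS * R                 # cooperators paired together
    tft_vs_alld = S + (ROUNDS - 1) * P      # cooperator exploited once, then mutual defection
    alld_vs_tft = T + (ROUNDS - 1) * P      # defector exploits once, then mutual defection
    alld_vs_alld = ROUNDS * P               # defectors paired together
    # The fitness difference is linear in the cooperator share x: diff(x) = a*x + b
    a = (tft_vs_tft - alld_vs_tft) - (tft_vs_alld - alld_vs_alld)
    b = tft_vs_alld - alld_vs_alld
    if b >= 0:
        return 0.0                          # cooperation favored at any share
    return min(1.0, -b / a) if a > 0 else 1.0

for P in (2, 1, 0, -2):                     # deadlier weapons make mutual defection costlier
    print(f"mutual-defection payoff {P:>2}: cooperator share needed ~ {cooperator_threshold(P):.2f}")
```

In this toy version, as the mutual-defection payoff falls, cooperation is favored from ever smaller starting shares - one way to read the single-dimensional shift the study proposes.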

Looking more broadly at this dimension, at different points in history, weapon technologies led to offense dominance and improved governance technologies allowed the formation of empires, while at other times, technologies led to defense dominance, and the costs of scaling governance and the limits of communication and coordination led to more fragmented countries. Again, at each stage, the international situation was a function of the technologies being used - the game changed, so the players did different things. But the changes took decades, or centuries, allowing shifts into new stable equilibria. There has been a tension between defense-dominant and offense-dominant technology throughout history, but either way, even as the deadliness of wars increased and the world got smaller, the game didn’t shift in fundamentally unexpected ways.

But fast-forwarding to the end of World War II, the weapons got deadly enough that humanity could no longer survive escalation and total war. The game had changed - and the current international system was built to stabilize the international order. Proposals for a single world government from otherwise libertarian thinkers like Heinlein arose because it seemed like the only possible way for humanity to survive the nuclear age. Thankfully, this didn’t occur, and didn’t lead to the static future envisioned by those most worried about nuclear weapons.

As an aside, there is a contention, based on a paper by Ćirković, Sandberg, and Bostrom, that the observed success of cooperation could have been unlikely but lucky, and we just don’t know it. That is, there might be a post-hoc selection effect: only universes where we didn’t all die in a nuclear holocaust can ask how and why cooperation in a nuclear-enabled world occurs, and the counterfactuals are screened off by the so-called anthropic shadow.

However, at least in our contingent branch of observed history, mutually assured destruction changed the game towards cooperation, and instead of building defensive borders to keep out attackers, nations needed missile defenses and nuclear escalation to provide mutually assured destruction. This was not positive-sum, but it was stable for several decades - which brings us to the parts of history I actually remember, the late 1990s and the decades since.

Dynamism versus Stasis

Helen Toner’s excellent new writeup about Dynamism versus Stasis brings up Virginia Postrel’s The Future and Its Enemies, released in 1998. Toner explains that it predicted a critical flaw of much work on AI safety - and it did this far in advance, back when Eliezer Yudkowsky was first proposing ways to build AI, and Nick Bostrom was happily speculating that policymakers might simply make sure “AIs would not endanger human interests,” or that it might be fine anyway, and that “technophobics” could not possibly stop the creation of Superintelligence.

Postrel’s explanation of “the growing conflict over creativity, enterprise, and progress,” per the subtitle, is that the tension over the future and new technologies pits those on both sides of the political spectrum who are “deeply pessimistic about the future” against a “reactionary…cultural vitality.” In other words, as Toner quotes, “The Future and Its Enemies is a manifesto for what Postrel calls ‘dynamism—a world of constant creation, discovery, and competition.’ Its antithesis is ‘stasis—a regulated, engineered world... [that values] stability and control.’”

As with any dichotomy, this is a tremendous simplification. And as Toner admits, there is a new strand of AI safety that embraces more dynamism, with Stephen Casper, among others, arguing for a more dynamist view. I’d also add as evidence the alignment of progress studies groups, one of which now employs Postrel, with groups that are at least strongly concerned about AI risks.

But there are certainly AI-safetist arguments for stasis, most notably Bostrom, with his urn model of technologies that inevitably lead to doom, and several of his other papers that seem favorably disposed to global dictatorship to save the world, and the preference of many AI safety groups for fewer AI firms and an eventual AI future guarded by a single superintelligence and a nanny state keeping us safe. Toner correctly notes this rejection of dynamism by otherwise libertarian thinkers. And I’ve made claims in a similar direction, positing that technological fragility could be dangerous - albeit arguing against Bostrom’s overly simplistic urn model, replacing it with a slightly more ML-flavored analogy of an explore-exploit tradeoff in a landscape with unsafe areas. But we’ll need to come back to that, since as I’ll explain below, I now think my earlier paper doesn’t capture the dynamics correctly.

Empirical Libertarianism Under Acceleration

Postrel’s book doesn’t just explain the dichotomy, it presents a manifesto for dynamism—a world of constant creation, discovery, and competition—contrasted with stasis, a regulated, engineered world valuing stability and control. Without trying to explain my views on Postrel in depth, the manifesto emerges from her empirical embrace of libertarianism. That is, she says, with some justification, that libertarian capitalism works, and alternatives do not. Therefore, she promotes a system where individuals have freedom alongside “understandable, enduring, and enforceable commitments.” As a further tenet of dynamism, this should allow people to create “nested, competing frameworks of more specific rules.” So dynamism, per Postrel, is about allowing bottom-up emergent success, which only free markets are seen to allow.

But this closely resembles the heterotopian ideal I criticized recently, where proponents expect that there will be eventual cooperation despite lacking any central enforcement. And it brings to mind a quote which I can’t find a source for, that “Libertarians are like house cats, they’re convinced of their fierce independence while dependent on a system they don’t appreciate or understand.” And that’s reductive and unfair. In fact, I agree that the libertarian ideal is generally a good direction to move in from an overly stasist present. At the same time, I think Yudkowsky’s point about utilitarianism, paraphrased, applies here: move two-thirds of the way from stasist to dynamist, then stop.

But I think that acceleration, and the rapidly changing rules of games, make Postrel’s libertarian dynamism less viable in the future. That is, yes, there needs to be a strong foundation of centralized power to allow the enforcement of contracts. But instead of criticising libertarian failures, I want to point out that the (largely correct) empirical claims of success seem likely to fail in the face of accelerating change. The argument, however, is far from original. The problems created by dynamism in an accelerating world were explained by a book even older than Postrel’s - Toffler’s 1970 Future Shock.

Toffler’s central argument, which preceded the wide adoption of home computers, much less the internet, was that the accelerating pace of technological, social, and cultural change creates a psychological state of disorientation, anxiety, and overload. He concludes that unless societies consciously manage the pace and types of change, they are likely to become destabilized and dysfunctional. On the other hand, in defense of libertarian views, the historical record of authoritarianism and government control is at best one of ineffectiveness and in practice one of horrific outcomes, and the alternative is obviously superior. Similarly, interference in free markets has a checkered past - though those arguing this side rarely cite post-Soviet Russia as their exemplar of moving to free markets.

Postrel suggests we can allow humans to create order and impose rules where they want, but this falls prey to a central challenge to libertarianism: externalities. The fact is that innovation imposes costs, and as technology develops, people don’t always create systems that enable them to flourish. And it seems at least arguable that the chaos of the past 20 years, with the accelerated adoption of the internet, then social media, then algorithmic feeds, is effectively proof that Toffler was correct, around 30 years early.

But before we get back to acceleration, it seems worth going back to Bostrom’s concerns, through a different lens.

Misframing Progress: Building versus Transforming

The conflation of different kinds of 'progress' is evident in discussions around progress studies, libertarianism, and tech accelerationism. Virginia Postrel’s work, while insightful, pits dynamism against stasis, often aligning dynamism with a libertarian viewpoint that values constant creation and competition. However, this framing can obscure the specific challenges posed by different types of technological advancement, and why some who are libertarian on many issues become stasists concerning AI policy.

Progress studies advocates often group together the need for scientific progress and the need for reducing barriers to building. This makes sense, in that many of the barriers to building (literally and metaphorically) are barriers to both economic dynamism built on stability and to economic disruption based on technological advances. And both sets of barriers are bad! But unbounded advocacy for dynamism, without distinguishing the nature of the progress, is empirically on very shaky ground.

That is, technological progress is always good, except when it isn’t. Material progress (housing, logistics, energy) can accelerate, and each type of single-dimensional change leads to largely predictable changes in games. Identifying and removing governance choke points, and fighting generalized NIMBYism, is all in this category.

But not all progress leads to dynamism. To return to the central example, there is a debate about whether AI will be a normal technology or something meaningfully different. And the complex world in which indirect impacts of a technology can upset entirely unrelated dynamics seems to support the idea that each new technology is a wildcard.

So perhaps we need to return to Bostrom’s claim that some technologies have the potential to devastate civilization “by default”. In domains like AI and bioengineering, where we see huge global shifts coming, this view (which I will reject) says we should aim for the optimal amount of chaos given the expected impacts of the technology. That is, we might say that the challenge is to balance the benefits of dynamism with the risks of AI, or more broadly, with the costs of unbounded accelerationism, finding a place where people have freedom to innovate without accelerating to the point where the center cannot hold. This argues for the “compromise” position, where a healthy society is one in which calls for slowing down are not immediately discarded as stasist but are treated with appropriate caution given the risks.

But this is facile, and misunderstands the difference between evolution and transformation. The point is not that we should want a medium degree of progress, any more than we want a medium degree of negative externalities in general. Instead, the point is that we dislike imposed costs and instability, and like progress. It’s just that they sometimes conflict, and we want paths that both minimize costs and maximize benefits - ending up in the middle is at best a mediocre solution.

I will apologize for again leaning into “abstractions and thought experiments” which Toner (reasonably) distrusts, but I think, or at least hope, that a better mental model for technological progress and stability will clarify my point.

Replacing Urns With Chaotic Attractors in the Space of Technologies

Most technologies don’t have the potential for huge impacts. Above, we talked about whether technologies change the world along a single dimension, or not. This itself is not binary, because second-order effects have a tendency to be unpredictable. And there is a debate about whether AI will be a normal technology, or something meaningfully different.

But any large-enough change has the potential to create unanticipated impacts. My favorite example of this is that automobiles indirectly created the 1950s counterculture and the sexual revolution, by relaxing the formerly constraining social dynamics of neighborhoods and families, in which young adults couldn’t date without a dozen people knowing about it, and couldn’t meaningfully act out in ways that weren’t observed by neighbors. The complex world in which indirect impacts of a technology can upset entirely unrelated dynamics seems to support Bostrom’s idea that each new technology is a wildcard.

Change generates heat that is incompatible with static systems, and when static systems try to contain dynamism, the argument goes, they will crack.

At the same time, there is clearly a gradient from zippers, which changed how we wear clothing, to canning technology, which transformed food production and distribution, to automobiles, which transformed both transportation and social dynamics. Each had large impacts, but the narrower the immediate impact, the less it makes sense to say that new technologies have unpredictable eventual results. Yes, these complex adaptive and chaotic systems don’t have default positions or behaviors, they have chaotic dynamics. But, critically, chaotic does not mean random.

Bostrom’s idea is, again, that technologies are balls drawn from an urn of possible future technologies. Among other drawbacks, this analogy presupposes a single actor making discrete decisions. Bostrom’s claim is that black balls have the potential to devastate civilization “by default” - that is, unless there is “ubiquitous real-time worldwide surveillance” combined with a strongly static authoritarian power. This ignores both the fact that most changes aren’t revolutions and the fact that most revolutions don’t completely overturn the world. That is, it ignores the complexity of social and sociotechnical systems as complex adaptive systems.

Even far short of Postrel’s ideal world, we see that the invention and pursuit of technologies is a chaotic multi-actor system. The games being played by humans, the games that shape civilization, are dictated by where we sit in a high-dimensional technology space. Concretely, again, moving along the transport axis from horses to cars shifted the social dynamics of courtship. But most changes don’t shift the orbit of these systems materially.

To propose a new and more complex model, I would replace black balls with chaotic attractors: parts of the high-dimensional space of technology which materially shift the landscape. When any new technology is introduced, it changes the accessible options and payoffs in the game - instead of selling your goods at the local market, you can sell them online; or, instead of building defensive borders to keep out attackers, you need missile defenses and nuclear escalation to provide mutually assured destruction. But most of the time, the impact of a single new technology is minor, and doesn’t completely overturn the extant dynamic. Many of Bostrom’s white balls are in this category, or in dynamical terms, regular points in the space of technologies.

In other cases, perhaps analogous to Bostrom’s gray balls, new technologies lead to critical points and multi-path bifurcations, where many very different paths become accessible. Machine vision and big data enable mass surveillance and stable dictatorships, but it seems that this isn’t a necessary path, just a now-accessible one.

And finally, some parts of the space of technology lead to fixed points, where future dynamism ends. Perpetual dictatorship or extinction would qualify, and in our new model, Bostrom’s claim that black balls exist and will eventually be pulled from the urn collapses into a claim that there are civilization-ending global attractors in the space of technology.
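To make the dynamical-systems language a bit more concrete, here is a minimal illustration (mine, using the logistic map as a standard toy system rather than any model of technology) of the three behaviors the analogy leans on: fixed points, bifurcations into multiple accessible paths, and chaotic attractors that are deterministic but never settle.

```python
# Illustrative sketch only: the logistic map x -> r * x * (1 - x) shows how one simple
# dynamic can settle to a fixed point, bifurcate into multiple paths, or wander a
# chaotic attractor - bounded and deterministic, but never repeating. The parameter r
# is just the knob of this toy system, not a claim about technology.

def long_run_values(r, x0=0.2, warmup=500, keep=8):
    """Iterate the map past its transient, then report the values it keeps visiting."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    values = []
    for _ in range(keep):
        x = r * x * (1 - x)
        values.append(round(x, 4))
    return values

print("r = 2.8, fixed point (dynamism ends):       ", long_run_values(2.8))
print("r = 3.2, bifurcation (two accessible paths):", long_run_values(3.2))
print("r = 3.9, chaotic attractor (not random):    ", long_run_values(3.9))
```

The point of the toy is only that “chaotic” names structured, deterministic behavior: the system stays on its attractor even though its path is unpredictable in detail.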

Acceleration and Future Shock

This brings us back to Toffler’s argument. That is, the pace of technological progress is the speed at which we move through the space of technologies - and this isn’t in the hands of any one decision maker. The pace of change is brutal to attempts at preserving adaptive systems. Of course, we have learned from evolution that even if each change is random and strongly negative in expectation, the cumulative process of adopting positive changes and discarding negative ones can be tremendously beneficial. On the other hand, there is an optimal rate of mutation for evolution, and in biology, that optimal rate decreases as the organisms become more complex.

Relatedly, as we noted earlier, iterated games with stable rules lead to stable solutions, given time for the players to figure out the game. The faster we move through the space of technologies, the less time there is for stability to form. Toffler predicted the end of technocracy because it moves too slowly. That is, it is only effective when change is slow enough, and that is no longer the case. Even in the 1970s, Toffler noted that “its planning dealt with futures near at hand… One- or two-year forecasts are regarded as long-range planning.” This changed in the 1980s and 1990s with better tools for public policy, but the needle has been shifting back as technology accelerates. (And this seems like the right place to mention AI-2027, which has been attacked, but, outside the world of EA and AI safety, with few specific disagreements from those who dismiss its timelines, and even fewer competing forecasts that extend similarly detailed timelines further out.)

On the other hand, anticipating arguments like that of Postrel, Toffler notes that “Arguing that planning imposes values of the future, the anti-planners overlook the fact that non-planning does so, too often with far worse consequences.”

Preserving Dynamism Requires a Predictable World!

The dynamist position of wanting to enable understandable, enduring, and enforceable commitments is made far harder by rapid technological change. Similarly, the nested, competing frameworks of more specific rules only work when there is time for these systems to develop and be deployed.

But AI policy is happening in the context of rapidly shifting regimes, and there are actual conflicts between what enables freedom for actors within the AI industry and what enables dynamism for the rest of society. Stasists may try to fight back against this by ensuring that technological dynamism is arrested completely, so that we exit the chaotic space into a fixed point.

The social fabric frays under pressure, and must be patched.

Postrel embraces dynamism with examples like the inventiveness of beach volleyball, endorsing the idea that play is a valuable end, not a means. In contrast to Bostrom’s recent musings, it seems that Postrel’s view, and Toner’s, is that there is no utopia in stasis. Any “solved world” is a rejection of dynamism. But freedom requires not just the allowance for lack of structure, but underlying predictability. And innovation of all sorts imposes costs, and people don’t always create systems that enable them to flourish; the game theory for why has been a recurring theme of this series.

As a slight aside, I’ll mention a point from the previous post, and claim that what I called Friston blankets are a useful metaphor here. They define the boundaries within which a system can maintain coherence—separating internal beliefs and models from chaotic external noise. Social institutions function similarly, providing a membrane between individual cognitive systems and external change. When change is too rapid, that membrane breaks down—creating not just epistemic confusion, but an inability to maintain identity and cooperation across time. Rapid technological shifts risk collapsing those boundaries faster than they can be redrawn.

But returning to the point, the challenge here isn’t about finding an optimal pace of technological change and chaos, and the solution certainly doesn’t involve micromanaging people’s consumer decisions, nor government intervention in general. On the other hand, to pose this as a pair of Umeshisms: if you’ve never complained about government overreach, they’re not regulating enough, but if you’ve never complained that there ought to be a law, they are regulating too much.

But that’s not the situation in domains like AI and bioengineering. There, we see huge global shifts coming, where straight lines on graphs show that the world is poised to undergo a transition somewhere north of the agricultural revolution, and arguably more intense than the emergence of hominids. Given that, we should, in fact, consider the optimal amount of chaos given the expected impacts of the technology - and at least consider the claim that the optimum could lie at the degenerate solution.

Conclusion: Not Solutions, But Directions

After going down a tortuous route, responding to a straightforward claim that stasis versus dynamism explains the objections to AI safety by saying that it’s far more complex, and that my thinking explains the whole situation so much better, I will certainly admit that Toner is correct to have picked this as a key dimension, and I don’t think we disagree. And I’ll make the obvious and hackneyed rhetorical claim that we need something different: a synthesis rather than a compromise.

Before getting there, I would argue that the costs of unbounded accelerationism are ruinously high, and we need to find a place where people have freedom to innovate without accelerating to the point where the center cannot hold. Neither libertarian dynamism nor authoritarian stasis works under acceleration if societies can't culturally distinguish where acceleration is heroic versus where it is reckless. But this is a real risk-risk tradeoff. Navigating this very high dimensional and uncertain space requires far more engagement with the details, and less abstract theorizing of the type I have been enjoying here.

However, I will suggest a step towards a solution. Specifically, we need to stop valorizing change for the sake of change, and innovation for the sake of disruption. Here I find myself partly agreeing with Gary Marcus and the AI Ethics community that the Silicon Valley mindset of disruption at all costs is dangerous and destructive. But so is the universalist degrowth and stasist counter-argument. The conversation so far, however, has leaned towards technocratic solutions, which seems to be a mistake.

Civilizations that thrive in the future will not be those with the best planning algorithms or the most efficient regulatory choke points, nor those that embrace general technophobia. They will be those that valorize differential progress, and caution, even extreme caution, in the few places where it is warranted. While the call for 'synthesis rather than compromise' can be a cliché, the underlying principle is vital here. I will not outline the synthesis, but I reiterate that the costs of unbounded accelerationism are ruinously high.

We do not need better planning. Not more technocratic calibration. Not better governance choke points. Instead, across the board, we need systemic and holistic mental models, and a renewed civilizational immune system of norms, defaults, and cultural narratives that treat accelerating into the unknown as a decision that requires collective buy-in, while still valorizing building. So even though we don’t need collective democratic governance with checks and balances to decide whether to build bridges, or to default to not building them, we certainly need an active choice about whether or not to build smarter-than-human AI.

This requires epistemic humility under acceleration. It requires recognition that we can (and should!) favor material and concrete progress in specific domains, whether housing, logistics, and energy, or healthcare, longevity, and treatment of disease, but recognize that a few, like nuclear and bioweapons development, are radically and existentially dangerous. And given that, an unconstrained push for the most radical general purpose technology of all isn’t something that the least cautious, most risk-loving segments of society should be able to pursue unilaterally.
