Hi everyone!

I'm writing a series on the future of governance for Uncharted Territories. This specific one looks at parallels in decentralization across fields to give a sense of what democracy could look like if it were adapted to the Internet era. I'd love your feedback before I publish! Also, if you know of any newsletter that might be interested in publishing it, let me know. Here is the article:


Fish don’t realize they’re in water.

We don’t realize what alternatives to democracy will emerge because we’re submerged in the current system.

Democracy is the worst form of Government except for all those other forms that have been tried. ―Winston Churchill

When you look at ideas to improve democracy, you find things like alternative ways to vote for your leaders, or delegating your vote altogether. These are nice ideas, and they can change the party in government, as recently happened in Australia. But they’re superficial. The Internet is a bulldozer: it will uproot democracy and grow something new from scratch. To understand what will blossom in its place, we need to reprogram our brains first. We’re too used to the current system to realize there are alternatives.
 

Better Decision Systems

Field Marshal von Moltke

Gentlemen, I demand that your divisions completely cross the German borders, completely cross the Belgian borders and completely cross the river Meuse. I don’t care how you do it, that’s completely up to you.―Oberst Kurt Zeitzler, Chief of Staff Panzergruppe Kleist, 13 May 1940

Napoleon shattered and humiliated Prussia’s army in 1806. For decades, Prussians would obsess over it: What went wrong? What should we have done differently? How can we make sure this never happens again? What Prussia discovered led it from victory to victory for nearly a century, until it had conquered nearly all of Europe.

It would take more than six decades for Prussia’s army to face France again, but when it did, it obliterated the enemy. It was Germany’s[1] turn to conquer Paris.

 

Entry of the German troops into Paris at the Arc de Triomphe, on March 1, 1871, at the end of the Franco-Prussian War.

The world had changed. Mass conscription drafted hundreds of thousands of soldiers into monstrous armies. Trains could move them quickly across large distances. Telegraphs could send new information from the front almost instantaneously. New rapid-fire guns and artillery required much faster and better-organized logistics to supply their ammo. The head of Prussia’s army, Field Marshal von Moltke, realized the ramifications of these developments: you can’t tell everybody what to do anymore. There are too many decisions to make. You need more people making independent decisions, and those people need to be closer to the action to make them.

Field Marshal von Moltke the Elder

But if leaders on the ground are making independent decisions, how can you ensure they are all fighting together, towards the same goal? The answer von Moltke came up with was Mission Command.

While Napoleon controlled and directed all his forces in battle from a central headquarters behind the front lines, von Moltke gave each leader very clear missions, but didn’t tell them how to achieve them. These leaders, in turn, were meant to do the same with their subordinates.

The result was a much more dynamic system in which large numbers of people could be efficiently coordinated without a central planner telling everybody what to do: leaders just gave their subordinates operating principles. In the wake of World War II, NATO in general and the US in particular studied these principles and adopted them. To this day, mission command is the US Army’s main approach[2], and its principles are taught in MBA programs across the world.

In other words, when there are vast numbers of people to coordinate, and they have access to most of the information about the problem they must solve, decision-making should not be centralized. A better information management system is to give the people close to the problem the mission—the why and the what—give them the right incentives, and let them figure out how to achieve the mission.

Crucially, pulling that off a thousand years, or even a century, earlier would have been hard or impossible: without railways or the telegraph, there was no way to make this new information technology work. Decentralization was enabled by faster communications.

 

Capitalism

A century before von Moltke, Adam Smith wrote in The Wealth of Nations about what he called the invisible hand, a concept that's fundamentally the same as von Moltke's mission command.

In visible hands

Horror stories from communist countries are akin to those from command and control militaries. 

Production managers frequently met their output goals in ways that were logical within the bureaucratic system of incentives, but bizarre in their results. If the success of a nail factory's output was determined solely by numbers, it would produce extraordinary numbers of pinlike nails; if by weight, smaller numbers of very heavy nails. One Soviet shoe factory manufactured 100,000 pairs of shoes for young boys instead of more useful men's shoes in a range of sizes because doing so allowed them to make more shoes from the allotted leather and receive a performance bonus. ―Social Problems in a Free Society: Myths, Absurdities, and Realities, by Myles J. Kelleher

"Who needs such a nail?"   "It’s rubbish! What matters is that we immediately executed the plan."

My company, Ankorstore, is a marketplace with over one million products, which is still a microscopic fraction of all the goods and services available in the world. If there are millions or billions of products in the world, how is a central group of people meant to figure them all out? How would they know the right features, the right prices, the right marketing, the right explanation to give to customers? They can’t.

Capitalism is an information technology: It’s a way for people to decide who gets what. It decentralizes decision-making by pushing it down to individuals. Each person has to figure out the best way to sell something to others, which means they have to deeply understand the needs of their customers to serve them well. They become experts in the microproblem they’re solving, so they solve it much better than a bureaucrat ever could.

Capitalism also comes with a clear mission and the right incentive: If you make money, you can keep it. The more you help others, the richer you become. 

Decentralization is harder to pull off than centralization, but if you figure out the right rules, it’s orders of magnitude better.

 

Ants

From my article You’re a Neuron:

How do ants communicate with each other? They have some sound and touch, but the main communication is through chemical signals. A bit like cells.

Single ants are pretty dumb and weak, but the colony is very resilient and shows complex behavior. The complexity emerges from the interaction between the ants. That interaction is decentralized: there’s not one ant or even a group of ants telling the rest what to do. The complexity emerges from the simple interaction behaviors between the simpler ants.

So you can see an ant as an animal, or you can see it as a cell, with the animal being the colony. The only difference from you and me is that the colony’s cells roam around, while yours are all contained within your body.
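To make that emergence concrete, here’s a toy simulation in Python (every number in it is invented) of how pheromones alone can steer a colony toward the shorter of two paths to food, with no ant in charge:

```python
import random

random.seed(1)

# Toy pheromone model: ants pick between a short and a long path to food
# in proportion to the pheromone on each, and every trip deposits
# pheromone inversely proportional to the path's length.
lengths = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}   # trails start undifferentiated
EVAPORATION = 0.02

for step in range(1000):
    # Each ant's choice follows the pheromone, not any central order.
    path = random.choices(list(pheromone), weights=pheromone.values())[0]
    for p in pheromone:
        pheromone[p] *= (1 - EVAPORATION)      # old trails fade
    pheromone[path] += 1.0 / lengths[path]     # deposit along the chosen path

share = pheromone["short"] / sum(pheromone.values())
print(f"pheromone share on the short path: {share:.0%}")  # typically well above 90%
```

The positive feedback loop (more pheromone attracts more ants, which deposit more pheromone) does all the coordinating.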

 

Your Brain

The ultimate decentralized decision-maker.

Do you think that somewhere inside your brain there’s a part making most of the decisions? It certainly feels like it, but is it true? Is “you” somewhere inside?

If that’s the case, let’s call that thing your homunculus. What part of the homunculus makes the decisions?

 

Source: Infinite Regress of Homunculus, Jennifer Garcia

No. In brains, no neuron is in charge. Each neuron sends the same kind of very simple message to the others: it fires, or it doesn’t. It sends that message out, and a bunch of other neurons listen in. If the signals a neuron receives from others are strong enough, it fires too and propagates the message. That’s it.

But from that simplicity emerge sub-groups of neurons that act together. These sub-systems are connected with other sub-systems and coordinate their firing. The messages travel to higher- and higher-level systems in the brain, until a final decision is made.

So connections between neurons are decentralized information technologies that enable the most advanced intelligence we know to emerge from very basic building blocks.
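If you want to see just how simple that rule is, here is a minimal sketch in Python (the signal strengths and threshold are made up for illustration):

```python
# A minimal sketch of the threshold rule described above
# (hypothetical signal strengths, not a biological model).
incoming = {"neuron_a": 0.9, "neuron_b": 0.2, "neuron_c": 0.7}

def fires(signals, threshold=1.5):
    # A neuron fires if the summed strength of incoming signals
    # crosses its threshold -- no central controller involved.
    return sum(signals.values()) >= threshold

print(fires(incoming))  # True: 1.8 >= 1.5, so the message propagates
```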

Elon’s Twitter

Twitter works in a very similar way.

In brains, individual neurons are pretty dumb. Intelligence emerges from the whole.
It comes from how neurons are connected. 

On Twitter, we're the neurons—each of us is pretty dumb. Society-level decisions emerge from the whole of people’s interactions.

In brains, each neuron sends to a bunch of other neurons very simple messages called action potentials: they fire up or not.
On Twitter, we send pretty simple messages to other neurons (people): 280-character tweets.

In brains, these messages travel through axons. Other neurons catch them through their dendrites.
On Twitter, your axon is your tweet editor. Your dendrite is your feed.

In brains, if the message is strong enough, the receiving neuron will in turn fire up & propagate the message.
On Twitter, if you want to endorse a message, instead of propagating it by firing up, you retweet it.

In brains, the more a neuron pays attention to another neuron, the more likely it is to propagate its message. Even if it doesn’t pay that much attention to a neuron, if it receives the same message from many other neurons, it will propagate it too.

On Twitter, you're more likely to retweet something from somebody you pay attention to, or if many have retweeted it already.

In a brain, the more a certain pathway between neurons fires up, the stronger it will become. The axons and dendrites strengthen. Eventually, some groups of neurons start firing up very close to each other, following each other's behavior. They form a brain subsystem.

On Twitter, the more you like somebody's tweets, the more likely you are to follow them. You'll tend to form groups with the people who follow the same people and topics and hold the same opinions. You'll retweet each other. You form a Twitter subsystem.

In the brain, decisions emerge from the interaction of these low-level subsystems into bigger and bigger systems.

On Twitter, the most viral tweets become trending topics. These trending topics might become themes, and these can become movements, the way it happened with Black Lives Matter. So low-level subsystems can become bigger and bigger until they shape society's thoughts—and decisions.
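As a toy illustration of the parallel, here’s a hypothetical cascade simulation in Python (the follower graph, seed group, and threshold are all invented): each account “fires”, that is, shares the message, once enough of the accounts it follows have shared it.

```python
import random

random.seed(0)

N = 500                  # toy population of accounts
FOLLOWS = 10             # each account follows 10 random others
THRESHOLD = 2            # share once 2 accounts you follow have shared

# Random "following" graph; all numbers are invented for illustration.
following = {i: random.sample(range(N), FOLLOWS) for i in range(N)}

shared = set(range(25))  # a small seed group posts the message first
spreading = True
while spreading:
    spreading = False
    for account, followees in following.items():
        if account not in shared:
            exposures = sum(1 for f in followees if f in shared)
            if exposures >= THRESHOLD:  # the same rule as a neuron's threshold
                shared.add(account)
                spreading = True

print(f"{len(shared)} of {N} accounts ended up sharing the message")
```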

 

Many social networks work similarly, but Twitter is the closest to a brain because it has:

• 1-way following (vs befriending), like neurons

• Very short messages, like action potentials

• Text-first network = ideas-first network

• Owns the attention of decision-makers

In other words, decisions about what to think used to be made behind closed doors, in the offices of the big media outlets and through their ties with governments. Not anymore. Now they have a competitor in social media, a decentralized information technology that standardizes communication between dumb nodes[3]. I believe this is how Elon Musk thinks about Twitter, and why he’s interested in it.

Note that this would have been impossible even two decades before Twitter was founded. A lot of communication technologies had to be created for Twitter to work at all.

Neural Networks

For decades, software decisions were “centralized”: a human brain—or a handful of them—thinks through how to solve a problem and dictates very precisely how it will be solved.

Then came neural networks.

Nodes in these networks (called neurons) are connected in layers, and the signals from the neurons in each layer flow to the neurons in the next (in some architectures, signals also flow backward). Eventually, at the end of the neural network, you have an answer.

Machine Learning (ML) engineers don’t set the weight of each connection by hand. They just design the network and feed it data; training figures out the weights, and with them, the answer.
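Here’s a minimal sketch of that idea in Python (the architecture, learning rate, and iteration count are arbitrary choices for illustration): a tiny network learns XOR without anyone ever setting a weight by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))  # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(20_000):
    # Forward pass: signals flow from each layer to the next.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: the error flows back and nudges every weight.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0, keepdims=True)

print(output.round(2).ravel())  # typically ~[0, 1, 1, 0]: nobody set the weights
```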

Now, some of these AIs can learn to play and win hundreds of games without being explicitly programmed for any of them.


ML engineers are still figuring out how to make them work optimally, but they already work so well that many people believe AI will surpass humans in intelligence at some point this century.

So neural networks are an information technology that replaces an intelligent, centralized decision-maker with a vastly superior decentralized decision-making process.

Note that this was impossible a few years ago: engineers tried, but they simply didn’t have enough processing power and data to make these models work. New technology enabled this new decentralized decision-making system.

 

Wikipedia

Picture the workers at the Encyclopedia Britannica: a bunch of BAs and MScs and PhDs in their offices, analyzing data, discussing with colleagues, consulting reference books, probably calling some experts. After that serious work, they wrote articles and published them, and that’s how we got our encyclopedias in the 20th century.

Then came Wikipedia.


 

At the beginning, people laughed: The average Joe is clueless! Your articles will be full of errors!

They completely missed what was coming at them.

Wikipedia turned out to be vastly superior to traditional encyclopedias:

  • It gathered millions of articles instead of the typical thousands in an encyclopedia.
  • They were updated at lightning speed.
  • And, surprisingly, they were more accurate than those of encyclopedias!

Analysts predicted the first two points, though they belittled them. What they completely missed was the third one. How could a bunch of randos from the Internet end up more accurate than a group of highly educated professionals?

Yes, on average, each of these workers was highly educated. But none of them was a global expert on anything. None of them was the most knowledgeable polymath in the world. And they were few, limited by their 40 hours a week: with 1,000 workers, that’s just 40,000 hours a week.

Compare that with the billions of people on Earth. Some might only dedicate five minutes every couple of months. Others will dedicate their entire lives to their passion for making Wikipedia better. But what they all have in common is that every human is a world-class expert in something. Put them all together, and you get the most cutting-edge knowledge in the world under one roof.

The result is that there are more articles, updated more quickly, and with a higher average quality.

It’s a bit like stock portfolios: if you only hold a few stocks, you get their average return with somewhat lower risk than any single stock on its own. But if you hold all the stocks in the world, you get the best risk-adjusted reward [4].
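A quick simulated illustration of that diversification effect (the returns are invented and independent, which is a simplification; see the footnote):

```python
import numpy as np

rng = np.random.default_rng(0)

n_stocks, n_periods = 1000, 10_000
# Each stock: same expected return (1%), same volatility (5%), independent.
returns = rng.normal(loc=0.01, scale=0.05, size=(n_periods, n_stocks))

for size in (1, 10, 1000):
    portfolio = returns[:, :size].mean(axis=1)   # equal-weight portfolio
    print(f"{size:>4} stocks: mean {portfolio.mean():.4f}, "
          f"volatility {portfolio.std():.4f}")
# The mean return stays ~1% while volatility shrinks roughly as 1/sqrt(n):
# the same average reward, far less risk.
```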

If you think about it, the world’s citizens possess much more knowledge than a small group of individuals. What was lacking before Wikipedia was a mechanism to harness that knowledge. That was the technological breakthrough of Wikipedia: how the editors work together to find a consensus on what the articles say [5].

So Wikipedia is an information technology that decentralizes decision-making for encyclopedia articles. Its core innovation is a mechanism that allows some of the most informed people on any topic to contribute to the articles. The outcome is much better than if these decisions were centralized.

Note that this was impossible a decade earlier. New technologies made it possible.

 

Open Source

Everything we just said about Wikipedia, we can say about open-source software.

From Slime Mold Time Mold’s Every Bug is Shallow if One of Your Readers is an Entomologist:

Back in the day, people “knew” that the way to write good software was to assemble an elite team of expert coders and plan things out carefully from the very beginning. But instead of doing that, Linus just started working, put his code out on the internet, and took part-time help from whoever decided to drop by. Everyone was very surprised when this approach ended up putting out a solid operating system. The success has pretty much continued without stopping — Android is based on Linux, and over 90% of servers today run a Linux OS.

Before Linux, most people thought software had to be meticulously designed and implemented by a team of specialists, who could make sure all the parts came together properly, like a cathedral. But Linus showed that software could be created by inviting everyone to show up at roughly the same time and place and just letting them do their own thing, like an open-air market, a bazaar.

Linus Torvalds

Open-source software has mechanisms to split off small pieces of the code for other developers to build on, and then mechanisms to merge that code back into the rest. Put another way, open-source software development is an information technology that is more efficient than centralized development. It achieves that by finding good mechanisms to decentralize the development.

Note that this was impossible a decade earlier. New technologies made it possible.

Peer Reviews

We can go one level higher: when we say Wikipedia, or the open-source movement, we can really just say peer review. They’re the same thing.

Peer reviews in science work reasonably well: get a handful of scientists in your field to review your work, and if they think it makes sense, it becomes so much stronger.

But it goes beyond academia. Wikipedia is peer review. Open source is peer review. When you send a draft to a handful of friends to get feedback, that’s peer review. When you publish something to thousands of people to farm for dissent, that’s peer review.
 



The more people see something, the more likely it is that one of them is well placed to identify a mistake, or to come up with a solution. If we slightly simplify Slime Mold Time Mold’s version of Linus’s Law [6]:

Given enough eyeballs, all problems are obvious.

I see it with my articles and on Twitter. I improve much faster by publishing something that is 90% there than by waiting to get it to 99%. Going from 90% to 99% is extremely expensive, whereas going out at 90% guarantees that a reader will correct you if you’re wrong.
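The arithmetic behind this is simple. Assuming, hypothetically, that each reader independently has a 1% chance of catching a given mistake:

```python
# A toy version of Linus's Law: if each reader independently has a small
# chance p of catching a given mistake, the odds that at least one of n
# readers catches it grow fast (p here is an arbitrary assumption).
p = 0.01  # each reader has a 1% chance of spotting the error

for n in (10, 100, 1000, 10_000):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n:>6} readers -> {at_least_one:.1%} chance someone catches it")
# With 1,000 readers, the mistake is almost certainly caught (~99.996%).
```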

Mechanisms for peer review vary, but in general they all achieve a better result than if a centralized group of scientists came up with theories and reviewed them themselves. Peer review is an information technology that makes understanding the world more efficient through decentralization mechanisms.

Note that peer review of the kind that corrects my articles was nearly impossible just a few years ago: without social media or Substack, it would have been very hard to gather an audience to check my thinking.

 

Prediction Markets

You can’t beat the stock market.

Only 8% of expert-run funds beat the market [7].

From Merchants of Risk:

In 1906, the statistician Francis Galton was at a county fair when he stumbled upon a competition: who could guess the weight of an ox? After the competition, he was allowed to look at all 800 submissions, and was shocked to notice that their average was just one pound away from the true weight of the ox—1,197 lbs instead of the true 1,198 lbs—and a better guess than most submissions individually. The Wisdom of Crowds has been famous ever since.

Why did that work? Every guesser came at the problem with different knowledge and biases. Some might have remembered the weight of that ox from the previous year, others might have been able to see the ox from all angles, others could infer its weight from its footprints in the mud, others might have been butchers used to guessing weights, others might have seen that the ox was slightly bigger than another one that weighed 1,100 lbs…

Separately, each of them holds only a bit of information, but together they hold a lot more. Meanwhile, their biases point in all directions, so they cancel out when the answers are aggregated. The result is an accumulation of all the good information and a cancellation of all the bad: an accurate guess.
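You can recreate Galton’s result in a few lines of Python (all the noise and bias figures are invented for illustration):

```python
import numpy as np

# A toy recreation of Galton's ox experiment: each guesser sees the
# true weight through their own noise and bias.
rng = np.random.default_rng(0)

true_weight = 1198  # pounds
n_guessers = 800

# Each person's bias points in a different direction, so biases
# roughly cancel in the aggregate.
biases = rng.normal(0, 40, n_guessers)  # per-person systematic error
noise = rng.normal(0, 60, n_guessers)   # per-person random error
guesses = true_weight + biases + noise

errors = np.abs(guesses - true_weight)
print(f"average individual error: {errors.mean():.0f} lbs")
print(f"error of the average guess: {abs(guesses.mean() - true_weight):.1f} lbs")
# Typical result: individuals are off by ~50-60 lbs, the crowd by only a few.
```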


Stock markets work the same way: most participants don’t know the true value of a company, but in most cases, the bets of millions of people result in a very accurate price, incorporating new information at the speed of light. 

You can use the same principles for yourself with something called “prediction markets”, which have repeatedly been shown to work well. Thankfully, you don’t need to set up a full prediction market to benefit from this: the more people you ask to predict something, the more accurate the prediction will be.

Metaculus is a good example of a prediction market outside the stock market. Want to know how bad monkeypox is likely to be?

The graph on the left shows the evolution of the predictions over time: the combined predictions converge on ~12k deaths. The graph on the right shows the distribution of the bets: the peak falls between 5k and 20k deaths, but a sizable number of people predict up to 2M. Also shown are how many people follow the prediction, the number of predictions, and when the question will be resolved.

On the left, you can see that the community expects the toll to stabilize around 12k, so it doesn’t think this is too dangerous. But note on the right that a sizable number of people forecast between 100k and 2M deaths, so maybe it’s worth having a look?

More interested in demographics?

In one snapshot, you get a ton of information that would be hard to extract by looking at the raw data alone. This is what prediction markets are, then: an information technology that decentralizes the analysis and prediction of what will happen in the future, and that can thus decentralize decision-making in a way that is probably more successful than if a centralized group made these decisions.

The Democracy of the Future

Democracy appears decentralized because people can vote. It’s not.

How are laws written?

They’re frequently written by a bunch of highly educated people in their offices: analyzing data, discussing with colleagues, consulting reference sources, calling some experts. After that serious work, they write articles, publish them, and that’s how we still get our laws in the 21st century.

In other words: the legislative process is like Encyclopedia Britannica before Wikipedia appeared. Citizens have much more time and knowledge than all the lawmakers in the world combined. What we’re missing today is a mechanism to harness the time and knowledge of broader society to make laws. We’re missing Wikipedia’s mechanism innovation, applied to lawmaking.

  1. In the military, that mechanism is mission command: tell your subordinates very clearly what the mission is, and give them information and autonomy to make the decisions themselves.
  2. In the economy, that mechanism is capitalism: people benefit when they benefit others. If you leave them free to figure out what others want—and to keep most of that value—they will do it.
  3. In ant colonies, that mechanism is the chemical communication between ants. That way, each ant can influence the behavior of another ant in a way that benefits the colony.
  4. In the brain, that mechanism is the neural network: the action potentials that flow through the system of axons and dendrites connecting neurons.
  5. On Twitter, that mechanism is tweets and retweets.
  6. In artificial intelligence, that mechanism is neural networks and all the innovations around them: gradient descent, backpropagation, etc.
  7. In Wikipedia, that mechanism is edit rights: anybody can edit an article, but when there’s conflict, more and more senior members debate and vote.
  8. In open source, that mechanism is the process of branching out and merging branches back into the source code, which allows hundreds (or thousands) of developers to work on the same code base in parallel.
  9. In peer reviews, that mechanism might be as simple as asking for feedback.
  10. In forecasting, that mechanism is prediction markets: ask people to bet on how a question will be resolved.


With all these examples, it’s obvious that the future of democracy won’t be a better voting system. It will be a system that allows everybody to participate: anybody will be able to contribute their analysis of problems or suggest a policy to solve them, and the laws that rule the community will be approved directly by the community. It’s a matter of time. Somebody will come up with that mechanism in the not-too-distant future, lawmaking will become fast, prolific, and intelligent, and we won’t all be angry all the time at the terrible politicians we’ve had to elect.

What will this mechanism look like? There’s been a recent explosion in the exploration of these systems, mostly through DAOs—Decentralized Autonomous Organizations. We will explore what these mechanisms might look like in democracy in the coming articles.
 

  1. ^

    Germany unified during the siege of Paris! So Prussia started the siege, but Germany conquered Paris.

  2. ^

    Unlike Russia’s, which is command and control, and look where that’s getting them.

  3. ^

    The result is not perfect, but we perceive it as much worse than it actually is because of all the strife. Before, we didn’t fight as much because we all repeated the Gospel we were told by the broadcasting media. Now, society “thinks” in a more distributed way, and that allows it to think much faster, and about many more topics in parallel. It’s not always right—but we’ll get to that in the next article.

  4. ^

It’s not the same, because in one case you have random risk and in the other you have knowledge heterogeneity. And in the case of Wikipedia, you can also get a better-than-average article, because you don’t get the average expertise of every person (unlike the average return of every stock), but rather only the best.

  5. ^

    I don’t think that mechanism is optimal: quantity of contribution and seniority matter more than insight, some rules are very rigid, some obscure articles get debates between clueless editors... but it’s a reasonable one, as results can attest.

  6. ^

Slight modification of Slime Mold Time Mold’s version of Linus’s Law: “Given enough eyeballs, all bugs are shallow.”

  7. ^

Large-cap funds are the relevant comparison because this is measured against the S&P 500. Small caps likely had better performance, but that’s because they’re much riskier, not because they’re better managed.


3 comments

This is interesting, but I really do want to see what ideas you have for mechanisms.

The problem with law—that doesn't apply to, say, Wikipedia pages—is that if you create a bad one you can do a lot of damage to a lot of people. So our mechanisms for making law are deliberately inefficient. They are the opposite of permissionless innovation.

If we want to enable anyone to make law, and have it be really fast and efficient and low-friction, it can't be the kind of law that constrains the freedom of an entire population. It has to be something else.

You have a good point that, historically, speed has had some correlation with legislation quality. But that's just a failure of the mechanisms.

It's like saying that communism works better than capitalism because if you create a bad economy you can damage a lot of people, so our mechanisms for organizing the economy should be deliberately inefficient. Capitalism achieved an economy that is really fast and efficient and low-friction.

But I agree this is a moot debate until the mechanisms are discussed. I'll do that in the future.

I'm not even sure that I would say that speed has a correlation with quality in legislation. It's more that adding process, and especially requiring review and broad agreement, helps avoid some of the worst outcomes.

The analogy to an economy doesn't hold: if someone creates a bad business, you can choose not to patronize it; if someone creates a bad law, you can't choose not to follow it.

… Unless you are in a choice-of-law regime, e.g., the way a new business can choose what state to incorporate in, and is governed by the corporate law of that state; or the way a merchant ship can choose what flag to fly under.

Maybe you are going to propose that kind of system? Looking forward to future posts that get into the mechanisms!