I.

The idea of creating a future that is favorable for our descendants is in vogue today.

But before even thinking about creating an ideal future, we must address a more fundamental problem about the limits of what is knowable.

And that is the impact of the growth of knowledge. 

All knowledge is conjectural: our theories and their predictions are error-prone, without exception. That doesn’t mean they tell us nothing about the world. But they are never infallible.

Scientific knowledge is predictive knowledge. This defining characteristic of science allows mistaken theories to be corrected upon making false predictions. This, in turn, allows for an improvement in our ability to predict. 

But the future ideas and actions of people are physically impossible to predict, since both depend on the growth of human knowledge.

Since the future depends on the content of knowledge yet to be created, content that cannot possibly be known today, we can never know what future people will want. For if there were a method to predict some piece of knowledge that will only be discovered next year, then by using that method we would gain that knowledge today, which is a contradiction.
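The argument has the shape of a proof by contradiction, and it can be sketched in notation. (The symbols M, K, and t below are mine, introduced purely for illustration.)

```latex
% Assume, for contradiction, that a method M exists which, when run at
% time t, yields a piece of knowledge K that is first created at t + 1.
\begin{align*}
&\text{Assume: running } M \text{ at } t \text{ yields } K,
  \text{ where } K \text{ is first created at } t + 1.\\
&\text{Then } K \text{ is already known at } t.\\
&\text{So } K \text{ is not first created at } t + 1,
  \text{ contradicting the assumption.}\\
&\text{Hence no such method } M \text{ can exist.}
\end{align*}
```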

II.

We ought to separate a prediction from a prophecy here. A prediction is a logical consequence of a scientific theory in a domain where we can explain why human choices will have no impact. But if one tries to guess at outcomes in cases where knowledge creation will have an impact, one is attempting prophecy.

A prophecy does not become a prediction just because an “expert” makes it using “science”. For example, we know from our best scientific theories that the Sun will continue to shine for another 5 billion years or so, after which it will have exhausted its fuel and will swell into a red giant. That would be doom for any life on the Earth, as the Sun would engulf and destroy the inner planets. Or would it? If any of our descendants decide to stay on the Earth at that time, they might do everything in their power to prevent it. Of course, today’s technology is nowhere near capable of such a feat, nor is it inevitable that our descendants will rise to the challenge.

“The color of the Sun ten billion years hence depends on gravity and radiation pressure, on convection and nucleosynthesis. It does not depend at all on the geology of Venus, the chemistry of Jupiter, or the pattern of craters on the Moon. But it does depend on what happens to intelligent life on the planet Earth. It depends on politics and economics and the outcomes of wars. It depends on what people do: what decisions they make, what problems they solve, what values they adopt, and on how they behave towards their children.”

— David Deutsch, The Fabric of Reality, Chapter 8: The Significance of Life

Given that the future of humanity is unknowable, what ought we to do in order to create a favorable future for our descendants (if anything at all)?

III.

Our inability to predict the growth of knowledge is the only impediment to our ability to predict the future. And hence, we do not know what future people will want.

This raises serious problems for moral philosophies of altruism, such as the “longtermism” expressed by William MacAskill in his popular book, What We Owe the Future.

Altruists hold that morality is essentially about serving the interests of others. MacAskill and the longtermists add future people to this calculation, arguing that lives that do not yet exist also matter.

Such a morality necessarily implies that people who exist today must sacrifice for the people of tomorrow. For example, longtermists argue that we have to ensure that the climate is maintained, for fear that future people will otherwise live in a world that is worse off. This argument amounts to calling for restrictions on which kinds of knowledge people are allowed to create and act upon in the present.

There are many problems with such a tragic view of morality. Firstly, longtermism does not fully take progress in moral understanding into account. It assumes that the values of future generations will be the same as those of the present. But people are fallible: their moral knowledge is laden with errors just as their scientific knowledge is. We ought to hope that the morality of our descendants is utterly alien to our own, because it may well be better.

If longtermists existed back when blacks were widely regarded as morally inferior to whites, would the moral calculus of the longtermists have included the prosperity of future blacks or not? It seems like it couldn't possibly have included that. More generally, longtermism can't take into account progress in moral knowledge, nor what future generations will choose to value. Longtermists impose their values onto future generations. They are time imperialists.

Another issue is that if altruistic morality is taken to its logical conclusion, then everyone would be trying to solve everyone else's problems. How could that possibly be more effective than everyone trying to solve their own problems?

The notion that sacrifice is good is pervasive. But caring for others rather than for yourself creates more problems than it solves. Morality is the question “What should be done next?”, not “Who should be helped next?”

If we are here to help others, what on Earth are the others here for?

IV.

What we actually need to be is selfish, not altruistic. We need to make as rapid progress as possible so that the people of the future themselves will be at a starting point where they can make even more rapid progress. 

The first sentence of the previous paragraph can really put some people off. When a philosophy is laden with moralistic language, it gets hard to error-correct. This is because those who espouse such ideas tend to presume they’re morally superior to those who disagree with them. “Oh, you don’t want to slow down progress? That means you don’t care about the lives of future people.”

Usually, people who hold such a view of morality are essentially religious fanatics who don’t use the term “religion”. They’ve created a modern religion in which man is cast as the devil, and there is no God and no savior. They’ve obviously removed the traditional trappings. But they still hold on to the same underlying core of moralizing beliefs.

They forget that moralizing itself is no argument.

V. 

Selfishness is not callousness, and altruism is neither kindness nor generosity. Altruism is subordinating one’s own preferences to those of others. It’s a zero-sum game. It’s not win-win. Selfishness is being concerned about myself precisely because I am a good person. In being concerned about myself, my welfare, my wealth, and my happiness, I become the kind of person who can help others, not at a cost to myself, but by being involved in win-win relationships. That’s the key. With selfishness, I want to win, but not so that you lose; that would be callousness. I am selfish so that someone else can win, too.

People who focus on themselves and their own problems make faster progress than those who aspire to “do what’s right” despite internal resistance to doing so. Those who choose careers in order to have a positive impact on the world, even when a part of them desperately wishes they were doing otherwise, will struggle to make progress. And ironically, such choices cause more suffering in the present (namely, that of the altruist).

We need to solve problems that genuinely interest us in order to make progress as fast as possible. It’s the best thing for everyone—including those yet to exist in this world.

Taking wealth away from where progress is happening fastest and gifting it to where it’s not is going to hurt more people than it ever helps.

Comments

Most of the EA longtermist arguments are about future people existing at all. If there's an extinction event, there will be no future people with complex values. 

“If longtermists existed back when blacks were widely regarded as morally inferior to whites, would the moral calculus of the longtermists have included the prosperity of future blacks or not? It seems like it couldn't possibly have included that. More generally, longtermism can't take into account progress in moral knowledge, nor what future generations will choose to value. Longtermists impose their values onto future generations.”

It is true that we can't predict future moral knowledge. However:

  1. An intervention by someone from that time period that helps modern whites and doesn't harm modern blacks would still be seen as better than doing nothing from the point of view of most people (excluding the woke fringe). Most random interventions selected to help future white people are unlikely to cause significant net harm to blacks.
  2. If their intervention is ensuring that we are wealthy and knowledgeable, and hence more able to do whatever it is we value, then that intervention would take into account progress in moral knowledge.
  3. In reality, you have to choose to do something. When making decisions that affect future generations, either you impose your current values, or you try to give them as much flexible power as possible so they can act on their own moral knowledge, or you basically pretend they don't exist.

This is an interesting new combination of standard mistakes.

“Another issue is that if altruistic morality is taken to its logical conclusion, then everyone would be trying to solve everyone else's problems. How could that possibly be more effective than everyone trying to solve their own problems?”

Altruistic morality in the total utilitarian sense would recognize that solving everyone's problems is equally valuable, including our own. In the current world, practically no humans are going to put themselves lower than everyone else, and most of the best opportunities for altruism are in helping others. But in the hypothetical utopia, people would solve their own problems, there being no more pressing problems to solve.

“If we are here to help others, what on Earth are the others here for?”

Well, imagine the ideal end goal, if we develop some magic tech: everyone living in some sort of utopia. At that point, most of the altruists would say that there is no one in the world who really needs helping, and would just enjoy the utopia. But until then, they help.

“What we actually need to be is selfish, not altruistic. We need to make as rapid progress as possible so that the people of the future themselves will be at a starting point where they can make even more rapid progress.”

An altruist argument for selfishness. You are arguing that selfishness is good because it benefits future people.

If you were actually selfish, you would be arguing that selfishness is good because it makes you happy, and screw those future people, who cares about them.

I also don't know where you got the idea that selfishness = maximum progress.

Suppose I am a genius fusion researcher. (I'm not.) I build fusion reactors so future people will have abundant clean energy. If I were selfish, I would play video games all day.

“Altruism is subordinating one's own preferences to those of others. It’s a zero-sum game. It's not win-win.”

In the ideal utilitarian hypothetical utopia, who exactly is losing? If, hypothetically, everyone had the exact same goal, the well-being of humanity as a whole, valuing their own well-being at exactly the same level as everyone else's, that would be a zero-difference game, the exact opposite of a zero-sum game.
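To make that distinction concrete, here is a pair of 2×2 payoff matrices; the specific numbers are arbitrary, chosen purely for illustration.

```latex
% Each cell is (row player's payoff, column player's payoff).
% Zero-sum game: the payoffs in every cell sum to zero, so one
% player's gain is exactly the other's loss.
\begin{tabular}{c|cc}
    & L       & R       \\ \hline
  T & (3, -3) & (-1, 1) \\
  B & (-2, 2) & (0, 0)  \\
\end{tabular}

% Zero-difference (common-interest) game: the payoffs in every cell
% are identical, so whatever helps one player helps the other equally.
\begin{tabular}{c|cc}
    & L      & R      \\ \hline
  T & (3, 3) & (1, 1) \\
  B & (2, 2) & (0, 0) \\
\end{tabular}
```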