How can we measure the extent to which academics change their research (either whole projects, or specifics within a project) in order to better appeal to grantmakers, tenure-track hiring committees, journal reviewers, and so on?
Here I'm referencing the Keynesian Beauty Contest mostly as an analogy. Researchers aren't incentivized to do the work they think is best -- the incentive is subtly different: it's to do what the people who will judge your research in the future will think is best.
My assumption in asking the question is that this difference in incentives biases what science gets done.
I think this is implicitly measured against the alternative: the research direction they would pick if they had free rein to choose. (This motivation presumably differs between researchers, but I'm assuming it includes at least some component of impact, long-term value, or altruism.)
It seems like it would be good to control for, e.g., access to resources (so that neither option unlocks vastly more funding), though maybe resource access is itself an important part of the effect.
In Part 1, I laid out the simmer scenario as an alternative to explosion, stagnation, and collapse, in which growth in the coming millennia slows down but remains significant.
In this post, I want to suggest that if our intuitions contradict the plausibility of large long-run growth rates, that’s so much the worse for our intuitions.
A large compound economic growth rate means that pretty soon, we'd have to support multiple current-world-economies per atom in the galaxy. Holden explains why he thinks this is implausible:
What would it mean, though, to value a single experience 10^71 times as much as today’s entire world economy? One way of thinking about it might be:
“A 1 in 10^71 chance of this thing being experienced would be as valuable as all
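To make the scale of these numbers concrete, here is a minimal back-of-the-envelope sketch (my own, not Holden's). It assumes continued 2% annual growth and takes the commonly cited estimate of roughly 10^70 atoms in our galaxy, so 10^71× today's economy corresponds to about ten world-economies per atom:

```python
import math

def years_to_multiple(multiple: float, annual_growth: float = 0.02) -> float:
    """Years of compound growth at `annual_growth` needed to multiply
    the economy by `multiple` (solve (1 + g)^t = multiple for t)."""
    return math.log(multiple) / math.log(1 + annual_growth)

# How long until the economy is 10^71 times today's, assuming 2% growth?
years = years_to_multiple(1e71)
print(round(years))  # roughly 8,000-8,300 years
```

On those assumptions, "pretty soon" means on the order of eight thousand years: a long time by human standards, but a blink on cosmic timescales.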
Deeply inspired by Roots Of Progress, I have written a series of essays that discuss science and technology progress from an Eastern perspective. They mix history, technology, and philosophy into storytelling.
Would love your feedback on this one. Hopefully, it gives me the confidence to share the others :)
During the winter of 1931, the Polish engineer Maurice Frydman (Maurycy Frydman-Mor) was in Paris in search of work during the Great Depression. He was at the Gare de Lyon railway station when he saw a large crowd gathering. A train pulled in, and a short, half-naked man “all luminous and shiny as burnished gold” stepped off the train. The police were trying to control...
The world can't just keep growing at this rate indefinitely. We should be ready for other possibilities: stagnation (growth slows or ends), explosion (growth accelerates even more, before hitting its limits), and collapse (some disaster levels the economy).
There’s a fourth possibility worth taking seriously: the simmer scenario, where growth slows down, but stays consistent and significant for the foreseeable future.
Here, growth slows to 0.2-0.5% a year (from today's roughly 2% annual growth rate) for as long as we are confined to the Milky Way (about 25,000 years). After that, we grow as fast as we can expand to other galaxies.
In this post, I...
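As a rough sanity check (my own sketch, using the post's stated numbers, not the author's calculation), the cumulative expansion that 0.2-0.5% annual growth implies over the assumed 25,000 years in the Milky Way can be computed directly:

```python
def cumulative_growth(rate: float, years: int) -> float:
    """Total growth factor after `years` of compounding at annual `rate`."""
    return (1 + rate) ** years

# Cumulative expansion over 25,000 years at the simmer scenario's bounds:
low = cumulative_growth(0.002, 25_000)   # about 5e21, i.e. ~10^22
high = cumulative_growth(0.005, 25_000)  # about 10^54
```

So even the "slow" simmer scenario implies an economy some 10^22 to 10^54 times larger than today's by the time intergalactic expansion begins, which is why the plausibility of very large long-run growth is worth examining directly.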
Zvi recently posted a (critical) list of core assumptions of effective altruism.
The list is interesting, but I think much of it is somewhere between “a bit off” and “clearly inaccurate”.
In this post I redraft the list, applying a round of suggested edits.
Compared to Zvi's list, mine is somewhat aspirational, but I also think it's a more accurate description of the current reality of effective altruism (as a body of ideas, and as a community).
Important: these are just my takes! I'm not speaking on behalf of current or past employers, key figures in the movement, or anything like that.
This list is not intended to be comprehensive.
I'd love to read your thoughts—including your own suggested edits and additions—in the comments. If you like, make a copy of my Google Doc!
I spend a lot of time in Effective Altruism-related spaces and have been thinking about how some recent trends in the EA world may affect Progress Studies and its development. Over the last few years, the EA community has started bifurcating into two distinct groups: long-termists and near-termists.
The primary difference between the groups rests on a difference in philosophical worldview. Long-termists are often some variation of total utilitarians, who believe that those living today, in 100 years, or in 100,000 years all deserve equal moral weight and concern. That is to say, there is no temporal discount rate on lives; they are all equal, no matter when one is born. (Near-termists generally are not total utilitarians and discount future lives heavily.)
I was thinking about how this may...
Patrick Collison and Tyler Cowen opened their 2019 Atlantic piece that helped jump-start the progress studies movement with the following passage:
In 1861, the American scientist and educator William Barton Rogers published a manifesto calling for a new kind of research institution. Recognizing the “daily increasing proofs of the happy influence of scientific culture on the industry and the civilization of the nations,” and the growing importance of what he called “Industrial Arts,” he proposed a new organization dedicated to practical knowledge. He named it the Massachusetts Institute of Technology.
In my eyes, MIT is entirely deserving of this honor: being used as the authors’ first example of an organization that generated progress. Yet, despite how well-known this article has become and MIT’s prominent placement in it,...