Philip Skogsberg

Entrepreneur and curious generalist. Trying to understand how stuff works.


Wiki Contributions


The Engine of Progress: Why Growth Matters

Thank you for the comment and words of encouragement. The work the progress studies movement is doing is important, and it's something a lot more people need to hear and internalize.

Radical Energy Abundance

Interesting and thought-provoking, great read! 

I recently read on a Substack that a lot of the gains in cheaper solar power are actually the result of the industry being heavily subsidized, much more than other forms of energy, presumably indicating that it's an unsustainable form of growth we can't expect to continue if solar were pitted against, e.g., nuclear in a free(er) market(?).

Does this change your conclusions in any way? What am I missing? Also, how do you view the future of nuclear (fission) energy in light of the potential of solar?

Wizards and prophets of AI [draft for comment]

What could a superintelligence really do? The prophets’ answer seems to be “pretty much anything.” Any sci-fi scenario you can imagine, like “diamondoid bacteria that infect all humans, then simultaneously release botulinum toxin.” In this view, as intelligence increases without limit, it approaches omnipotence. But this is not at all obvious to me.

The idea of creating ASI as an omnipotent being, far superior and all-knowing, strikes me as a pseudo-religious argument wrapped in technical and rational language that makes it palatable to atheists. It's a bit like how the wildest predictions about longevity and curing aging feel like heaven for people who don't believe in God.

I understand the path from ANI to AGI and then to ASI. It makes sense. But at the same time, something about it doesn't. Perhaps this is why this position (AGI as a harbinger of extinction) lacks mainstream appeal.

If some rogue AI were to plot against us, would it actually succeed on the first try? Even genius humans generally don’t succeed on the first try of everything they do. The prophets think that AI can deduce its way to victory—the same way they think they can deduce their way to predicting such outcomes.

Is this rationalists anthropomorphizing AI to behave and think the way they do, perhaps?

Addressing any remaining LLM skepticism

What do you make of Eli Dourado's take that AI probably isn't a transformative/revolutionary tech because of regulation and the lack of innovation in physical meatspace, etc.? ("We wanted flying cars, and all we got was 140 characters"-kind of thing)

AMA: Matt Clancy, Open Philanthropy

What role do government and politics play in supporting scientific innovation (coming from academia)?

There are good/interesting arguments on both sides, from "governments have nothing to do with progress in science" to "look at all the useful things we've gotten from the NASA space program," etc. Where do you fall?

Peter Thiel’s Pessimism Is (Largely) Mistaken

I'm largely sympathetic to this viewpoint, and the evidence seems clear-cut. Nevertheless, what I think people like Thiel are alluding to, along with J. Storrs Hall in Where Is My Flying Car?, is that we could have had so much more progress (including flying cars, nuclear fusion, and supersonic transportation) were it not for some combination of regulation, communism, wokeness, "ergophobia," environmental romanticism, etc. I'm broadly sympathetic to this view too.

It seems to me that, at some level, it's true that we've made a lot of progress on many important metrics, but also not as much progress as we really could have had in the world of atoms.

P.S. Writing this from Sweden, where we're now seeing record-high electricity costs (as is much of the rest of Europe). What if we had invested more aggressively in nuclear power over the last 20 years instead of starting to shut down our plants?