All of Philip Skogsberg's Comments + Replies

The Engine of Progress: Why Growth Matters

Thank you for the comment and words of encouragement. The work the progress studies movement is doing is important, and a lot more people need to hear and internalize it.

Radical Energy Abundance

Interesting and thought-provoking, great read! 

I recently read on a Substack that much of the gain in cheaper solar power actually results from the industry being heavily subsidized, much more than other forms of energy, which would presumably indicate that this is an unsustainable form of growth we can't expect to continue if solar were pitted against, e.g., nuclear in a free(er) market.

Does this change your conclusions in any way? What am I missing? Also, how do you view the future of nuclear (fission) energy in light of the potential of solar?

Casey Handmer (1y): All forms of energy have all kinds of subsidies, and nuclear probably more than any other on a per-kWh-generated basis. Nuclear fission is awesome technology, but will it be able to compete with solar on price? That seems unlikely to me.
Wizards and prophets of AI [draft for comment]

What could a superintelligence really do? The prophets’ answer seems to be “pretty much anything.” Any sci-fi scenario you can imagine, like “diamondoid bacteria that infect all humans, then simultaneously release botulinum toxin.” In this view, as intelligence increases without limit, it approaches omnipotence. But this is not at all obvious to me.


The idea of creating ASI as an omnipotent being, far superior and all-knowing, strikes me as a pseudo-religious argument wrapped in technical and rational language that makes it palatable to atheists. It's a bit...

niplav (2y): As someone who thinks AI doom is fairly likely (~65%), I reject this as psychologizing [https://arbital.com/p/psychologizing/]. I think there is an argument for TAI [https://arxiv.org/pdf/1912.00747v1.pdf] x-risk which takes progress seriously.

The transformative AI does not need to be omnipotent or all-knowing: it simply needs to be more advanced than the capability humanity can muster against it. Consider the United States versus the world population of 1200: roughly the same size. But if you pitted those two actors against each other in a conflict, it is very clear who would win. So one would need to believe either that current humanity is very near the ceiling of capability, or that we are not able to create more capable beings. (Which, in narrow domains, has turned out false, and the range of those domains appears to be expanding [https://www.lesswrong.com/posts/mRwJce3npmzbKfxws/efficientzero-how-it-works].) I claim this is not so outlandish: the current US would win against the 13th century 1000/1000 times. And here's [https://gwern.net/fiction/clippy] a fairly fine-grained scenario detailing how that could happen with a single agent trapped on the cloud.

But it need not be that strict a framing. Humanity losing control might look much [https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic#The_Production_Web__v_1a__management_first_] more [https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like] prosaic [https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story]: we integrate AI systems into the economy, which then over time glides out of our control.

In general, when considering what AI systems will act like, I try to simulate the actions of a plan-evaluator [https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX/p/fopZesxLCGAXqqaPv], perhaps an outlandishly powerful one [https://en.wikipedia.org/wiki/AIXI].

Edit: Tried to make this comment less sn
Addressing any remaining LLM skepticism

What do you make of Eli Dourado's take that AI probably isn't a transformative/revolutionary technology because of regulation and the lack of innovation in physical meatspace, etc.? (A "we wanted flying cars, and all we got was 140 characters" kind of thing.)

AMA: Matt Clancy, Open Philanthropy

What role do government and politics play in supporting scientific innovation (coming from academia)?

There are good/interesting arguments on both sides, from "governments have nothing to do with progress in science" to "look at all the useful things we've gotten from the NASA space program," etc. Where do you fall?

mattclancy (2y): The biggest single thing is that the government simply pays for a very large chunk of science! In 2019, the federal government paid for 41% of US scientific research.[1] That was more than the private sector (33%), the university system (13%), or the philanthropic sector (10%). It's true that if the US government stepped back, these other sectors would probably step forward to shoulder some of the burden, but I doubt they would cover the majority of the shortfall.

More broadly, I think science and innovation are pretty hard to predict. That means we ideally want an innovation ecosystem that explores and tries lots of different approaches, even if some of those approaches don't seem to hold the highest promise at the outset. One way you can get that is to have a private sector that is open to new entrants and startups who want to try different stuff than the incumbents. But those new firms are still ultimately going to be chasing the same signal as everyone else, namely profit, which might limit the extent to which this ecosystem explores the potential space of technological possibility. So I think it's useful to have some innovating organizations that pursue goals entirely untethered from market success. The government is one viable solution to this problem of getting a bit more exploration into the innovation ecosystem. I haven't seen a study that tries to rigorously quantify the value of, e.g., NASA or DARPA in terms of their spillovers to the rest of the economy, but I would be surprised if they didn't come out looking like good investments.

1. From table RD-3 of the NSF science and engineering [https://ncses.nsf.gov/pubs/nsb20225/data#table-block] report, taking basic research as a proxy for science.
Peter Thiel’s Pessimism Is (Largely) Mistaken

I'm largely sympathetic to this viewpoint, and the evidence seems clear-cut. Nevertheless, what I think people like Thiel are alluding to, along with J. Storrs Hall in Where Is My Flying Car?, is that we could have had so much more progress (including flying cars, nuclear fusion, and supersonic transportation) if it weren't for some combination of regulation, communism, wokeness, and "ergophobia" or environmental romanticism. I'm broadly sympathetic to this view too.

Seems to me that at some level, it's true that we've had a lot of progress on a l...