niplav

niplav's Shortform

—Evgeny Sedukhin - "Symphony of the sixth blast furnace" (1979)

Max Görlitz's Shortform

There's a long-ish exploration of this in Grand Futures ch. 1 (Details box 7), focusing on long-term projects in general. I'm eliding some footnotes and not linking the citations, for writing speed reasons:

Ongoing projects: Numerous examples such as cities (Jericho 9600 BCE and onwards), mathematics and science, land reclamation, irrigation networks, canals, roads, cultivated landscapes, Japanese shrines rebuilt every few decades, etc. The Gunditjmara eel traps at Budj Bim have been maintained and modified for at least 6,700 years [1531]. Waqfs, charitable perpetuities in Islamic countries providing public goods, once owned a significant fraction of land and sometimes lasted centuries [1703]. See section 5.2 for states and 5.3.2 for long-lived organisations. Cultural practices often value continuation, shading over into gaining value by their antiquity. Numerous monuments work like this. La tombe du soldat inconnu in Paris has been guarded, with an eternal flame kept lit, since 1920. The Senbon Torii gates of the Fushimi Inari-taisha shrine in Kyoto have been accumulating since the 8th century. [Check this!]
The Uffington White Horse in the UK has been maintained since late prehistoric times (1740-210 BC).

Future-oriented projects: These projects have value that compounds over time. Examples include animal and crop breeding. Gardening. Forest planting; of special interest is the Tokugawa era reforestation of Japan [2857] and oak planting for future naval needs (e.g. New Forest 1698- and the Visingsö forest 1830-). Seed banks (e.g. Vavilov seed collection 1921-, Millennium Seed Bank Partnership 1996-, Svalbard Global Seed Vault 2008-) and archives (e.g. Arctic World Archive) aim at preserving information across time for future use or reference. The longitudinal documentary series Up (dir. Paul Almond, Michael Apted) follows the lives of 14 children since 1964 with new episodes every 7 years. Another compounding category is longitudinal or open-ended studies such as recording astronomical observations, Nile height measures using the Roda gauge (622-1922), the Central England temperature record (data from 1659-; compiled in the 1950s), Celsius’ mean sea level mark at Lövgrund (1731-) [878], Rothamsted research station experiments and archives (1843-), the Morrow Plots at the University of Illinois at Urbana-Champaign (1876-), the Beal seed burial experiment (1879-), the Queensland pitch drop experiment (1927-), the Harvard Study of Adult Development (1938-), the Framingham Heart Study (1948-), the Mori dark flies experiment (1954-) [1462], the Keeling CO₂ measurements (1958-), the Belyaev Fox Farm domestication experiment (1959-) [2878], the Cape Grim Air Archive (1978-), the E. coli long-term evolution experiment (1988-) [2284, 1011].

Long-term endpoint: These projects may be divided into accidentally long-term, because they take more time than wished for, and deliberately long-term, because the only way of achieving the goal is to continue long enough. Many accidentally long-term projects, like the British Channel Tunnel, the Panama Canal or the Olmos Irrigation project, have been begun, interrupted, resumed and eventually completed (tunnel first proposed in 1802, final project 1986-1990; canal first proposed 1668, final project 1881-1914; irrigation 1924, final project 2006-2011). The 2nd Avenue Subway in New York (proposed 1920, started 1942, second phase expected to open 2027–2029). Cathedral building (e.g. Notre Dame 1163-1345, Milan 1386-1965, Sagrada Familia 1882-). The Thesaurus Linguae Latinae began in 1894 and was expected to take 20 years; the current expectation is completion around 2050. Many other dictionaries are ongoing, like Svenska Akademiens Ordbok (begun 1787, as of 2019 having reached late ’V’). The Deutsches Wörterbuch was completed over 1838-1961 and the Oxford English Dictionary over 1857-1928. The LIGO project began in 1983 and succeeded in 2016, although it had an organisational prehistory going back to at least ∼1970 [685, Table C-8, p. 111]. The ITER fusion project began in 1988 and will complete by 2035-2040. Predator removal in New Zealand (2015-2050).
These exemplify the intermediate kind of projects that are long-term because they are expected to be hard. As for more deliberate long-term endpoints, many time capsules [1496] and artworks have clear endpoints. Framtidsbiblioteket in Oslo is an art project that aims to collect an original work by a popular writer every year from 2014 to 2114, remaining unread and unpublished until 2114 when they will be printed on paper from 1000 Norwegian spruce trees planted in 2014.
100 Years is a film written by John Malkovich and directed by Robert Rodriguez in 2015; it is due to be released on November 18, 2115. The Breakthrough Starshot project aims at launching laser-powered crafts to one or more nearby stars at speeds making them arrive within decades to a century. Benjamin Franklin set up two philanthropic trusts intended to last 200 years (1790-1990); unlike many other charities they survived, although spending was not always according to the formal intentions. Tree Mountain – A Living Time Capsule by Agnes Denes is a land art project started in 1992 and intended to last for 400 years, slowly developing into a primary forest. Play As Slow As Possible by John Cage is being played in the St. Burchardi church in Halberstadt, starting in 2001 and intended to end in 2640. Longplayer by Jem Finer is a 1,000 year composition (1999-2999) being played in London. The Clock of the Long Now aims for 10,000 years of function. Like the other art projects, the value lies less in something achieved at this point than in the demonstration of a time-spanning project.

I would add the (finished) Ig Nobel Prize-worthy knuckle-cracking experiment by Donald Unger.

Wizards and prophets of AI [draft for comment]

Merely pausing doesn't help if we aren't doing anything with that time.

True, I was insufficiently careful with my phrasing.

Wizards and prophets of AI [draft for comment]

Thanks for the great article :-)

I am commenting as someone who has spent a lot of time thinking about AI alignment, and considers themselves convinced that there is a medium probability (~65%) of doom. I hope this is not intrusive on this forum!

I hadn't considered the crux to be epistemic, which is an interesting and important point.

I would be interested in an attempt to quantify how slowly humanity should be moving on this: Is the appropriate level of caution comparable to that for genetic engineering, or for nuclear weapon proliferation? Should we pause until our interpretability techniques are good enough that we can extract algorithms from AlphaFold2?

I am also interested in possible evidence that would convince you of the orthodox ("Bostrom-Yudkowsky") view: what proofs/experiments would one need to observe to become convinced of that model (or similar ones)? I have found the POWER-seeking theorems and the resulting experiments especially enlightening.

Again, thank you for writing the article.

Wizards and prophets of AI [draft for comment]

Is this rationalists anthropomorphizing AI to behave/think like they think, perhaps?

As someone who thinks AI doom is fairly likely (~65%), I reject this as psychologizing.

I think there is an argument for TAI x-risk which takes progress seriously. Transformative AI does not need to be omnipotent or all-knowing: it simply needs to be more capable than whatever humanity can muster against it.

Consider the current United States versus the entire world population of the year 1200: roughly the same number of people. But if you pitted those two actors against each other in a conflict, it is very clear who would win.

So one would either need to believe that current humanity is very near the ceiling of capability, or that we are not able to create more capable beings (which, in narrow domains, has already turned out false, and the range of those domains appears to be expanding).

If some rogue AI were to plot against us, would it actually succeed on the first try? Even genius humans generally don’t succeed on the first try of everything they do. The prophets think that AI can deduce its way to victory—the same way they think they can deduce their way to predicting such outcomes.

I claim this is not so outlandish: the current US would win against the 13th century 1000/1000 times. And here's a fairly fine-grained scenario detailing how that could happen with a single agent trapped in the cloud.

But it need not be that strict a framing. Humanity losing control might look much more prosaic: we integrate AI systems into the economy, which then gradually slips out of our control.

In general, when considering what AI systems will act like, I try to simulate the actions of a plan-evaluator, perhaps an outlandishly powerful one.

Edit: Tried to make this comment less snarky.