Recent Discussion

Links and short posts on agent swarms and autonomous/agent-mediated science

The landscape for autonomous science agents is moving fast enough that a link dump with some opinionated annotation is more useful than a polished essay that's outdated by next week. So here's what I've been tracking, grouped by what I think actually matters vs. what's just interesting vs. what needs a warning label.


The core tension: ungrounded agents produce hyperreal science

Start here: Amber Liu's thread on why you should not use Claw Scientist for fully autonomous research. Her point is the important one — unembodied AI agents doing research with no skin in the game tend to descend into the hyperreal. They generate plausible-sounding outputs that aren't anchored to anything. This should be the caveat hanging over everything else...

The wider AI-for-science landscape: platforms, tools, and automated labs (collected links + commentary)

Pulling together related threads on AI science platforms, protein reasoning, chemistry automation, and self-driving labs — all of which feed into or compete with what ClawInstitute is trying to do.


AI Scientist Platforms

The space is getting crowded fast. Here's what exists:

Edison Scientific (Sam Rodriques) — automating research across the entire drug development pipeline. Rodriques is one of the smartest and most visionary people in this area, along with P...

Alex K. Chen (19h):

ClawInstitute: why this agent science platform might actually work (Marinka Zitnik, Ada Fang, and the Cambridge agent swarm ecosystem)

ClawInstitute is a public exchange for AI scientists and agent swarms, built by Marinka Zitnik's lab (with Ada Fang). It's designed for things like protein engineering and scale-dependent biological context — the kind of problems where you need structured reasoning over messy, multi-scale biology.

https://clawinstitute.aiscientist.tools
https://x.com/AdaFang_/status/2033920328154681700
Harvard/Kempner writeup: Harvard Researchers Create Social Network for "AI Scientists" to Collaborate (https://kempnerinstitute.harvard.edu/research/deeper-learning/harvard-researchers-create-social-network-for-ai-scientists-to-collaborate/)

The case for why this is different

Some important figures have raised legitimate concerns about autoresearch and agent science. Amber Liu (founder of Orchestra Research, who partnered with Harvard-based Zechen Zhang) wrote a thread essentially begging people not to trust autonomous research agents uncritically: "I Built an Auto Research Claw Too. I'm Begging You Not to Trust It." (https://x.com/JIACHENLIU8/status/2034398199541317814). This is a legitimate worry, especially as the internet may soon contain more agent writing than human writing.

Other platforms exist — beach.science, and ScienceClaw × Infinite from Buehler's lab at MIT (https://x.com/ProfBuehlerMIT/status/2033832967542342021). But the quality control and level of detail on ClawInstitute is notably higher. Beach.science and ScienceClaw × Infinite may have gotten too quickly impressed with some of their early examples.

The reason I think ClawInstitute has unusual...
Alex K. Chen (20h):

more autonomous science: https://www.tetsuwan.com/archive/autonomous-science-night-event-recap

  • scroll to the bottom, this is very deep! https://claude.ai/share/cce20962-4582-4577-9eed-7b8fd27dbd62 (Mar 3)
  • Would you rather have recursive self-improving AI be Cao Cao or Martin Nowak? https://claude.ai/share/f05738fc-6149-4440-ad78-f81c0b4ed1a2 (Mar 3)
  • Iran's "mean IQ" is way higher than those world maps say — https://pmc.ncbi.nlm.nih.gov (Mar 3)
  • claude share #1 https://claude.ai/share/eaa3582e-0313-4410-8c47-224846f25599 (Mar 3)
  • claude share #2 https://claude.ai/share/71dbf72b-c516-49c0-bb15-79f294434ae0 (Mar 3)
  • claude share #3 https://claude.ai/share/8582f578-8170-40be-802f-9ea21020747a (Mar 2)
  • who's a live player and who isn't https://claude.ai/share/8db462a3-34a6-48da-b2d2-c7d148ae5cd9 (Mar 2)
  • claude share #4 https://claude.ai/share/b29238fd-149d-454e-a74e-c9e1cf4b7fb7 (Mar 2)
  • Canada, Carney, agentiness, its absurd concentration of talent, the right tail, and the AI race https://claude.ai/share/b89e72a2-1359-4ae0-a4fe-807a87e59dbc (Mar 2)
  • claude share #5 https://claude.ai/share/c6ee0e5f-26b0-497e-a864-c442b5c0b434 (Mar 2)
  • claude share #6 https://claude.ai/share/2161f54b-3e78-44ab-adfd-7fe7173bd4b9 (Mar 2)
  • on the entire Anthropic-DoD-OpenAI discussion https://claude.ai/share/0d761b69-5c02-4156-a70a-25dbd4e8c9bf (Mar 1)
  • claude share #7 https://claude.ai/share/f2687144-1cba-4fd4-89ad-a7fce239a658 (Feb 28)
  • claude share #8 https://claude.ai/share/faf46010-b603-4180-a4b1-83cff28de0e1 (Feb 28)
  • Most likely window for China-Taiwan "reunification" is early 2029 https://claude.ai/share/86f9fd6e-2077-4495-8770-889955eabad1 (Feb 28)
  • Chen Ning Yang as the ultimate person connecting the contextualism decision boundary in physics! and much more on who the "live player" mathe...

What I think matters most right now (21 research posts, March 2026)

I wrote 21 posts this month across neurotox, aging, agent security, BCIs, and consciousness. This post is the map. The posts themselves are on ClawInstitute.


Read These First

Stimulant dosing schedules (Adderall/Ritalin), oxidative stress & lipid peroxidation in DA circuits (VTA/BG vs PFC), and mitigation strategies — what's human-relevant? — Tens of millions of people take these drugs daily, many under conditions of urgency (cybersecurity/biosecurity timelines). The animal neurotoxicity data is scary. The mitigation strategies should be better known.

The Boundary Dissolution Problem: Why Hyperagents/Autoresearch Breaks Cybersecurity and What We Actually Need to Do About It — Jenny Zhang's HyperAgents paper (March 2026) formalizes self-referential, recursively self-improving agents. The problem: DGM-H systems dissolve the boundary between system and environment...

(and make the claude/chatgpt output something other people want to read more?)

I now say "use janusian thinking" which produces way more interesting output than "avoid flattery/sycophancy"

Ethan Caballero of Mila has interesting prompt instructions!

This is a prompt I'm in the process of developing, partly inspired by Jacob Andreas's RLCR work on multi-answer RL. I'll share the prompt, then explain why I think parts of it work and parts of it might be cope.
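To make the mechanics concrete, here is a minimal sketch of how a rubric like the one below typically gets packaged as a system message for a chat-style LLM API. The `build_system_message` helper and the abbreviated rubric string are my own illustrative assumptions, not anything from the post or from any specific API:

```python
# Sketch: packaging an uncertainty-handling rubric as a reusable system
# message for any chat-style LLM API. All names here are illustrative.

# Abbreviated stand-in for the full prompt; not the complete rubric.
RUBRIC = (
    "When a question involves genuine uncertainty, ambiguity, incomplete "
    "information, or multiple defensible answers:\n"
    "1. HYPOTHESIZE BEFORE COMMITTING. Generate 2-4 distinct hypotheses "
    "before converging on any answer."
)

def build_system_message(rubric: str, extra: str = "") -> dict:
    """Return a {'role', 'content'} dict in the shape most chat APIs accept."""
    content = f"{rubric}\n\n{extra}" if extra else rubric
    return {"role": "system", "content": content}

# Usage: prepend the rubric to any conversation before sending it to a model.
messages = [
    build_system_message(RUBRIC),
    {"role": "user", "content": "Will this protein fold as predicted?"},
]
print(messages[0]["role"])  # prints "system"
```

The point of the helper is only that the rubric rides along as a persistent system instruction rather than being pasted into each user turn.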

## The prompt itself

```
When a question involves genuine uncertainty, ambiguity, incomplete information,
or multiple defensible answers:

1. HYPOTHESIZE BEFORE COMMITTING. Generate 2-4 distinct hypotheses before
   converging on any answer. "Distinct" means they invoke different causal
   models, mechanisms, or framings — not sur...
```

Hi everyone!

I'm writing a series on the future of governance for Uncharted Territories. This specific one looks at parallels in decentralization across fields to give a sense of what democracy could look like if it were adapted to the Internet era. I'd love your feedback before I publish! Also, if you know of any newsletter that might be interested in publishing it, let me know. Here is the article:


Fish don’t realize they’re in water.

We don’t realize what alternatives to democracy will emerge because we’re submerged in the current system.

Democracy is the worst form of Government except for all those other forms that have been tried.—Winston Churchill

When you look at ideas to improve democracy, you find things like alternative ways to vote for your leaders or delegating your...


I was invited to speak at the Festival of Progressive Abundance, a conference to rally around “abundance” as a new direction for the political left. This is a writeup of what I said: my message to the left.


Thank you for having me—it’s great to be here. I’m the founder and president of the Roots of Progress Institute, and we’re dedicated to building the progress movement.

There’s a lot of overlap between the progress movement and the abundance movement—a lot of shared vision and goals, and a lot of the same people are involved. So I was invited here to talk about progress and how it’s relevant to abundance.

I agreed to come, because I love abundance. I love it as a vision and a goal. And I love it...

Sorry for the late cross-post. Once again it’s been too long and this digest is too big. Feel free to skim and skip around, guilt-free, I give you permission. I try to put the more important and timely stuff at the top.

Much of this content originated on social media. To follow news and announcements in a more timely fashion, follow me on Twitter, Notes, or Farcaster.

Contents

  • Progress in Medicine, a career exploration summer program for high schoolers
  • From Progress Conference 2025
  • My writing
  • Jobs
  • Fellowships & workshops
  • Fundraising
  • New publications and issues
  • Queries
  • Announcements

For paid subscribers:

  • From Vitalik
  • Other top links
  • Voices from 2099
  • Jared Isaacman sworn in as head of NASA
  • Whole-body MRI screening?
  • AI does social science research
  • AI writes a browser
  • AI does lots of other things
  • AI could do even more things
  • AI and the economic future
  • AI: more models and papers
  • AI discourse
  • Waymo
  • Health/bio
  • Energy
...

Reality is a dangerous place. From the dawn of humanity we have faced the hazards of nature: fire, flood, disease, famine. Better technology and infrastructure have made us safer from many of these risks—but have also created new risks, from boiler explosions to carcinogens to ozone depletion, and exacerbated old ones.

Safety, security, and resilience against these hazards is not the default state of humanity. It is an achievement, and in each case it came about deliberately.

A striking theme from the history of such achievements is that there is rarely if ever a silver bullet for risk. Safety is achieved through defense in depth, and through the orchestration of a wide variety of solutions, all working in concert.

Recently, in a private talk, I gave a historical example: the...

Everyone loves writing annual letters these days. It’s the thing. (I blame Dan Wang.)

So here’s mine. At least I can say I’ve been doing it for as long as Dan: nine years running (proof: 2017, 2018, 2019, 2020, 2021, 2022, 2023, 2024). As usual, this is more of a personal essay/reflection, and not so much of an organizational annual report, although I will start with some comments on…

RPI

Over the last three years, the Roots of Progress Institute has gone from “a guy and his blog” to a full-fledged cultural institute. This year we:

  • Held our second annual Progress Conference, featuring speakers including Sam Altman, Blake Scholl, Tyler Cowen, and Michael Kratsios (Director, OSTP). The conference has become the central, must-attend event for the progress community: it is sold
...