Even a global treaty pausing all new training of frontier AI models would leave a lot of room for existing models to continue to disrupt.

Our existing workflows weren’t designed with AI agents in mind, and there’s abundant low-hanging fruit for optimization. Every time I use Codex or Claude Code, I learn something new, and there are plenty of things I just don’t have time to try.

Beyond code, there are plenty of other domains. Claude Cowork, the 10 folders of markdown files that caused the SaaSpocalypse, Claude Design, and more aren’t new models — they’re just harnesses that make it easier to work with existing ones.

What will be the impact on the economy once even our existing systems are widely adopted?


There are a bunch of good things we get for free by pausing AI progress.

The technology as it exists today would diffuse. The public lags well behind what those of us following AI closely already see.

More election cycles give more opportunities for more informed and reasonable lawmakers to be elected. They also give us room to negotiate and dream up better AI policies.

Nothing about pausing means AI safety research needs to stop. Who knows what novel problems or breakthroughs we might find in the coming years?


Bernie Sanders and Alexandria Ocasio-Cortez’s AI Data Center Moratorium Act is just 1,853 words — well worth reading in full.

Half of the bill is a sizzle reel of quotes from experts and industry leaders. The functional section of the bill is tiny:

  • 154 words define “Artificial Intelligence Data Center”
  • 287 words set the terms of the moratorium: no new AI data centers until AI is safe and built to be mutually beneficial for all Americans
  • 275 words requisition various reports, and give the Secretary of Energy the power to verify that the moratorium is being followed
  • 161 words ban the export of chips to countries not implementing similar measures

It’s a far cry from a global treaty. But it takes action now, buying time and leverage to get a more comprehensive treaty implemented.

The bill cedes our largest advantage to China. Chinese models are only 3–9 months behind the US frontier; slowing down puts us at risk of losing it.

This is costly, but it might also serve as a signal. How can we better demonstrate our commitment to AI safety?

AI labs frequently say that they want AI regulation. I also see this bill as an invitation to the labs: help us figure out how to make AI mutually beneficial for society, and then you can go on building the data centers you want.

My largest quibble is with some of the most concrete details. To me, one of the benefits of a sloppy pause is that it gives us time to contemplate what better legislation might look like. Requiring that AI data center development be done with union labor, while nice, seems far from necessary, and I’d prefer a bill with fewer provisions about what it takes to end the moratorium.


I’m confident that If Anyone Builds It, Everyone Dies. Will we be able to stop, or are the rewards from training better and better AI models too tempting? How long would it take us to stop even if policymakers unanimously agreed that we should?

Perhaps we should slow down as much as we can now.