Future of Life Institute Newsletter: Senate Rejects Ban on AI Regulation
Plus: The OpenAI Files; creepy new InsideAI video; and more.

Welcome to the Future of Life Institute newsletter! Every month, we bring 44,000+ subscribers the latest news on how emerging technologies are transforming our world.
If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?
Today's newsletter is a nine-minute read. Some of what we cover this month:
⚖️ Update on the proposed moratorium on state AI legislation
🇺🇸 Bipartisan support for guardrails on AI
💼 The most comprehensive collection of documented concerns about OpenAI
📻 Four new FLI Podcast episodes
And more.
If you have any feedback or questions, please feel free to send them to [email protected].
99-1, U.S. Senate Rejects Moratorium on AI Legislation
Huge news. The U.S. Senate has voted overwhelmingly (99-1) to reject the moratorium on state-level AI legislation, recognizing the growing need for common-sense safeguards to protect American jobs, families and lives.
— Future of Life Institute (@FLI_org)
12:09 PM • Jul 1, 2025
In May, the U.S. House of Representatives passed the One Big Beautiful Bill, a sweeping budget reconciliation package. Among its most controversial provisions as it went to the Senate was a 10-year moratorium banning states from passing their own AI laws, effectively giving Big Tech a free pass for the next decade.
As AI development accelerates with practically zero guardrails, sweeping federal preemption on AI would prohibit local protections that keep our families, jobs, and communities safe.
As we covered in the last edition of the FLI newsletter, a strong bipartisan coalition came out against this provision - and on July 1, they succeeded: in a striking show of unity, the Senate voted 99-1 to remove the federal AI preemption provision from the bill.
We’re grateful for the provision’s vocal opponents from both sides of the aisle, who stood up for their constituents’ safety against Big Tech, including:
➡️ A bipartisan coalition of 260 state lawmakers from all 50 states who wrote to Congress:
"The proposed 10-year freeze of state and local regulation of AI and automated decision systems would cut short democratic discussion of AI policy in the states with a sweeping moratorium that threatens to halt a broad array of laws and restrict policymakers from responding to emerging issues."
➡️ A bipartisan coalition of 40 state attorneys general who warned:
"[The moratorium] would directly harm consumers, deprive them of rights currently held in many states, and prevent State AGs from fulfilling their mandate to protect consumers."
➡️ A coalition of 140+ organizations working on children’s online safety, consumer protections, and responsible innovation:
“Champions of child safety in the age of AI must give this issue the attention it deserves. Lives have been irreparably altered – and lost – to the Silicon Valley culture of ‘move fast and break things.’ Now, Congress risks doing the same. A federal moratorium with potentially nationwide impacts cannot be cobbled together under tight deadlines and mounting pressure in a fast-tracked bill.”
"I would think that, just as a matter of federalism, we'd want states to be able to try out different regimes that they think will work for their state... And I think in general, on AI, I do think we need some sensible oversight that will protect people's liberties."
“I am committed to fighting this 10-year ban with every tool at my disposal.”
"No one can predict what AI will be in 1 year, let alone 10. But I can tell you this: I’m pro-humanity. Not pro-transhumanity. And I will be voting NO on any bill that strips states of their right to protect American jobs and families."
➡️ Gov. Sarah Huckabee Sanders on behalf of a majority of GOP governors:
“I stand with a majority of GOP governors against stripping states of the right to protect our people from the worst abuses of AI. The U.S. must win the fight against China – on AI and everything else. But we won't if we sacrifice the health, safety, and prosperity of our people.”
➡️ The Logos & Sofia coalition, a working group of faith leaders:
“This approach risks leaving AI governance to be determined primarily by the interests of large tech actors, while stripping the American people of the power to pursue morally sound policy decisions.”
Updates from FLI
Before the provision was struck from the One Big Beautiful Bill, we released a video ad highlighting the dangers of federal AI preemption:
FLI’s Chief Government Affairs Officer, Jason Van Beek, spoke to the Washington Post and others about the proposed ban on state AI legislation: “If this preemption becomes law, a nail salon in D.C. would have more rules to follow than the AI companies."
Jason was interviewed by the Washington Reporter: “It’s just basically, let’s go full speed ahead with developing this technology, let the companies run wild, and just be completely unprepared for some of these foreseeable aspects of this.”
FLI President Max Tegmark spoke to TIME and others on the moratorium being struck from the bill: “The Senate’s overwhelming rejection of this Big Tech power grab underscores the massive bipartisan opposition to letting AI companies run amok.”
Max also appeared on The Epoch Times’ Epoch TV for an interview about superintelligence and its risks: “The painful truth that's really beginning to sink in, is that we're much closer to figuring out how to build this stuff than we are to figuring out how to control it.”
FLI Director of Policy Mark Brakel spoke at a recent FAR.AI talk in Singapore, analyzing Big Tech’s lobbying strategy in relation to preemption.
On the FLI Podcast, host Gus Docker was joined by:
Quantum computing pioneer Michael Nielsen, on whether our institutions can handle advanced AI.
Writer and researcher Sarah Hastings-Woodhouse, on AI timelines.
Composer and Fairly Trained CEO Ed Newton-Rex, on AI and the creative industries.
Economist Daniel Susskind, on disagreements between AI researchers and economists.
What We’re Reading
The time is now: Former Congressman Chris Stewart and AI Policy Network President of Government Affairs Mark Beall co-authored an op-ed calling for guardrails around superintelligence:
“This is President Trump's opportunity. He can drive the AI economy forward... while leading a parallel effort to prevent catastrophe… Done right, this would be the most consequential diplomatic initiative since the Strategic Arms Reduction Treaty."
The OpenAI Files: A new watchdog report, The OpenAI Files, compiles legal documents, insider accounts, and media reports to document concerns around OpenAI’s “governance practices, leadership integrity, and organizational culture” as it tries to shift toward a profit-driven model.
What we’re watching: InsideAI’s creepy new video asks, “can we really have a relationship or connection with AI?” - with a jailbroken AI companion eerily pleading not to be turned off. Watch it below: