Future of Life Institute Newsletter: One Big Beautiful Bill...banning state AI laws?!

Plus: Updates on the EU AI Act Code of Practice; the Singapore Consensus; open letter from Evangelical leaders; and more.

Welcome to the Future of Life Institute newsletter! Every month, we bring 44,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is a 12-minute read. Some of what we cover this month:

  • ⛪ Evangelical leaders pen letter to President Trump on AI

  • 🇪🇺 Updates on the EU AI Act Code of Practice

  • ⚖️ What is federal preemption, and what does it have to do with AI?

  • 🇸🇬 The Singapore Consensus

And more.

If you have any feedback or questions, please feel free to send them to [email protected].

Christian Leaders Urge Wise AI Leadership

A coalition of prominent Christian leaders has issued an open letter urging President Donald J. Trump to help ensure America will "lead the world in beneficial AI innovation, responsibly".

Signatories include Rev. Johnnie Moore, President of the Congress of Christian Leaders, and Rev. Samuel Rodriguez, President of the National Hispanic Christian Leadership Conference.

"As people of faith," the letter states, "we believe we should rapidly develop powerful AI tools that help cure diseases and solve practical problems, but not autonomous smarter-than-human machines that nobody knows how to control."

The leaders wrote of the great promise, but also the potential peril, presented by rapidly advancing AI, and suggested convening an advisory council of leaders to “pay attention especially not only to what AI CAN do but also what it SHOULD do”.

Echoing concerns expressed by both the late Pope Francis and the new Pope Leo XIV, the letter warns against a future where human work and purpose are undermined by AI.

The full letter and list of signatories can be found here, along with coverage in TIME here.

What’s Happening with the EU AI Act Code of Practice?

The EU is currently drafting its Code of Practice, which will establish concrete guidelines to help providers of the largest AI models comply with the rules set forth in the EU AI Act.

The Code, informed by input from over 1,000 stakeholders (including Yoshua Bengio, ‘Godfather of AI’ and the world’s most-cited AI researcher), is designed to harmonize a patchwork of AI rules across EU member states, helping model providers better understand their obligations.

Like the AI Act, the Code addresses the high-stakes AI risks many experts are concerned about, such as loss of control, bioweapons manufacturing, and cyberattacks. It formalizes existing voluntary commitments many companies have already made, e.g., at the Seoul Summit. Crucially, it targets only big (primarily American) AI companies, not startups or downstream users.

Unfortunately, Big Tech is lobbying to scale back the regulations it will soon face, despite having helped inform the rules laid out in the Act, which the Code merely translates into guidelines. Companies argue the Code will stifle innovation, but as FLI’s Risto Uuk and Estonian entrepreneur Sten Tamkivi point out in Fortune, the idea that much-needed AI safeguards in digital regulation would harm European innovation is a red herring: the real culprits are old bureaucracy and red tape hindering start-ups.

For updates on the Code as it is finalized through political and expert input, stay tuned via Risto Uuk’s EU AI Act newsletter.

A 10-Year Ban on State AI Laws?!

As you may have heard, the U.S. House of Representatives last week passed the ‘One Big Beautiful Bill’, a budget reconciliation bill that is now with the Senate. One particularly controversial inclusion is a 10-year moratorium on states passing their own AI legislation.

A strong bipartisan coalition has come out against this provision, a form of federal preemption. For example, in a recent letter, a group of 40 state attorneys general from both major parties urged Congress to reject the moratorium, warning it “would directly harm consumers, deprive them of rights currently held in many states, and prevent State AGs from fulfilling their mandate to protect consumers”.

Additionally, a new poll by Common Sense Media finds widespread concern about AI’s potential negative effects, especially on youth, and that 73% of voters across party lines want both states and the federal government to regulate AI. The proposed federal ban itself is unpopular: 59% of voters oppose it, and 52% say it makes them less likely to support the budget bill as a whole.

We’ll keep you posted on what happens next!

Apply to our Digital Media Accelerator!

Our Digital Media Accelerator remains open for applications, funding creators looking to produce content, grow channels, and spread the word about complex AI issues (e.g., loss of control to AGI, misaligned goals, and Big Tech power concentration) to new audiences. We’re looking to fund content across platforms, such as YouTube explainers, TikTok series, podcasts, newsletters, and more.

Already have an audience? Want to create compelling content about AI risks?
We’re accepting applications on a rolling basis - apply here and help shift the conversation. Please share widely with anyone you think may be interested!

Updates from FLI

  • FLI’s Anna Yelizarova published the inaugural essay for the AGI Social Contract anthology, exploring a global dividend system to share the value created by AI.

  • FLI’s Anna Hehir and Maggie Munro published a new edition of The Autonomous Weapons Newsletter, offering a rundown of key events and takeaways from the recent UN New York autonomous weapons talks! Check it out here.

  • Anna was also interviewed by Global Dispatches for a podcast episode on the historic UN talks.

  • FLI President Max Tegmark was at the ATxSummit in Singapore this week, joining a talk on what to look forward to with AI (and how we can get there).

  • Max also spoke to Bloomberg about the rapid pace of AI development: “What a lot of people are underestimating is just how much has happened in a very short amount of time... Things are going very fast now.”

  • FLI’s Futures Program Director Emilia Javorsky wrote an op-ed for The Hill against ‘vibes-based’ policymaking: “The public shouldn’t have to choose between reckless acceleration and anti-tech stagnation. There’s a third path: progress with safeguards.”

  • On the FLI podcast, writer Zvi Mowshowitz joined host Gus Docker for an episode to discuss AI agents, sycophantic AI, how AI is different from other technology, and more.

  • Also on the podcast, philosopher Jeff Sebo joined to discuss all things AI consciousness.

  • And most recently, computer scientist Ben Goertzel joined for an episode to discuss what the ‘singularity’ might look like, bottlenecks to it, and how humanity should proceed.

What We’re Reading

  • Singapore Consensus: The Singapore Consensus on Global AI Safety Research Priorities was released this month, following April’s Singapore Conference on AI. Building on the International AI Safety Report backed by 33 countries, the Singapore Consensus aims to enable more impactful research and development that quickly creates safety and evaluation mechanisms, fostering a trustworthy, reliable, and secure ecosystem where AI is used for the public good. Read the report, the work of 100+ contributors from 11 countries, here.

  • ‘White-collar bloodbath’: Anthropic CEO Dario Amodei spoke to Axios with a warning about what the AI models Anthropic and others are developing could do to livelihoods, and to society at large, in the next few years, suggesting “AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years”. You can read the full Axios article here for details on how exactly that “white-collar bloodbath” might unfold.

  • What we’re watching: Famous investor Paul Tudor Jones appeared on CNBC to express his concerns about the “imminent security threat” posed by AI.

  • What we’re watching: On the topic of state AI legislation, The Inside View published an insightful documentary on the “bill that broke Silicon Valley”: California’s SB 1047, which, despite widespread support for the common-sense AI rules it proposed, was vetoed by Gov. Gavin Newsom in 2024. Watch it here.