Future of Life Institute Newsletter: California's AI Safety Bill Heads to Governor's Desk

Latest policymaking updates, OpenAI safety team reportedly halved, progress towards an autonomous weapons treaty, and more.

Welcome to the Future of Life Institute newsletter. Every month, we bring 43,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is an eight-minute read. Some of what we cover this month:

  • 📰 Latest news on California's SB 1047

  • 🤝 Autonomous weapons: Progress towards a treaty

  • 📚 MIT's new AI Risk Repository

  • 🗞️ Two exciting new Substacks to share

And much more!

If you have any feedback or questions, please feel free to send them to [email protected].

SB 1047 Heads to Governor's Desk

California's proposed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, is entering the final stages of the legislative process. Having passed the Assembly and Senate votes, all that's needed now is Governor Gavin Newsom's signature.

Numerous AI experts, tech industry figures, and the California public have weighed in with supportive comments, encouraging legislators to pass the bill and Gov. Newsom to sign off on it. Among them:

  • AI pioneer and Turing Award winner Yoshua Bengio's letter and joint podcast with the bill's author, State Sen. Scott Wiener

  • Legal scholar Lawrence Lessig's article explaining that, despite what they say, Big Tech wants to avoid regulation

  • The open letter jointly authored by Bengio, Lessig, and fellow "AI godfathers" Geoffrey Hinton and Stuart Russell

  • xAI founder Elon Musk's endorsement:

  • Notion co-founder Simon Last's op-ed

  • The Los Angeles Times' editorial board's endorsement

  • Anthropic's positive comments in a letter weighing the bill's pros and cons, concluding: "In our assessment the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs."

  • Polling showing that a vast majority (77%) of the California public supports the rules SB 1047 puts forth

In contrast with the commitment to safety and desire for regulation they've previously expressed, OpenAI is opposing the bill - much to the disappointment "but not [surprise]" of former OpenAI employees, who left due to safety concerns. You can read the letter penned by those whistleblowers in response to OpenAI's opposition here.

Unfortunately, much of the opposition has centered on misleading and even inaccurate information about the bill. We've put together a document dispelling these myths and answering FAQs; we also encourage you to check out these bill summaries from Zvi Mowshowitz and SafeSecureAI respectively.

OpenAI Safety Team "Gutted"

"The departures matter because of what they may say about how careful OpenAI is being about the possible risks of the technology it is developing and whether profit motives are leading the company to take actions that might pose dangers. Kokotajlo has previously called the Big Tech's race to develop AGI 'reckless.'"

Sharon Goldman in Fortune

OpenAI's safety team has yet again made headlines - this time, with reports emerging that the team has effectively been "gutted" after a mass exodus.

Former employee and whistleblower Daniel Kokotajlo told Fortune that almost half of the team working on AGI safety has left since the start of the year. Although Daniel couldn't speak to individual employees' reasons for leaving, he shared that "People who are primarily focused on thinking about AGI safety and preparedness are being increasingly marginalized" at the company.

See the infographic below for an overview of safety controversies at the company in just the past year:

More Progress Towards a Treaty

With the UN Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) meeting this week to again discuss autonomous weapons systems, there's a clear sense of momentum building towards an international treaty on the issue.

Among key recent developments: 37 states have now endorsed the Chair's Summary from the historic Vienna Conference on Autonomous Weapons Systems, which states: "A dangerous autonomy arms race looms [...] We have a responsibility to act and to put in place the rules that we need to protect humanity". We're pleased to see this strong signal from a growing cohort of countries that a treaty is on the horizon!

The UN Secretary-General also recently released his report on autonomous weapons, which, based on almost 100 state and non-state submissions, lays it out simply: "Time is running out".

Be sure to read the latest edition of The Autonomous Weapons Newsletter for these developments and others going into the CCW!

P.S. We've recently released our Diplomat's Guide to Autonomous Weapons Systems, offering a comprehensive overview of the topic. Check it out here!

Updates from FLI

  • FLI's EU Policy Lead Risto Uuk, along with researchers from MIT, the University of Queensland, and Harmony Intelligence, has created the first living AI risk repository, categorizing over 700 AI risks.

  • AI Safety Summit Lead Imane Bello is set to release the first edition of the "AI Action Summit Newsletter". The biweekly newsletter will offer updates and analyses of the developments and political context leading up to the 2025 French AI Action Summit. Subscribe here and look out for it hitting your inbox shortly!

  • Director of Policy Mark Brakel launched "Not Another Big Tech Stack", a monthly newsletter offering his perspectives and takes on AI policy. Subscribe and read the latest edition here.

  • FLI is expanding our policy team! Learn more and apply for our multiple U.S. policy team openings here, and our Multilateral Governance Lead role here. Apply by September 8th.

  • FLI Executive Director Anthony Aguirre had an op-ed published in the Washington Post, explaining why OpenAI CEO Sam Altman's advocacy for a race to AGI is dangerous: "There's no way for humanity to win an AI arms race".

  • Anthony was also quoted in this Semafor newsletter on SB 1047.

  • FLI President Max Tegmark joined the Rich on Tech show for a segment on SB 1047 and AI risk.

  • Risto was interviewed on the EU AI Act Podcast about general purpose AI and systemic risks.

  • On the FLI podcast, XPRIZE CEO Anousheh Ansari joined host Gus Docker for an episode on how XPRIZE uses "incentivized competition" to spur innovation aimed at addressing the greatest challenges humanity faces.

  • Niskanen Center Senior Fellow Samuel Hammond also joined the podcast to discuss whether AI is progressing or plateauing, and how governments should respond.

What We're Reading

  • Code Red on AI Bio-Risk: As covered in TIME, public health experts published a new paper in Science, calling for AI regulation to mitigate the near-future risk of advanced AI models engineering critical biosecurity threats.

  • SamA Power Play: In The Guardian, AI expert Gary Marcus - who spoke alongside Sam Altman in front of the US Senate on AI oversight - outlines the causes for concern over the OpenAI CEO having as much power as he does.

  • What We're Watching: Kurzgesagt - In a Nutshell has a new video on the "race" to superintelligence, and its huge potential consequences for humanity:

  • What We're Watching: Also new from Digital Engine is a video on AI risks, particularly in the context of AI in warfare: