Future of Life Institute Newsletter: California's AI Safety Bill Heads to Governor's Desk
The latest policymaking updates, OpenAI's safety team reportedly halved, progress towards an autonomous weapons treaty, and more.
Welcome to the Future of Life Institute newsletter. Every month, we bring 43,000+ subscribers the latest news on how emerging technologies are transforming our world.
If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?
Today's newsletter is an eight-minute read. Some of what we cover this month:
Latest news on California's SB 1047
Autonomous weapons: Progress towards a treaty
MIT's new AI Risk Repository
Two exciting new Substacks to share
And much more!
If you have any feedback or questions, please feel free to send them to [email protected].
SB 1047 Heads to Governor's Desk
California will imminently vote on SB 1047: light-touch AI regulations that will protect the public, safeguard innovation, and prevent AI disasters.
Why do AI experts, tech leaders, and the CA public think SB 1047 is so important?
Hear from them below:
— Future of Life Institute (@FLI_org)
5:19 PM • Aug 28, 2024
California's proposed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, is entering the final stages of the legislative process. Having passed the Assembly and Senate votes, all that's needed now is Governor Gavin Newsom's signature.
Numerous AI experts, tech industry figures, and the California public have weighed in with supportive comments, encouraging legislators to pass the bill and Gov. Newsom to sign off on it. Among them:
AI pioneer and Turing Award winner Yoshua Bengio's letter, and joint podcast with the bill's author, State Sen. Scott Wiener
Legal scholar Lawrence Lessig's article explaining that, despite what they say, Big Tech wants to avoid regulation
The open letter jointly authored by Bengio, Lessig, and fellow "AI godfathers" Geoffrey Hinton and Stuart Russell
xAI founder Elon Musk's endorsement:
This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill.
For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk… x.com/i/web/status/1…
— Elon Musk (@elonmusk)
10:59 PM • Aug 26, 2024
Notion co-founder Simon Last's op-ed
The Los Angeles Times' editorial board's endorsement
Anthropic's positive comments in a letter weighing the bill's pros and cons, concluding: "In our assessment the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs."
Polling showing that a vast majority (77%) of the California public supports the rules SB 1047 puts forth
In contrast to the commitment to safety and desire for regulation it has previously expressed, OpenAI is opposing the bill, much to the disappointment ("but not [surprise]") of former OpenAI employees who left due to safety concerns. You can read the letter penned by those whistleblowers in response to OpenAI's opposition here.
Unfortunately, much of the opposition has centered on misleading and even inaccurate information about the bill. We've put together a document dispelling these myths and answering FAQs; I also encourage you to check out the bill summaries from Zvi Mowshowitz and SafeSecureAI.
OpenAI Safety Team "Gutted"
"The departures matter because of what they may say about how careful OpenAI is being about the possible risks of the technology it is developing and whether profit motives are leading the company to take actions that might pose dangers. Kokotajlo has previously called Big Tech's race to develop AGI 'reckless.'"
OpenAI's safety team has yet again made headlines, this time with reports emerging that the team has effectively been "gutted" after a mass exodus.
Former employee and whistleblower Daniel Kokotajlo told Fortune that almost half of the team working on AGI safety has left since the start of the year. Although Kokotajlo couldn't speak to individual employees' reasons for leaving, he shared that, "People who are primarily focused on thinking about AGI safety and preparedness are being increasingly marginalized" at the company.
See the infographic below for an overview of safety controversies at the company in just the past year:
More Progress Towards a Treaty
With the UN Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) meeting this week to again discuss autonomous weapons systems, there's a clear sense of momentum building towards an international treaty on the issue.
Among key recent developments: 37 states have now endorsed the Chair's Summary from the historic Vienna Conference on Autonomous Weapons Systems, which states: "A dangerous autonomy arms race looms [...] We have a responsibility to act and to put in place the rules that we need to protect humanity." We're pleased to see this strong signal from a growing cohort of countries that a treaty is on the horizon!
The UN Secretary-General also recently released his report on autonomous weapons, which, drawing on almost 100 state and non-state submissions, lays it out simply: "Time is running out."
Be sure to read the latest edition of The Autonomous Weapons Newsletter for these and other developments heading into the CCW!
P.S. We've recently released our Diplomat's Guide to Autonomous Weapons Systems, offering a comprehensive overview of the topic. Check it out here!
Updates from FLI
FLI's EU Policy Lead Risto Uuk, along with researchers from MIT, the University of Queensland, and Harmony Intelligence, has created the first living AI risk repository, categorizing over 700 AI risks.
AI Safety Summit Lead Imane Bello will shortly release the first edition of the "AI Action Summit Newsletter". The biweekly newsletter will offer updates and analyses of the developments and political context leading up to the 2025 French AI Action Summit. Subscribe here and look out for it hitting your inbox soon!
Director of Policy Mark Brakel launched "Not Another Big Tech Stack", a monthly newsletter offering his perspectives and takes on AI policy. Subscribe and read the latest edition here.
FLI is expanding our policy team! Learn more and apply for our multiple U.S. policy team openings here, and our Multilateral Governance Lead role here. Apply by September 8th.
FLI Executive Director Anthony Aguirre had an op-ed published in the Washington Post explaining why OpenAI CEO Sam Altman's advocacy for a race to AGI is dangerous: "There's no way for humanity to win an AI arms race."
Anthony also was quoted in this Semafor newsletter on SB 1047.
FLI President Max Tegmark joined the Rich on Tech show for a segment on SB 1047 and AI risk.
Risto was interviewed on the EU AI Act Podcast about general purpose AI and systemic risks.
On the FLI podcast, XPRIZE CEO Anousheh Ansari joined host Gus Docker for an episode on how XPRIZE uses "incentivized competition" to spur innovation aimed at addressing humanity's greatest challenges.
Niskanen Center Senior Fellow Samuel Hammond also joined the podcast to discuss whether AI is progressing or plateauing, and how governments should respond.
What We're Reading
Code Red on AI Bio-Risk: As covered in TIME, public health experts published a new paper in Science, calling for AI regulation to mitigate the near-future risk of advanced AI models engineering critical biosecurity threats.
SamA Power Play: In The Guardian, AI expert Gary Marcus, who spoke alongside Sam Altman before the US Senate on AI oversight, outlines the case for concern about the OpenAI CEO holding as much power as he does.
What We're Watching: Kurzgesagt - In a Nutshell has a new video on the "race" to superintelligence, and its huge potential consequences for humanity:
What We're Watching: Also new from Digital Engine is a video on AI risks, particularly in the context of AI in warfare: