Future of Life Institute Newsletter: On SB 1047, Gov. Newsom Caves to Big Tech

A disappointing outcome for the AI safety bill, updates from UNGA, our $1.5 million grant for global risk convergence research, and more.

Welcome to the Future of Life Institute newsletter. Every month, we bring 43,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is a ten-minute read. Some of what we cover this month:

  • ❌ Governor Newsom vetoes California’s AI safety bill

  • 🤝 Intergenerational call-to-action with FLI x The Elders

  • 💰 $1.5 million granted to the Federation of American Scientists

  • 📚 Our new AI safety page

And much more!

If you have any feedback or questions, please feel free to send them to [email protected].

Newsom Vetoes SB 1047

The furious lobbying against the bill can only be reasonably interpreted in one way: these companies believe they should play by their own rules and be accountable to no one. This veto only reinforces that belief. Now is the time for legislation at the state, federal, and global levels to hold Big Tech to their commitments.

Anthony Aguirre, FLI Executive Director

After passing votes in both the California State Assembly and Senate, and despite overwhelming support from a diverse array of stakeholders (including Encode Justice, the National Organization for Women, SAG-AFTRA, the California Federation of Labor Unions, and 80% of the public across the U.S.), California Governor Gavin Newsom has vetoed SB 1047.

With its balanced, light-touch legislation - codifying into law the safety testing major AI companies already claim to be doing - the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act had the potential to steer AI development towards a safer future for all. As many have pointed out, it also would have established California as a leader in responsible AI innovation, whilst safeguarding the industry itself against disasters which could reduce the public’s trust in AI even further. As FLI Executive Director Anthony Aguirre said in his Bulletin of the Atomic Scientists op-ed about the bill last week, “For AI to be sustainable, it must be safe. As with any transformative technology, the risks imperil the benefits.”

Despite the disappointing result, this is only the beginning. The fact that SB 1047 generated so much vocal support, from so many different people and organizations, speaks to how quickly public concern about AI risk is growing. The public is ready to hold Big Tech accountable, and that readiness gives us hope that further action will be taken.

The incredible momentum and advocacy surrounding the bill remind us that progress takes time. Even without the outcome we wanted, the dedication of Senator Scott Wiener, the bill’s sponsors, and all of its other supporters (including those of you who heeded our call to action last week) has set the stage for similar efforts in the near future.

Read our statement about the veto, as covered in the Washington Post, here. Additionally, you can learn more about the widespread support that SB 1047 garnered in our two videos below:

Our $1.5 million Grant to the Federation of American Scientists

We’re proud to announce that we recently made a $1.5 million grant to the Federation of American Scientists (FAS) to support their research into how AI could exacerbate other global risks. Over this 18-month project, FAS will explore AI’s impact on key risk areas including nuclear weapons, biosecurity, military autonomy, and cyber risk.

The project will feature a series of high-level workshops with leading global experts and officials on various intersections of AI and global risk. Intended to help inform policymakers working on these critical issues, the project will also include policy sprints, fellowship programs, and targeted research efforts, culminating in a 2026 international summit on AI and global risks.

FLI x The Elders: UNGA Edition

During the UN General Assembly proceedings in New York earlier in September, FLI was proud to once again partner with The Elders to host our #LongviewLeadership event.

Featuring Elders, experts, and youth leaders, the event kicked off the Intergenerational Call to Action, which emphasizes the urgent need for collaborative, bold leadership across generations to address existential threats from the climate crisis, pandemics, nuclear weapons, and ungoverned AI. As FLI President Max Tegmark emphasized during his address, governance is urgently needed to ensure that, for example, AI is developed to benefit all of humanity.

You can find a recording of the panel discussion on YouTube, and read the full Call to Action here.

Additionally, we’ve collaborated with The Elders on a new video series, featuring renowned global experts describing practical solutions to the existential risks outlined above. You can watch the entire video series now.

Updates from FLI

  • FLI’s AI Safety Summit Lead Ima Bello hosted her second AI Safety Breakfast in Paris, this time in conversation with Dr. Charlotte Stix, Head of AI Governance at Apollo AI Safety Research. Watch a recording of the first Breakfast from August with Stuart Russell here, and keep an eye out for the recording from this one coming soon.

    • Ima’s next AI Safety Breakfast will be held online on October 7, with Yoshua Bengio. RSVP to attend for free here.

    • Be sure to also check out and subscribe to Ima’s new biweekly AI Action Summit Substack, which now has a second edition out.

  • We’ve launched a new webpage dedicated to the topic of AI safety, and the need for regulation. Check it out here, and let us know if we’re missing any key resources!

  • Our policy team recently released a new living guide to AI regulatory work, budgets and programs across U.S. federal agencies. This agency map breaks down AI-related activities across the Departments of State, Energy, Commerce, and Homeland Security, as well as independent Executive Branch agencies.

  • Director of Policy Mark Brakel has a new blog post on the FLI site, cross-posted from his Substack. In it, Mark argues that the U.S. is better served by taking a cooperative, rather than adversarial, approach to China on AI.

  • Executive Director Anthony Aguirre spoke to the Associated Press about the need for AI regulations like those in SB 1047, and how compute measurements like FLOPs (floating-point operations) offer one of the best thresholds we have - for now - for assessing AI models.

  • As part of our religious perspectives on AI series, giving voice to diverse spiritual viewpoints on how AI might shape humanity’s future, we have published two new guest blog posts.

  • On the FLI podcast, Founders Pledge researcher Tom Barnes joined host Gus Docker for an episode on layers of defense against unsafe AI, how we can build a more resilient world, and more.

  • Ryan Greenblatt, AI safety researcher at Redwood Research, also joined the podcast to discuss AI control and its challenges, his timelines, misaligned AI, and more.

What We’re Reading

  • OpenAI’s Open Secret: OpenAI is in the news for its restructuring into a for-profit company and the simultaneous departure of three executives. This article argues that, under the leadership of CEO Sam Altman, the company long ago began shifting towards profitability over its original commitment to safety.

  • Whistleblowers Protect Us. Who’s Protecting Them?: In the New York Times, why whistleblower protections - largely non-existent in the AI industry, for now - are necessary for safe AI development, especially given the lack of transparency from the AI companies developing the most advanced models.

  • Great Ideas at IDAIS: At the third International Dialogue on AI Safety last month, AI experts Stuart Russell, Andrew Yao, Yoshua Bengio, and Ya-Qin Zhang joined forces on a statement asserting that AI safety is a global public good, necessitating global cooperation. Read the full letter, signed by a number of public figures, here.