Future of Life Institute Newsletter: California Pushes for AI Legislation
A look at SB 1047; new $50,000 Superintelligence Imagined contest; recommendations to the Senate AI Working Group; and more.
Welcome to the Future of Life Institute newsletter. Every month, we bring 43,000+ subscribers the latest news on how emerging technologies are transforming our world.
If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?
Today's newsletter is a nine-minute read. Some of what we cover this month:
⚖️ Critical AI legislation in California
🎨 $50K “Superintelligence Imagined” creative contest
🛣️ Our response to the Senate AI Roadmap
🗣️ UN Secretary-General and the Pope call for governance of AI
Californians Move AI Legislation Forward
With Congress stalled on legislation to ensure the safety of AI systems, California legislators are increasingly demonstrating leadership on the pressing task of establishing accountability for AI harms. State Senator Scott Wiener has introduced one of the key proposed bills: SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
Having recently passed out of both the California Assembly Committee on Privacy and Consumer Protection and the Assembly Judiciary Committee, Sen. Wiener’s bill is one of the leading legislative proposals. It has also garnered widespread public support: a recent poll found that 77% of Californians support the bill’s call for mandatory AI safety testing, and 86% want the state to prioritize developing AI safety regulations in general.
The bill would require AI companies to safety-test their largest models in order to mitigate the catastrophic risks those models may present. Under the proposal, companies that deploy AI products without meeting the safety requirements could be held liable for resulting harm.
The bill would also create a public compute resource, CalCompute, to help balance out Big Tech’s domination of the space, and allow startups, researchers, and community groups to participate more in the AI industry.
In a state home to many of the biggest AI companies, Senator Wiener’s bill could set a nationwide precedent critical to ensuring that safe AI innovation can flourish in America. Stay tuned - we’ll keep you updated on the bill’s progress and the status of AI regulation in California.
$50,000 “Superintelligence Imagined” Contest
With tech companies such as OpenAI and Meta investing enormous resources into creating artificial general intelligence that outperforms humans across most domains, it’s important for the public to understand what this could mean for them. To support that understanding, we’ve launched “Superintelligence Imagined”, an exciting new creative contest offering five $10,000 prizes for the best creative materials addressing the question: “What is superintelligence, and how might it threaten humanity?”
Any textual/visual/auditory/spatial format is welcome. We’re looking for bold, ambitious, scientifically accurate, and informative tools, with potential to reach a large audience.
Learn more about the contest and resources available to help (including a community to find teammates, if you wish to join a team) here. Entries are due August 31, with winners announced in October.
Recommendations for the Senate AI Roadmap
Our policy team has released its recommendations for legislative action in response to the U.S. Bipartisan Senate AI Working Group’s Roadmap on AI.
Covering nine domains of AI risk and harm, from AGI to deepfakes, this document provides tangible pathways to turn the vision outlined in the Roadmap into meaningful action.
Read the full recommendations here; you can also scroll through its highlights below:
🧵🆕 We've released our recommendations for implementing the U.S. Senate AI Roadmap! 👇
In the thread below check out the highlights from our recommendations, and be sure to read them in full at the 🔗 at the bottom! ⬇️
1/13
— Future of Life Institute (@FLI_org)
8:50 PM • Jun 17, 2024
UN Secretary-General & Pope on AI
World leaders continue to voice their fears about ungoverned AI development and deployment.
UN Secretary-General António Guterres has shared several statements about the risks of ungoverned AI within the past few weeks, calling for “inclusive global governance tools” to stop us from “sleepwalk[ing] into a dystopian future.”
Artificial Intelligence is being deployed with few guardrails & little caution.
Governments, industry, academia & civil society must develop rules & guidelines for AI safety – together & before it is too late.
We have no time to waste.
— António Guterres (@antonioguterres)
8:41 PM • Jun 12, 2024
Becoming the first pope ever to address a G7 summit, Pope Francis - who has called out the challenges AI presents to humanity with increasing frequency - urged world leaders to ensure AI stays “human-centric” and under human control. He also explicitly called for a ban on autonomous weapons systems, stating that “no machine should ever choose to take the life of a human being.”
Updates from FLI
A new page on our website outlines FLI’s core positions on AI. TL;DR: “We oppose developing AI that poses large-scale risks to humanity, including via extreme power concentration, and favor AI built to solve real human problems.”
We’ve published a new edition of The Autonomous Weapons Newsletter, in which we debrief submissions to the UN Secretary-General’s autonomous weapons report, and much more.
Anna Hehir, FLI’s Autonomous Weapons Lead, spoke to POLITICO about a German defense tech company’s push to bring autonomous weapons to Europe and Ukraine:
“We don’t need weapons manufacturers claiming that unpredictable and unethical weapons will make Europe safer.”
FLI’s AI Safety Summit Lead Ima Bello gave Computer Weekly her review of the recent AI Seoul Summit.
The ongoing EU AI Act Newsletter, from FLI’s EU Research Lead Risto Uuk, is now also available in audio form.
FLI President Max Tegmark was featured in a recent DigitalEngine video on extinction risk from AI:
Executive Director Anthony Aguirre was featured in CEOWorld and World Financial Review, discussing business leaders’ roles in balancing AI’s risks and benefits.
As part of the growing coalition, actor and activist Ashley Judd shouted out our Campaign to Ban Deepfakes to her 700,000+ Instagram and Threads followers earlier this month.
In their latest newsletter, The Elders - with whom we’ve partnered to call for action from leaders on the world’s most pressing threats - summarized the panel we jointly organized in May on leadership during the climate crisis. Read that edition here.
On the FLI podcast, host Gus Docker interviewed economist Anton Korinek about all things AI and labor, and the economics of an intelligence explosion.
Founders Pledge Senior Researcher Christian Ruhl also joined for an episode about the escalating risks from U.S.-China competition, country-to-country hotlines, catastrophic biorisks, and more.
What We’re Reading
Leave it to algorithms?: In Scientific American, Tamlyn Hunt makes a strong case for keeping nuclear command, control, and communications systems (AKA the infrastructure responsible for the most powerful weapons on earth) under the control of humans, not AI.
New from our religious perspectives project: Brian Green, Director of Technology Ethics at the Markkula Center for Applied Ethics, has written the second guest post under our new Futures program highlighting religious groups’ perspectives on AI. Brian offers a Catholic vision of positive futures with “divine, human, and artificial intelligence”.
Right to Warn: Harvard Law Professor Lawrence Lessig explains why, and how, we must empower AI company whistleblowers to speak up about their safety concerns - especially while regulations on the “inherently dangerous” companies rushing to develop AGI remain absent.
DC AI lobbying explodes: Public Citizen has a new report on the staggering increase in AI lobbying in the U.S., up 120% from 2022 to 2023 alone. With 85% of AI lobbying funded by corporations or corporate-aligned trade groups seeking to water down potential legislation, Public Citizen’s Michael Tanglis shared concerns about the consequences of giving in to an industry that wishes to “regulate” itself.