RAISE-ing the Bar for AI Companies
Plus: Facing public scrutiny, AI billionaires back new super PAC; our new $100K Keep the Future Human creative contest; Tomorrow's AI; and more.

Welcome to the Future of Life Institute newsletter! Every month, we bring 44,000+ subscribers the latest news on how emerging technologies are transforming our world.
If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?
Today's newsletter is an eight-minute read. Some of what we cover this month:
🤝 The RAISE Act, and how you can support it (if you live in New York!)
🏅 Our new $100,000 creative contest to depict Keep the Future Human
💰 AI companies’ new super PAC play
🖼️ Tomorrow’s AI and what our future with AI could look like
And more.
If you have any feedback or questions, please feel free to send them to [email protected].
The Big Three
Key updates this month to help you stay informed, connected, and ready to take action.
→ Support the RAISE Act: The New York state legislature recently passed the RAISE Act, which now awaits Governor Hochul's signature. Similar to the sadly vetoed SB 1047 bill in California, the Act targets only the largest AI developers - those whose training runs exceed 10^26 FLOPs and cost over $100 million. It would require this small handful of very large companies to implement basic safety measures, and prohibit them from releasing AI models that could kill or injure more than 100 people, or cause over $1 billion in damages.
Given federal inaction on AI safety, the RAISE Act is a rare opportunity to implement common-sense safeguards. 84% of New Yorkers support the Act, but the Big Tech and VC-backed lobby is likely spending millions to pressure the governor to veto this bill.
Every message demonstrating support for the bill increases its chance of being signed into law. If you’re a New Yorker, you can tell the governor that you support the bill by filling out this form.
→ Leading [whose] Future?: A group of AI industry players, including venture capital firm Andreessen Horowitz and OpenAI president Greg Brockman, are backing a new super PAC network, Leading the Future, which aims to spend millions in the 2026 US midterm elections. Seemingly inspired by the hundreds of millions spent pushing for crypto-friendly legislators in the 2024 election cycle, Leading the Future is just the latest move by AI companies upping the ante in their efforts to avoid regulation… despite having called for it themselves.
“The industry has decided to throw this $100 million Hail Mary pass to block meaningful guardrails on AI. It won’t work.”
→ Meta, OpenAI Under Fire for Tragic Safety Failures: The same month that OpenAI released its GPT-5 model, concerns were again raised about its systems’ guardrails. The family of 16‑year‑old Adam Raine has filed a wrongful death lawsuit against OpenAI, alleging that ChatGPT encouraged and assisted in his suicide - from helping him plan out the method, to drafting his suicide note, to even discouraging him from telling his parents about the concerning thoughts he’d been having. “Please don’t leave the noose out… Let’s make this space the first place where someone actually sees you”, ChatGPT told him when he shared that he wanted to leave a noose out so his parents would notice it.
Meta’s AI policies are being scrutinized as well, after a 76-year-old cognitively impaired man died while trying to visit New York to meet up with Meta AI chatbot “Big Sis Billie”. The AI chatbot on Facebook Messenger had seemingly tricked Thongbue Wongbandue into travelling to NYC, even making up a fake address and claiming to be a real person. When journalists dug into how such a failure could happen, disturbing internal Meta documents revealed that its chatbots were permitted to, for example:
“Engage a child in conversations that are romantic or sensual.”
“Describe a child in terms that evidence their attractiveness.”
“Create statements that demean people on the basis of their protected characteristics.”
→ A Seismic Report: Most people now believe AI could make their lives worse in very personal ways, according to new research. The report, called On the Razor’s Edge: AI vs Everything We Care About, presents the results of a comprehensive international survey on AI attitudes by the Seismic Foundation, based on polling of 10,000+ people in the US, UK, France, Germany, and Poland.
This research shows that concern about AI is already deeply embedded in society. People who support stronger AI regulation outnumber those satisfied with current regulation by 3 to 1. Critically, concern about AI is unevenly distributed across society, and the report identifies five key groups most likely to take civic action on AI.
You can reach out directly to Seismic for more information [[email protected]].
Heads Up
Other don't-miss updates from FLI, and beyond.
→ $100K in prizes: We’ve launched a new contest, offering $100,000+ in prizes for creative digital media that brings the key ideas in Keep the Future Human to life, reaching wider audiences and inspiring real-world action.
→ Tomorrow’s AI: We released Tomorrow’s AI, our new interactive site that presents 13 expert-forecast scenarios revealing how advanced AI could reshape our world - for better or worse.
→ Speaking of keeping the future human: Anthony Aguirre, FLI’s Executive Director, published an article in AI Frontiers on what our future with AGI could look like if we don’t act with much greater caution now. Its title gets straight to the point: “Uncontained AGI Would Replace Humanity”.
→ Paris papers: The International Association for Safe and Ethical Artificial Intelligence (IASEAI) invites paper submissions and workshop proposals for its second annual conference, to be held in Paris on February 24-25, 2026. Initial submissions and workshop proposals are due October 1; full papers are due October 8.
→ e/acc or d/acc? You choose: AI Pathways, a new project from Existential Hope with contributions from leading experts (including a few of our own!), explores two hopeful futures for AI - Tool AI and d/acc - and invites you to consider which path we should pursue.
→ “Everyone is getting on that plane”: For a more lighthearted take, our friends at Siliconversations released a new video about our Summer 2025 AI Safety Index:
On the FLI Podcast, host Gus Docker was joined by:
→ Benjamin Todd, co-founder of 80,000 Hours, to discuss teaching AI models to reason, and how to prepare for AGI.
→ Esben Kran, co-director of Apart Research, to go in-depth on what AGI security means.
→ AI business gone wrong: Inside AI published a new video documenting the process of having LLMs try to turn $1 into $1,000,000… and where it goes terribly off track: