Future of Life Institute Newsletter: Where are the safety teams?

Plus: Online course on worldbuilding for positive futures with AI; new publications about AI; our Digital Media Accelerator; and more.

Welcome to the Future of Life Institute newsletter! Every month, we bring 44,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is a nine-minute read. Some of what we cover this month:

  • 🚫 AI companies are sacrificing safety for the AI race

  • šŸ—ļø ā€œWorldbuilding Hopeful Futures with AIā€ course

  • 🤳 Reminder: Apply to our Digital Media Accelerator!

  • šŸ—žļø New AI publications to share

And more.

If you have any feedback or questions, please feel free to send them to [email protected].

OpenAI, Google Accused of New Safety Gaps


“We had more thorough safety testing when [the technology] was less important.”

One of the testers of OpenAI’s o3 model, in the Financial Times.

As the race to dominate the AI landscape accelerates, serious concerns about Big Tech’s commitment to safety are mounting.

Recent reports reveal that OpenAI has drastically reduced the time spent on safety testing before releasing new models: the Financial Times reports that testers, both staff and third-party groups, have been given only days to conduct evaluations that previously would have taken months. In a double whammy, OpenAI also announced that it will no longer evaluate its models for mass manipulation and disinformation as critical risks.

Google and Meta have also come under fire in the past few weeks for similarly concerning approaches to safety. Despite past public commitments to safety, neither Google’s new Gemini 2.5 Pro nor Meta’s new Llama 4 open models were released with important safety details included in their technical reports and evaluations.

All of these developments - perhaps better described as regressions - point to a concerning shift away from caution about AI risk, even as models grow more and more powerful. In Fortune, journalist Jeremy Kahn explained why: “The reason... is clear: Competition between AI companies is intense and those companies perceive safety testing as an impediment to speeding new models to market.”

Apply to our Digital Media Accelerator!

A reminder that last month we launched our Digital Media Accelerator to fund creators looking to produce content, grow channels, and spread the word about complex AI issues (e.g., loss of control to AGI, misaligned goals, and Big Tech power concentration) to new audiences. We’re looking to fund content across platforms, such as YouTube explainers, TikTok series, podcasts, newsletters, and more.

Already have an audience? Want to create compelling content about AI risks?
We’re accepting applications on a rolling basis - apply here and help shift the conversation. Please share widely with anyone you think may be interested!

Updates from FLI

  • FLI President Max Tegmark spoke to the Daily Caller about the global ‘AI arms race’ that’s intensifying: “Whether this ends in a global AI renaissance or in disaster depends on choices we make now. I’d much rather see cooperation - even just basic communication - than a reckless contest to see who can deploy an uncontrollable AI first.”

  • We’re thrilled to have supported the development of this new (free!) online course about shaping positive futures with AI, from the Foresight Institute’s Existential Hope program. Check it out now, and keep an eye out for FLI Executive Director Anthony Aguirre in the course content!

  • FLI’s AI & National Security Lead Hamza Chaudhry spoke to Reuters about AI disruption, especially regarding human labor displacement.

  • Hamza also spoke to Politico about the self-improving AI models tech companies are racing to build, stating “the product being sold is the lack of human supervision — and that’s the most alarming development here.”

  • We’re proud to have supported the latest video from Pindex outlining experts’ predictions about AI.

  • We’ve published the latest edition of Anna Hehir and Maggie Munro’s Autonomous Weapons Newsletter, in advance of upcoming UN talks on autonomous weapons systems.

  • On the FLI podcast, Astera Institute AGI Safety Researcher Steven Byrnes joined host Gus Docker to discuss brain-like AGI and its dangers.

  • Also on the podcast, the Foresight Institute’s Allison Duettmann joined for an episode on the pros and cons of centralized vs. decentralized AI, and how to build human-empowering AI.

  • Finally, George Washington University Assistant Professor of Political Science Jeffrey Ding joined to discuss China’s AI strategy.

What We’re Reading

  • AI Frontiers: From the Center for AI Safety and advised by Lawrence Lessig, Yoshua Bengio, and Stuart Russell, AI Frontiers is a new digital platform featuring expert dialogue and debate about hot AI topics. Check out their fast-growing collection here.

  • AI 2027: Former OpenAI researcher Daniel Kokotajlo, along with Scott Alexander, Eli Lifland, and Thomas Larsen, published a chilling look at how AI could realistically take over by 2027. Explore it in “AI 2027”, here.

  • Demonstrating dual-use AI: SecureBio and the Center for AI Safety recently released the Virology Capabilities Test, a benchmark measuring LLMs’ ability to problem-solve in wet labs handling biological and chemical materials. Notably, OpenAI’s new o3 model outperformed 90%+ of human experts on the test - highlighting the potential for widely available AI models to aid in both helpful lab work and harmful bioweapon production. You can read more in their paper, in TIME’s coverage, and in AI Frontiers.

  • There’s room for every concern: A new study published in PNAS finds that “existential risk narratives increase concerns for catastrophic risks without diminishing the significant worries respondents express for immediate harms” - in contrast to claims that talking about AI’s catastrophic risks distracts from the harms it’s already causing.

  • Scaling laws for scalable oversight: FLI President Max Tegmark, along with Joshua Engels, David D. Baek, and Subhash Kantamneni, published a new paper trying to “quantify how smarter AI can be controlled by dumber AI and humans via nested ‘scalable oversight’”. You can read it now, here.