Future of Life Institute Newsletter: Where are the safety teams?
Plus: Online course on worldbuilding for positive futures with AI; new publications about AI; our Digital Media Accelerator; and more.

Welcome to the Future of Life Institute newsletter! Every month, we bring 44,000+ subscribers the latest news on how emerging technologies are transforming our world.
If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?
Today's newsletter is a nine-minute read. Some of what we cover this month:
- AI companies are sacrificing safety for the AI race
- "Worldbuilding Hopeful Futures with AI" course
- Reminder: Apply to our Digital Media Accelerator!
- New AI publications to share
- And more.
If you have any feedback or questions, please feel free to send them to [email protected].
OpenAI, Google Accused of New Safety Gaps
"We had more thorough safety testing when [the technology] was less important."
As the race to dominate the AI landscape accelerates, serious concerns about Big Tech's commitment to safety are mounting.
Recent reports reveal that OpenAI has drastically reduced the time spent on safety testing before releasing new models, with the Financial Times reporting that testers, both staff and third-party groups, have now been given only days to conduct evaluations that previously would've taken months. In a double whammy, OpenAI also announced they will no longer evaluate their models for mass manipulation and disinformation as critical risks.
Google and Meta have also come under fire in the past few weeks for similarly concerning approaches to safety. Despite past public commitments to safety, neither Google's new Gemini 2.5 Pro nor Meta's new Llama 4 open models were released with important safety details included in their technical reports and evaluations.
All of these developments - perhaps better described as regressions - point to a concerning shift away from caution about AI risk… despite models getting more and more powerful. In Fortune, journalist Jeremy Kahn explained why: "The reason... is clear: Competition between AI companies is intense and those companies perceive safety testing as an impediment to speeding new models to market."
Apply to our Digital Media Accelerator!
A reminder that last month we launched our Digital Media Accelerator, which funds creators looking to produce content, grow channels, and spread the word about complex AI issues (e.g., loss of control to AGI, misaligned goals, and Big Tech power concentration) to new audiences. We're looking to fund content across platforms, such as YouTube explainers, TikTok series, podcasts, newsletters, and more.
Already have an audience? Want to create compelling content about AI risks?
We're accepting applications on a rolling basis - apply here and help shift the conversation. Please share widely with anyone you think may be interested!
Updates from FLI
FLI President Max Tegmark spoke to the Daily Caller about the intensifying global "AI arms race": "Whether this ends in a global AI renaissance or in disaster depends on choices we make now. I'd much rather see cooperation - even just basic communication - than a reckless contest to see who can deploy an uncontrollable AI first."
We're thrilled to have supported the development of this new (free!) online course about shaping positive futures with AI, from the Foresight Institute's Existential Hope program. Check it out now, and keep an eye out for FLI Executive Director Anthony Aguirre in the course content!
FLI's AI & National Security Lead Hamza Chaudhry spoke to Reuters about AI disruption, especially regarding human labor displacement.
Hamza also spoke to Politico about the self-improving AI models tech companies are racing to build, stating that "the product being sold is the lack of human supervision - and that's the most alarming development here."
We're proud to have supported the latest video from Pindex outlining experts' predictions about AI:
We've published the latest edition of Anna Hehir and Maggie Munro's Autonomous Weapons Newsletter, ahead of the upcoming UN talks on autonomous weapons systems.
On the FLI podcast, Astera Institute AGI Safety Researcher Steven Byrnes joined host Gus Docker to discuss brain-like AGI and its dangers.
Also on the podcast, Foresight Institute's Allison Duettman joined for an episode on the pros and cons of centralized vs. decentralized AI, and how to build human-empowering AI.
Finally, George Washington University Assistant Professor of Political Science Jeffrey Ding joined to discuss China's AI strategy.
What Weāre Reading
AI Frontiers: From the Center for AI Safety and advised by Lawrence Lessig, Yoshua Bengio, and Stuart Russell, AI Frontiers is a new digital platform featuring expert dialogue and debate about hot AI topics. Check out their fast-growing collection here.
AI 2027: Former OpenAI researcher Daniel Kokotajlo, along with Scott Alexander, Eli Lifland, and Thomas Larsen, published a chilling look at how AI could realistically take over by 2027. Explore "AI 2027" here.
Demonstrating dual-use AI: SecureBio and the Center for AI Safety recently released the Virology Capabilities Test, a benchmark measuring LLMs' ability to problem-solve in wet labs handling biological and chemical materials. Notably, the test found that OpenAI's new o3 model outperformed 90%+ of human experts - highlighting the potential for widely available AI models to aid in both helpful lab work and harmful bioweapon production. You can read more in their paper, in TIME's coverage, and in AI Frontiers.
There's room for every concern: A new study published in PNAS finds that "existential risk narratives increase concerns for catastrophic risks without diminishing the significant worries respondents express for immediate harms" - in contrast to claims that talking about AI's catastrophic risks distracts from the harms it's already causing.
Scaling laws for scalable oversight: FLI President Max Tegmark, along with Joshua Engels, David D. Baek, and Subhash Kantamneni, published a new paper trying to "quantify how smarter AI can be controlled by dumber AI and humans via nested 'scalable oversight'". You can read it now, here.
Our new paper tries to quantify how smarter AI can be controlled by dumber AI and humans via nested "scalable oversight". Our best scenario successfully oversees the smarter AI 52% of the time, and the success rate drops as one approaches AGI. My assessment is that the "Compton
— Max Tegmark (@tegmark)
2:04 PM • Apr 30, 2025