Wrapping up 2025
Including: our Winter 2025 AI Safety Index; NY's new AI safety law; White House AI Executive Order; results from our Keep the Future Human contest; and more!

Welcome to the Future of Life Institute newsletter, holiday edition! Every month, we bring 70,000+ subscribers the latest news on how emerging technologies are transforming our world.
If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?
Today's newsletter is a 12-minute read. Some of what we cover this month:
🗂️ Our Winter 2025 AI Safety Index
🏛️ New York signs RAISE Act into law
📝 President Trump’s preemption Executive Order
🎨 The exciting results from our Keep the Future Human creative contest
And more.
If you have any feedback or questions, please feel free to send them to [email protected].
The Big Three
Key updates this month to help you stay informed, connected, and ready to take action.
→ Winter 2025 AI Safety Index: We released the third edition of our AI Safety Index, in which a panel of independent experts graded the safety and security practices of eight leading AI companies: OpenAI, Anthropic, Meta, DeepSeek, xAI, Z.ai, Google DeepMind, and Alibaba Cloud.
Find key takeaways below, and read the full report here; it was covered by Forbes, The Independent, Fortune, the Los Angeles Times, Reuters, Axios, and others:
A clear divide persists between the top performers (Anthropic, OpenAI, and Google DeepMind) and the rest of the companies reviewed (Z.ai, xAI, Meta, Alibaba Cloud, DeepSeek). The most substantial gaps exist in the domains of risk assessment, safety frameworks, and information sharing, driven by limited disclosure, weak evidence of systematic safety processes, and uneven adoption of robust evaluation practices.
Existential safety remains the industry’s core structural weakness. All of the companies reviewed are racing toward AGI/superintelligence without presenting any explicit plans for controlling or aligning such smarter-than-human technology, thus leaving the most consequential risks effectively unaddressed.
Despite public commitments, companies’ safety practices continue to fall short of emerging global standards. While many companies partially align with these emerging standards, the depth, specificity, and quality of implementation remain uneven, resulting in safety practices that do not yet meet the rigor, measurability, or transparency envisioned by frameworks such as the EU AI Code of Practice.
→ Preemption Update: Despite widespread, bipartisan opposition, the White House issued a legally dubious Executive Order in December directing federal agencies to discourage “burdensome” state-level AI laws by, for example, potentially withholding federal funding or challenging them in court. FLI’s Head of U.S. Policy, Michael Kleinman, shared the following in response:
This David Sacks-led executive order is a gift for Silicon Valley oligarchs who are using their influence in Washington to shield themselves and their companies from accountability. No other industry operates without regulation and oversight, be it drug manufacturers or hair salons; basic safety measures are not just expected, but legally required. AI companies, in contrast, operate with impunity. Unregulated AI threatens our children, our communities, our jobs and our future.
Americans across the political spectrum - including both Republicans and Democrats - are overwhelmingly in favor of reining in major AI companies, and decisively oppose preventing states from taking action to regulate them. The Senate rejected preemption by a vote of 99-to-1 in July, and last month, Republicans and Democrats in Congress defeated a second attempt by Big Tech to block state AI regulation after another public backlash, particularly among conservative state lawmakers. There is no democratic mandate for this kind of preemption by executive fiat, and American families and communities deserve better.
→ RAISE Act passes: In good news out of the U.S., New York Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act. The Act’s passage is a huge win, making New York the second state after California to regulate the development of advanced AI systems.
The final version of the Act requires developers of the industry’s most advanced AI models to “describe in detail how they handle” each element of their frontier AI framework and report critical safety incidents within 72 hours, with $1 million+ penalties for violations. It also creates an oversight office within the Department of Financial Services to assess frontier models and implement the Act. It remains to be seen how the new Executive Order might affect the RAISE Act, but we’ll keep you in the loop.
Heads Up
Other don't-miss updates from FLI, and beyond.
→ Keep the Future Human Creative Contest: We’re thrilled to announce the winners from our Keep the Future Human Creative Contest!
From 300+ submissions responding to FLI Executive Director Anthony Aguirre's "Keep the Future Human" essay and depicting the hopes and challenges of a human future in the age of advanced AI, we awarded over $100,000 in total across 5 grand prize winners, 10 runners-up, and 9 special prize winners.
Explore the grand prize winning projects below, and the complete list of winners here.
1. The AI After Tomorrow by Beatrice Malfa and Paolo Tognozzi. A cooperative print-and-play board game for two players, inspired by the real-world risks and complexities of AGI. Players must work together to contain the rise of an AGI before it becomes uncontrollable and reshapes humanity’s future, discovering the four Solutions to contain AGI before six Consequences occur and threaten humanity itself.
2. The Alignment Game: Can You Keep the Future Human? by Radina Kraeva. Step into the shoes of an AI Policy Czar in this interactive simulation set in 2025-2026. Navigate seven pivotal decisions on compute limits, AI autonomy, liability, and global collaboration. Every choice you make ripples through the world, shifting AGI risk, public trust, and international cooperation, and steering the story toward one of six possible futures.
3. Will AI Destroy Humanity? by Vin Sixsmith and Renzo Stadhouder. A 3D animated walkthrough exploring the dangerous AGI race and how we can choose a safer path. Features visual storytelling that makes complex AI safety concepts accessible, covering the four measures to prevent uncontrolled AGI and what actions you can take to help keep the future human.
4. The Choice Before Us by Nick Shapiro. The Choice Before Us is an interactive narrative game where players run an AI startup and confront the same escalating pressures described in Keep The Future Human. As they unlock extraordinary breakthroughs for humanity, rising autonomy, generality, and intelligence push their systems toward the AGI threshold.
5. The Button by Vaibhav Jain. What if the people building AGI don't want to build it? A short story told from the perspective of an AI alignment researcher at a fictional leading AI lab. She's part of the race toward artificial general intelligence - and she is terrified of winning. This story explores the most overlooked perspective in the AI safety conversation: the people inside the companies.
→ New religious RFP: With our new Request for Proposals on religious projects tackling the challenges posed by the AGI race, we’re looking to support initiatives that educate and engage specific religious groups, bringing them to the table in the fight for a positive AI future, or that organize grassroots public outreach on AI issues, helping religious communities make their voices heard and protect their values. Learn more and apply by February 2 here.
→ Reminder: Apply for FLI postdoc fellowships: Applications for our technical postdoc fellowships are due January 5. Don’t forget to apply for this opportunity to receive an annual $80,000 stipend, an additional research fund, and extensive networking opportunities.
→ Are we ready for this?: In a new video, InsideAI put an AI ‘girlfriend’ into a humanoid robot, prompting stark warnings about what dangers might await us on humanity’s current path with AI:
On the FLI Podcast, host Gus Docker was joined by:
→ Steven Adler, former OpenAI safety researcher, to discuss how the AI race undermines AI safety.
→ David Duvenaud, associate professor of computer science and statistics at the University of Toronto, to discuss how humans could lose power without an AI takeover.
We also released two new highlight reels, from Conjecture CEO Connor Leahy’s FLI Podcast episode on why AGI threatens human extinction, and AI 2027 co-author Daniel Kokotajlo’s episode on how the AI race ends in disaster.