85 seconds to disaster, while AI CEOs play chicken

Including: Davos 2026 highlights (and disappointments); ChatGPT ads; Doomsday Clock update; Trump voters want AI regulation; and more.

Welcome to the first Future of Life Institute newsletter of 2026! Every month, we bring 70,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is a 10-minute read. Some of what we cover this month:

  • 👀 Key takeaways from Davos 2026

  • ⏰ Doomsday Clock moves closer to midnight

  • 💲 What introducing ads into ChatGPT could mean

  • 🙅‍♂️ Trump voters in red states push back against AI acceleration

And more.

If you have any feedback or questions, please feel free to send them to [email protected].

The Big Three

Key updates this month to help you stay informed, connected, and ready to take action.

→ 85 seconds to midnight: The Bulletin of the Atomic Scientists updated its Doomsday Clock for 2026, moving it four seconds closer to midnight - the closest humanity has ever been to global catastrophe. The Bulletin attributed the new time to “dangerous trends in nuclear risk, climate change, disruptive technologies like AI, and biosecurity”.

❝

“Catastrophic risks are on the rise, cooperation is on the decline, and we are running out of time. Change is both necessary and possible, but the global community must demand swift action from their leaders.”

Alexandra Bell, President of the Bulletin of the Atomic Scientists

→ Davos highlights: 3,000+ leaders from the political and business worlds convened in Davos, Switzerland this past month for the annual World Economic Forum meeting. We were honoured to attend, with FLI President Max Tegmark joining historian and author Yuval Noah Harari at Bloomberg House for a discussion on human agency, governing AI, and the future of humanity.

A striking theme emerged from Davos: AI company CEOs talking openly about AI risk. Google DeepMind CEO Demis Hassabis advocated for “vitally needed” international AI safety standards and warned that a post-AGI job market would be “uncharted territory”. He even said he would support a pause on advanced AI development to allow regulation and society to catch up, if other countries and companies would pause too. Anthropic's Dario Amodei acknowledged that advanced AI will likely trigger widespread unemployment and inequality.

Yet when presented with a simple solution - slowing down - Amodei rejected it. Anthropic "can't" decelerate, he argued, because competitors are racing ahead and any slowdown agreement would be unenforceable.

FLI Executive Director Anthony Aguirre pushed back against this assertion, arguing that game-theoretic assurance contracts could offer one of the few remaining off-ramps from our current trajectory:

“I think you both fully understand that we're in a worst-case scenario as envisaged when you started in this years ago: the all-out race with zero cooperation is most likely to get us some mix of war, large-scale accident, gradual disempowerment, and uncontrolled existentially-risky singularity. This is one of the very few offramps remaining, and probably the best.”

Anthony Aguirre, responding to Dario Amodei

→ Ads in AI?: Despite CEO Sam Altman saying less than two years ago that introducing ads into AI would be a “uniquely unsettling” “last resort”, OpenAI have begun testing ads in their basic ChatGPT subscription tiers. With 800 million weekly users, 95% of whom have just a free plan, approximately 760 million weekly users will now see ads at the bottom of ChatGPT responses. Though OpenAI claim ads won’t affect their LLMs’ actual output to users, Emerge News point out that Google made similar claims when they introduced ads… then ultimately caved to revenue pressure.

Heads Up

Other don't-miss updates from FLI, and beyond.

→ 📩 IASEAI'26: Paris, February 24–26: Join the second annual International Association for Safe & Ethical AI (IASEAI) meeting at UNESCO House in Paris, bringing together academics, policymakers, civil society, and industry experts to help shape the future of AI. Register here.

→ Trump voters against AI accelerationism: A new survey from the Institute for Family Studies finds that Trump voters in red states widely support candidates who will regulate AI companies, while opposing those with policies to accelerate AI development.

Key takeaways:

  • Americans support holding AI companies liable for harms.

  • AI acceleration is a political loser for Republicans, especially in red states.

  • When comparing strong messaging about AI, voters agree more strongly with anti-acceleration statements from Pope Leo XIV and Senator Josh Hawley than with pro-acceleration statements from AI investor Marc Andreessen.

→ We’re hiring: Our operations team is growing! We're looking for a highly communicative generalist who can take ownership of 1) legal, compliance, and contracts, and 2) planning our bi-annual staff retreats. If this sounds like you, learn more and apply here by February 23rd.

→ Max in the Wall Street Journal: FLI President Max Tegmark was profiled in the Wall Street Journal, discussing the origins of FLI and our latest work trying to steer AI towards benefitting humanity.

→ New from Digital Engine: Our friends at Digital Engine released a new video showing the “dangerous new leap” made by AI - watch it below:

→ On the FLI Podcast, host Gus Docker was joined by:

  • Nora Ammann, technical specialist at the Advanced Research and Invention Agency, to discuss how to steer a slow AI takeoff toward resilient and cooperative futures.

  • Oly Sourbut, researcher at the Future of Life Foundation, to discuss how AI could help humanity reason better.

  • Deric Cheng, Director of Research at the Windfall Trust, to discuss how AI could reshape the social contract and global economy.

We also released two new highlight reels: one from “Nuclear War” author Annie Jacobsen’s FLI Podcast episode, covering a second-by-second timeline of a nuclear war breaking out; and one from computer scientist Ben Goertzel’s episode on facing superintelligence and what differentiates the current “AI boom”.