Future of Life Institute Newsletter: Our Pause Letter, Six Months Later

Reflections on the six-month anniversary of our open letter, our UK AI Safety Summit recommendations, and more.

Welcome to the Future of Life Institute newsletter. Every month, we bring 40,000+ subscribers the latest news on how emerging technologies are transforming our world - for better and worse.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe here?

Today's newsletter is a 6-minute read. We cover:

  • ⏸️ Six months since our pause letter

  • 📰 FLI’s UK AI Safety Summit recommendations

  • 📻 Out now: New episodes of the Imagine A World podcast

  • 🏫 Our Buterin PhD and postdoc fellowship applications are open

FLI’s Pause Letter, Six Months Later

September 22nd marked six months since we released our open letter calling for a six-month pause on large-scale AI development. Garnering worldwide media coverage and 30,000+ signatures - including many from leading experts, researchers, and industry figures - the letter made a notable impact, kickstarting the conversation around existential risks from AI.

Despite this, a pause never materialized - if anything, the “arms race” between AI corporations has only intensified in the last six months. In our pause letter expiration statement, eight key signatories reflected on what has happened, and what needs to happen next during this critical, shrinking window of opportunity.

We need to keep the pace up and cannot slacken now.

Dr. Danielle Allen on the six-month FLI pause letter anniversary.

We’ve also released the following video calling for U.S. lawmakers to step in and regulate AI before it’s too late.

Find more media coverage from the pause letter anniversary below:

Our Recommendations for the UK AI Safety Summit

The UK AI Safety Summit from November 1-2 presents the biggest opportunity yet for global coordination on AI regulation - if organizers can prevent it from falling into three traps identified by FLI:

  1. Letting tech companies write the legislation.

  2. Turning this into a geopolitical contest of the West versus China.

  3. Focusing only on existential threats, or conversely, only on current harms.

We propose the following three high-level objectives for the Summit:

  1. Establish a common understanding of the severity and urgency of AI risks.

  2. Make the global nature of the AI challenge explicit, recognizing that all of humanity has a stake in this issue, and that some solutions require a unified global response.

  3. Embrace the need for urgent government intervention, including through hard law.

First published in Politico Pro, our full list of recommendations for the Summit, including a post-Summit road map, can be found here.

Have You Listened to ‘Imagine A World’ Yet?

‘Imagine A World’ is our new podcast series exploring eight different positive visions of what our reality could look like in 2045.

From greater inclusivity to new democratic models to solutions for the climate crisis, these worlds each capture a plausible future in which humanity is transformed for the better by technology such as advanced AI, rather than disempowered by it.

The first five of eight episodes are now available here.

Tune in on YouTube, Spotify, or Apple Podcasts, and be sure to like and subscribe! We’ll be releasing a new episode every week for the next three weeks.

Apply Now for our Vitalik Buterin PhD and Postdoctoral Fellowships!

A reminder that applications are open for our PhD and postdoctoral fellowships focused on AI existential safety research. The fellowship is global and open to all, regardless of nationality or background; we are seeking a diverse applicant pool. All Fellows will receive applicable tuition and fees, as well as a stipend and a research/travel fund. Please share this excellent opportunity widely.

Current or future PhD student intending to research AI existential safety? The deadline for PhD fellowship applications is November 16, 2023 at 11:59 pm ET.

Current or future postdoc working on AI existential safety research? The deadline for postdoctoral fellowship applications is January 2, 2024 at 11:59 pm ET.

Governance and Policy Updates

AI policy:

  • As encouraged by FLI, the British Government has confirmed that China has been invited to the UK AI Safety Summit.

  • Reflective of the greater shift happening in conversations about AI safety, European Commission President Ursula von der Leyen’s 2023 State of the Union address referred to possible extinction risk from AI - and our “narrowing window of opportunity” to guide AI development responsibly.

  • The first meeting of U.S. Senate Majority Leader Chuck Schumer’s AI Insight Forum was held September 13th, with over a dozen tech executives convened to help inform lawmakers on AI. FLI’s Landon Klein was interviewed about it on NBC News NOW - watch it here at the 7:43 mark.

Biosecurity:

  • The 78th session of the UN General Assembly has wrapped up. Notably, this session included the Assembly’s first ever high-level meeting on pandemic prevention, preparedness, and response, resulting in the adoption of a political declaration to strengthen international collaboration and coordination on the topic.

Updates from FLI

  • FLI’s Anna Hehir spoke to The Hill about the “Pandora’s box” of AI in military use.

  • FLI President Max Tegmark was honoured as one of TIME’s 100 Most Influential People in AI 2023.

  • FLI’s Carlos Ignacio Gutierrez, Risto Uuk, and Anthony Aguirre, along with Claire C. Boine and Matija Franklin, published an article proposing a functional definition of the term “general purpose AI systems”.

  • We’ve built a compliance checker tool on our dedicated EU AI Act website, for “providers” and “deployers” of AI systems to assess which legal obligations their system may be subject to once the Act comes into effect.

  • We’ve created a compilation of introductory resources on AI risk for those who are new to conversations around AI and AI risks.

  • On the FLI podcast, host Gus Docker spoke to Tom Davidson from Open Philanthropy about AI takeoff speeds. He also interviewed Johannes Ackva from Founders Pledge about the main causes of, and most promising solutions for, climate change.

New Research: Provably safe AI systems

A path to controllable AGI?: Steve Omohundro and FLI’s Max Tegmark published a paper outlining how mathematical proofs of safety, e.g., written into “proof-carrying code”, are powerful, accessible tools for developing safe artificial general intelligences (AGIs).

Why this matters: As Omohundro and Tegmark argue, creating these “provably safe systems” is the only way we can guarantee safe, controlled AGI, in contrast to the possible extinction risk presented by unsafe/uncontrolled AGI.

What We’re Reading

  • AGI Safety Weekly: A new publication that tracks and shares weekly updates on AI safety - check it out and subscribe below:

  • New AIPI polling: More polling has recently come out from the AI Policy Institute (AIPI), showing that Americans don’t want superintelligent AI, and favour restricting AI development - even at the risk of China outpacing the U.S.

  • Evaluating international AI institutions: Matthijs M. Maas and José Jaime Villalobos present a literature review of models, examples, and proposals for governance through international AI institutions.

  • Lessons from Oppenheimer: In TIME Magazine, Conjecture’s Andrea Miotti makes a compelling case for governing AI development in the same way nuclear technology has been governed.

  • What we’re watching: This video from 80,000 Hours captures the existential risks presented by AI, and outlines what we can do to mitigate them.

Hindsight is 20/20

"Stanislav Yevgrafovich Petrov" by Stanislav Petrov is marked with CC0 1.0.

On September 26th, 1983, a Soviet military officer named Stanislav Petrov may have saved the world.

Petrov, monitoring an early-warning missile detection system, was alerted to five incoming U.S. missiles.

He had a feeling something was off, judging that the U.S. would not start an attack with only five missiles. However, he had mere moments to react: protocol was to report the alerts up the chain of command, which would almost certainly have resulted in his superiors launching nuclear missiles in response.

He decided to ignore the system and report the warnings as a false alarm - a judgment later confirmed to be correct. If Petrov had blindly followed the system’s report, a catastrophic nuclear exchange would likely have followed.

This terrifying near-disaster is just one of many examples highlighting why decisions like the one Petrov faced must remain under human control.

Our recent short film, "Artificial Escalation", explores what can happen if we entrust such consequential decisions to algorithms. Let us heed these warnings from the past and work to safeguard our future.