Future of Life Institute Newsletter: A pause didn't happen. So what did?

Reflections on the one-year Pause Letter anniversary, the EU AI Act passes in EU Parliament, updates from our policy team, and more.

Welcome to the Future of Life Institute newsletter. Every month, we bring 43,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is an 11-minute read. Some of what we cover this month:

  • ⏸️ Our pause letter, one year later.

  • 🇪🇺 European Parliament has passed the EU AI Act.

  • 📫 The first edition of our Autonomous Weapons Newsletter.

  • 🌐 Governance and policy updates from around the world.

The Pause Letter: One Year Later

As of March 22nd, it’s now been a year - and quite a year - since we released our open letter calling for a six-month pause on giant AI experiments, making global headlines and helping propel forward discussions around AI risks.

Even AI companies that take safety seriously have adopted the approach of aggressively experimenting until their experiments become manifestly dangerous, and only then considering a pause. But the time to hit the car brakes is not when the front wheels are already over a cliff edge. Over the last 12 months developers of the most advanced systems have revealed beyond all doubt that their primary commitment is to speed and their own competitive advantage. Safety and responsibility will have to be imposed from the outside. It is now our lawmakers who must have the courage to deliver – before it is too late.

FLI Executive Director Anthony Aguirre, “The Pause Letter: One year later

Despite a pause not materializing, an unbelievable amount of progress and momentum has developed over the past year. Complementing the reflections from signatories that we shared on the letter’s six-month anniversary, we’ve compiled a list on X of some key developments since the letter was released:

Read FLI Executive Director Anthony Aguirre’s reflections on the one-year anniversary in full.

European Parliament adopts the AI Act!

Years of dedication and hard work have (finally!) resulted in the adoption of the world’s first comprehensive AI law. On March 13, European Parliamentarians voted overwhelmingly in favour of the EU AI Act. This landmark piece of legislation is set to support European innovation whilst protecting citizens from the myriad risks and harms AI presents - as long as it’s implemented thoughtfully and effectively.

Pending final checks before the Act is formally endorsed by the European Council, the new European AI Office will be a key body working on its implementation and enforcement. As the Office ramps up recruitment for open roles, Dragoș Tudorache, the co-rapporteur on the AI Act, emphasized the need to staff it with AI “Oppenheimers”.

Looking to the next European Commission mandate as the current term nears its end, FLI’s policy team has published key recommendations for the Act’s successful implementation.

Check out the summary below, and read them in full here.

Updates from our Autonomous Weapons team

  • The inaugural edition of our Autonomous Weapons Newsletter is now available!

    In the newsletter, FLI’s Anna Hehir and Maggie Munro (👋) cover everything related to autonomous weapons systems from the past month, including our new weapons database (more on this below), policymaking efforts, recent and upcoming conferences, and more.

    If you missed out on receiving it in your inbox, don’t worry - you can read it online now. Be sure to subscribe for future editions!

  • The autonomous weapons database.

    With our new Autonomous Weapons Watch database, we aim to inform the general public, journalists and policymakers about current autonomous weapons capabilities and those in development.

  • 17-18 April: Freetown conference.

    Sierra Leone is hosting an Economic Community of West African States (ECOWAS) conference on a regional perspective of autonomous weapons systems, as work towards an international treaty continues to advance. Learn more about the conference and find details on attending here.

  • 29-30 April: Vienna conference.

    Austria will host the first ever global conference on autonomous weapons, bringing together states, experts, civil society, academia, industry and the media to discuss the most pressing questions surrounding autonomous weapons regulation.

    Register to attend by 8 April; sponsorship is available.

Governance and Policy Updates

  • In a welcome show of cross-border cooperation, the International Dialogues on AI Safety in Beijing earlier this month brought together a cohort of leading global AI experts from China and the west to discuss how to mitigate AI’s extreme risks, including ‘red lines’ never to be crossed.

  • In the 2024 State of the Union address, U.S. President Joe Biden called upon Congress to pass legislation regulating AI in order to “harness the promise of AI and protect us from its peril”.

  • The U.S. Intelligence Community’s annual threat assessment referenced both ongoing harms (e.g., rampant misinformation) and catastrophic risks (e.g., enabling the creation of chemical weapons) as examples of the “growing threat to national security” presented by the pace of AI development.

  • The latest in a series of decisions and reversals, India’s Ministry of Electronics and IT is now reversing its previous advisory which required tech companies to receive government approval before deploying new AI systems.

Updates from FLI

  • As part of our work advocating for an international treaty on autonomous weapons, we’re seeking a project lead to create demonstrations of autonomous drone swarms. Submit your proposal for this project by May 12.

  • We’ve published two recent policy papers with recommendations directed towards the European Commission.

    • The first, covered in Politico, provides feedback to the Commission’s consultation on competition in the generative AI market.

    • As mentioned earlier, we’ve also laid out our recommendations for the next Commission mandate, specifically on ensuring successful implementation of the EU AI Act.

  • FLI President Max Tegmark, with The Elders chair and former Irish President Mary Robinson, wrote a joint op-ed in Le Monde reflecting on the need for “political courage” to urgently address the most pressing issues the world faces. If you haven’t signed our shared open letter with The Elders reflecting the same call to action, you can add your name here.

  • At SXSW, Emilia Javorsky, Director of FLI’s Futures program, spoke on a panel about AI and the future of loneliness. FLI’s Executive Director Anthony Aguirre also joined Frances Haugen, Jeffrey Ladish and Emily Schwartz for a discussion on the interplay of AI, nuclear weapons, and social media.

  • Max spoke to Politico Europe about the increasing public concern - and demand for action - about AI’s risks and harms.

  • Anthony was quoted in Semafor about the risks of expanding the definition of “AI safety”, and the precision needed to thoroughly address AI-related risks and harms.

  • ANSA Europa covered our report looking at the potential impact of the provisional Violence Against Women Directive on establishing deepfake liability in Europe.

  • On the FLI podcast, host Gus Docker interviewed Holly Elmore, from Pause AI, about pausing frontier AI, the social dynamics of AI risk, and cooperation on AI/AI safety. He also spoke to AI Impacts founder Katja Grace about the results from the largest-ever survey of AI researchers.

What We’re Reading

  • U.S. advised to move “quickly and decisively”: A new report commissioned by the U.S. State Department warns that the U.S. must take urgent action, e.g., through a ban on training AI models using beyond a certain level of computing power, to avert up to “extinction-level” threats presented by AI.

  • At the Brink: A new multimedia series from the New York Times examines the modern nuclear threat, and the volatile future we face as nuclear arsenals continue to grow amidst global instability.

  • Making a deepfake is easier than ever: The Times’ science reporter clones his coworker’s voice to explore just how easy it is to make an eerily convincing audio deepfake. Spoiler alert: it took him only six minutes.

  • Americans united on deepfakes: With AI-enabled image generators becoming more advanced and accessible, new polling from the AI Policy Institute finds that two-thirds of Americans polled believe that AI model developers should be legally liable for their model’s actions. Read the full results here.

New Research: How do we evaluate “dangerous capabilities” in frontier models?

“Dangerous capability” evaluations: New research from Google DeepMind builds on existing literature to introduce a new methodology for evaluating dangerous capabilities presented by specific AI models. Their evaluation suite breaks down these capabilities into four categories: persuasion and deception, cybersecurity, self-proliferation, and self-reasoning.

Why this matters: This research has a larger scope in mitigating risks from AI - as noted by the authors, “to know what risks to mitigate, and what the stakes are, we must know the underlying capabilities of the system”. Additionally, despite finding no evidence of strong dangerous capabilities in the models evaluated as part of their research, the authors flag early warning signs present, necessitating extreme caution as larger and larger models are developed.