Future of Life Institute Newsletter: Wrapping Up Our Biggest Year Yet

A provisional agreement is reached on the EU AI Act, highlights from the past year, and more.

Welcome to the Future of Life Institute newsletter. Every month, we bring 41,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter, our last of 2023, is a quick seven-minute read. We cover:

  • 🎉 The provisional agreement on the EU AI Act.

  • Reflecting on the pivotal year we’ve had.

  • 🤳 New opportunities, updates, & content for your consideration over the holiday period.

Happy holidays!

Deal Reached on the EU AI Act

After deadlock in the EU AI Act trilogue negotiations threatened to undermine years of hard work on the legislation, a political agreement reached on December 8th restored AI safety advocates’ faith in the Act’s comprehensiveness.

As you may recall, negotiations came to a halt in November due to French, German, and Italian opposition to including foundation models in the regulation - endangering not just the Act’s effectiveness, but also the possibility of lawmakers reaching an agreement before the deadline earlier this month.

However, following pushback from AI experts, academics, civil society organizations, and many others, a provisional deal - including key foundation model regulation - was ultimately agreed upon.

This pivotal deal is a major step forward: it tangibly creates a safer environment for AI innovation in the EU, and it will powerfully shape how other governments around the world regulate AI.

During this critical period of negotiations, FLI’s EU Research Lead Risto Uuk and Jaan Tallinn - co-founder of Skype, Kazaa, and FLI - published an op-ed explaining how neglecting to regulate foundation models would hurt startups; Risto also had a commentary on the EU AI Act published in Barron’s.

We also published a webpage - quoted in Politico - explaining the need for robust regulation in the Act, as well as this table comparing key AI Act proposals. This X thread summarizes some of the other coverage from this period.

For updates on the AI Act as its technical details are finalized and voted on over the coming months, be sure to visit our dedicated website and subscribe to our bi-weekly EU AI Act newsletter.

FLI’s 2023 Wrapped

In many ways, 2023 has been the biggest year yet for the Future of Life Institute, and for many of the areas in which we work. Needless to say, we’ve been busy. Below, we’ve listed ten things we’re most proud of from the past year:

  1. Our six-month pause open letter. In March, we released a letter calling for a six-month pause on the training of AI systems more powerful than GPT-4. Signed by 30,000+ experts, researchers, leaders, and more, the letter sparked meaningful discussion about AI safety around the world.

  2. Participating in key multi-stakeholder events on AI safety. We were honoured to take part in important meetings on AI safety this year, such as the UK AI Safety Summit, and to have FLI President Max Tegmark address the U.S. Congress at the AI Insight Forum.

  3. First-ever UN resolution on autonomous weapons adopted. We have been calling attention to this issue for years alongside many like-minded organizations, and the resolution’s adoption was a critical step towards restricting these weapons systems.

  4. “Artificial Escalation” and our work advocating against the integration of AI into nuclear command, control, and communications (NC3). The release of our short film “Artificial Escalation” complemented our ongoing work in this area - and was even followed by a screening and panel discussion in Congress.

  5. FLI co-founder Jaan Tallinn’s appointment to the UN High-level Advisory Body on AI.

  6. Our numerous new partnerships. From our open letter with Encode Justice calling for concrete U.S. policy interventions against current and future AI harms, to our partnership with Mithril Security exploring hardware-backed AI governance, we’re proud to have worked with these organizations towards our shared goals of flourishing futures for all.

  7. FLI President Max Tegmark’s talk at TEDAI on keeping AI under control.

  8. Celebrating unsung heroes. With our 2023 Future of Life Award, we were thrilled to shine a light on the work of the visionaries behind the highly impactful films WarGames and The Day After.

  9. Our most scientifically realistic nuclear war simulation yet. The release of our short film demonstrating what would happen in a U.S.-Russia nuclear exchange brought attention to the potential for global catastrophe, shortly before the timely release of Oppenheimer.

  10. The important work resulting from our grantmaking program. From projects to PhDs and postdoctoral research, we couldn’t be more proud of the work our grantmaking has supported this past year.

Exciting Opportunities

FLI Buterin Postdoctoral Fellowships: Our postdoctoral fellowship offers three years of funding and professional development for promising researchers interested in pursuing AI existential safety research. There are no geographic limitations, and we welcome applicants from a diverse range of backgrounds. Apply here by January 2nd at 11:59pm ET.

Foresight Institute Hackathon: From February 5-6, 2024, the Foresight Institute will host an AI Institution Design Hackathon in San Francisco. Open to “leading researchers, funders, and builders in AI, social sciences, economics, mechanism design, game theory, systems thinking, and other relevant areas”, this hackathon invites participants to prototype solutions to AI challenges in pursuit of a positive future with AI. Apply here to attend or become a sponsor.

Updates from FLI

  • FLI’s Dr. Emilia Javorsky spoke on a panel about AI risks at the Evident AI Symposium.

  • Executive Director Anthony Aguirre spoke at Semafor’s Finding Common Ground on AI event.

  • Director of Policy Mark Brakel attended the first-ever Indo-Pacific region state conference on autonomous weapons.

What We’re Reading

  • The EU AI Act effect: As reported in Politico, new polling from the AI Policy Institute found that a vast majority of Americans support the EU AI Act and want similar legislation in the U.S. The poll also found that a majority support holding companies liable when their image-generating models are used to create harmful non-consensual deepfakes of real people.

  • A dire warning: International security expert Michael Klare wrote in The Nation about the catastrophic risks of AI-enabled weaponry, specifically referring to the U.S. Department of Defense’s “Replicator” program.

  • The Vatican on AI: Ahead of the World Day of Peace, Pope Francis released a lengthy statement on balancing the risks and rewards of AI, and called for a binding international treaty on its development and use.

  • What we’re watching: Control AI has released a new video highlighting the growing threat of non-consensual deepfakes - and the need for urgent action to stop them.