Future of Life Institute Newsletter: Building Momentum on Autonomous Weapons

Summarizing recent updates on the push for autonomous weapons regulation, new polling on AI regulation, progress on banning deepfakes, policy updates from around the world, and more.

Welcome to the Future of Life Institute newsletter. Every month, we bring 43,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is an 11-minute read. Some of what we cover this month:

  • 📃 Moving towards a treaty on autonomous weapons.

  • 🗳️ New public opinion polling on AI regulation.

  • 🗣️ The Campaign to Ban Deepfakes continues to expand.

  • 🌐 Governance and policy updates from around the world.

Momentum on Autonomous Weapons

Momentum towards establishing an international treaty restricting autonomous weapons systems has continued to build in the months since the first-ever UN resolution on the issue was adopted by the UN General Assembly, with no signs of slowing any time soon. Some brief updates:

  • Following the UNGA resolution’s adoption, Secretary General António Guterres will now compile a report to help states commence formal negotiations for a treaty.

    → The Secretary General is requesting multi-stakeholder input for this report, including from industry and private companies. We encourage you to contribute and to share this call widely within relevant networks. To provide your input, email a Word document (we recommend a maximum of two pages) to [email protected] and [email protected] by May 25th. The Chair’s Summary from the Vienna Conference on Autonomous Weapons Systems offers a basis for what you may wish to write. If you’d like any feedback on your input, please feel free to email [email protected].

  • Last month, Sierra Leone hosted the first-ever ECOWAS regional conference on the issue. Held in vibrant Freetown, the conference brought together West African states and civil society leaders to share perspectives.

    → The conference produced the Freetown Communiqué, in which the attendees and ECOWAS leaders made explicit their support for an international treaty restricting autonomous weapons systems. We’re proud to have supported and attended this important meeting.

  • Later in April, Austria hosted the groundbreaking “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation” conference at Vienna’s Hofburg Palace.

    → 900+ international representatives from civil society, academia, diplomacy, media, and policy gathered to discuss the most pressing issues related to autonomous weapons systems, with a path towards legally binding restrictions becoming ever clearer amidst a palpable sense of momentum. FLI co-founder Jaan Tallinn gave a keynote speech and joined a panel discussion. FLI Executive Director Anthony Aguirre and Futures Program Director Emilia Javorsky also joined panel discussions at the conference.

  • FLI’s Autonomous Weapons Lead Anna Hehir recently appeared in a segment on BBC Newsnight, discussing the escalation risks inherent in the proliferation of autonomous weapons systems.

  • If you’re interested in a more comprehensive, regular update on autonomous weapons systems and efforts to regulate them, be sure to subscribe to our new Autonomous Weapons Newsletter if you haven’t already. We’ve also recently launched our Autonomous Weapons Watch database, which tracks developments in autonomous weapons being developed, purchased, and/or deployed by militaries around the world.

Our Growing Campaign to Ban Deepfakes

Our multi-stakeholder Campaign to Ban Deepfakes continues to expand, with actor, activist, and author Ashley Judd, as well as Plan International, the National Organization for Women, Equality Now, and the AI Christian Partnership having now joined.

In other deepfakes-related news, the UK is now making it a criminal offense to create fake nonconsensual explicit images of someone, with jail time a possibility if the deepfake is shared. While this is a strong step in the right direction, it is unfortunately expected to be difficult to enforce. That’s why we’re urging measures that go beyond how deepfake technology is used, to meaningfully address the issue across the entire deepfake supply chain.

FLI’s US Policy Specialist Hamza Chaudhry also wrote an op-ed for The Hill on the urgency of the deepfake issue, discussing the strong potential for “global catastrophe” presented by deepfakes and deepfake technology.

Finally, we’ve put together the highlight reel below, which showcases many of the recent statements from US lawmakers about deepfakes, demonstrating the incredible (bipartisan) momentum on this issue.

For updates, relevant news, and more information about the Campaign, be sure to follow @BanDeepfakes on X and Instagram, and please share widely.

Governance and Policy Updates

  • US Senators Romney, Reed, Moran, and King recently released a bipartisan framework for federal oversight of frontier AI hardware, development, and deployment to address AI’s extreme risks, especially those associated with AI’s amplification of biological, nuclear, cyber, and chemical risks. We’re thrilled to see yet another bipartisan effort here, highlighting the critical need for regulation.

  • Looking north, Canada is the most recent country to set up its own national AI safety institute, with Prime Minister Justin Trudeau announcing an initial $50 million investment in AI safety.

  • The US and UK have announced a partnership on AI safety, with plans to share capabilities and build a joint approach to safety testing to help effectively tackle risks from AI.

  • The UK and Republic of Korea have announced further details about the upcoming AI Seoul Summit. From 21-22 May, states will convene to expand on AI safety-related discussions held at the first-of-its-kind UK AI Safety Summit from this past November as they work to coordinate national AI safety approaches. Given the global nature of risks from AI, meetings like the AI Seoul Summit and the subsequent Paris AI Safety Summit are essential.

Updates from FLI

  • FLI President Max Tegmark provided the House Bipartisan AI Task Force with a civil society perspective on deepfakes and other large-scale risks associated with AI. Max noted how encouraging it was to speak to many lawmakers who were clearly dedicated to addressing these harms and risks.

  • FLI’s Hamza Chaudhry was quoted in a TIME article about the vast resources tech companies are expending on lobbying DC lawmakers to try to avoid meaningful AI regulation.

  • A report on the Foresight Institute’s FLI-sponsored Existential Hope AI Institution Design Hackathon is out now, featuring details of the nine institution prototypes designed by participants.

  • Our 2023 nuclear simulation, providing a realistic visualization of what a contemporary nuclear exchange would look like, was screened at the NukeEXPO event in Brussels.

  • The Campaign to Ban Deepfakes was referenced in a New York Times article, with AI researcher Dr. Oren Etzioni explaining why meaningful regulation is necessary to adequately address deepfakes.

  • Max Tegmark was interviewed in this new DW News documentary on AI development across the US, China, and Europe.

  • In addition to her BBC Newsnight appearance, Anna Hehir also joined Axios’ 1 big thing podcast to discuss the need for an international treaty regulating autonomous weapons systems.

  • A reminder that, as part of our work advocating for an international treaty on autonomous weapons, we’re seeking a project lead to create demonstrations of autonomous drone swarms. Less than two weeks are left to apply - submit your proposal for this project by May 12.

  • On the FLI podcast, host Gus Docker interviewed Annie Jacobsen, bestselling author and Pulitzer Prize finalist, about her new book “Nuclear War: A Scenario” and its second-by-second timeline of how nuclear war could happen. PauseAI’s Liron Shapira also joined Gus to discuss superintelligence goals and what differentiates AI from other technology.

What We’re Reading

  • Artists speak out against generative AI: Hundreds of singers and songwriters have signed an open letter calling upon the tech industry to “stop devaluing music” by pledging to not develop or deploy generative models which “undermine or replace” human artistry, or deny artists fair compensation.

  • OpenAI researchers fired: Two researchers at OpenAI, both of whom had at one point worked on AI safety at the company, were fired from OpenAI allegedly for leaking information. This follows months of staffing drama, including the firing - and then re-hiring - of CEO Sam Altman.

  • What we’re listening to: Following the Freetown conference, Sierra Leone’s Permanent Representative to the UN joined BBC World’s Focus on Africa to discuss West African perspectives on autonomous weapons systems, and the vulnerability of Global South states when faced with the deployment of these systems by the Global North.

New Research: What Americans want from AI regulation

Nine popular approaches to regulation: New public opinion polling from the University of Maryland finds (continued) widespread support from American voters for the federal government to regulate AI development and deployment, with bipartisan support focused on the following nine regulatory approaches:

  1. Mandatory government pre-tests of new AI programs

  2. Government audits

  3. Disclosure of training data

  4. Requiring labels on deepfake content

  5. Prohibiting deepfakes in political campaign ads

  6. Prohibiting creation & sharing of nonconsensual pornographic deepfakes

  7. Establishing a federal agency on AI

  8. Creating an international ban on lethal autonomous weapons

  9. Establishing an international agency to monitor and regulate AI

Why this matters: With advancements in AI, the role of government in ensuring its safe development has been an ongoing discussion, with many well-resourced tech companies pushing back against efforts to meaningfully regulate it. This research, however, shows strong bipartisan support for government intervention and regulation of AI, hopefully furthering the case for legislators to take swift action to protect citizens.