Future of Life Institute Newsletter: Building Momentum on Autonomous Weapons

Summarizing recent updates on the push for autonomous weapons regulation, new polling on AI regulation, progress on banning deepfakes, policy updates from around the world, and more.

Welcome to the Future of Life Institute newsletter. Every month, we bring 43,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is an 11-minute read. Some of what we cover this month:

  • 📃 Moving towards a treaty on autonomous weapons.

  • šŸ—³ļø New public opinion polling on AI regulation.

  • šŸ—£ļø The Campaign to Ban Deepfakes continues to expand.

  • šŸŒ Governance and policy updates from around the world.

Momentum on Autonomous Weapons

Momentum towards establishing an international treaty restricting autonomous weapons systems has continued to build in the months since the first-ever UN resolution on the issue was adopted by the UN General Assembly - with no signs of slowing down any time soon. Some brief updates:

  • Following the UNGA resolution's adoption, Secretary-General António Guterres will now compile a report to help states commence formal negotiations for a treaty.

    → The Secretary-General is requesting multi-stakeholder input for this report, including from industry and private companies. We encourage you to contribute and to share this call widely within relevant networks. To provide your input, email a Word document (we recommend a maximum of two pages) to [email protected] and [email protected] by May 25th. The Chair's Summary from the Vienna Conference on Autonomous Weapons Systems offers a basis for what you may wish to write. If you'd like feedback on your input, please feel free to email [email protected].

  • Last month, Sierra Leone hosted the first-ever ECOWAS regional conference on the issue. Held in vibrant Freetown, the conference brought together West African states and civil society leaders to share perspectives.

    → The conference produced the Freetown Communiqué, in which attendees and ECOWAS leaders made explicit their support for an international treaty restricting autonomous weapons systems. We're proud to have supported and attended this important meeting.

  • Later in April, Austria hosted the groundbreaking “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation” conference at Vienna's Hofburg Palace.

    → 900+ international representatives from civil society, academia, diplomacy, media, and policy gathered to discuss the most pressing issues related to autonomous weapons systems, with a path towards legally binding restrictions becoming ever clearer amidst palpable momentum. FLI co-founder Jaan Tallinn gave a keynote speech and joined a panel discussion, while FLI Executive Director Anthony Aguirre and Futures Program Director Emilia Javorsky also took part in panel discussions at the conference.

  • FLI's Autonomous Weapons Lead Anna Hehir recently appeared in a segment on BBC Newsnight, discussing the escalation risks inherent in the proliferation of autonomous weapons systems.

  • If you're interested in more comprehensive, regular updates on autonomous weapons systems and efforts to regulate them, be sure to subscribe to our new Autonomous Weapons Newsletter if you haven't already. We've also recently launched our Autonomous Weapons Watch database, which tracks autonomous weapons being developed, purchased, and/or deployed by militaries around the world.

Our Growing Campaign to Ban Deepfakes

Our multi-stakeholder Campaign to Ban Deepfakes continues to expand: actor, activist, and author Ashley Judd has now joined, along with Plan International, the National Organization for Women, Equality Now, and the AI Christian Partnership.

In other deepfakes-related news, the UK is making it a criminal offense to create nonconsensual explicit deepfake images of someone, with jail time a possibility if the deepfake is shared. While this is a strong step in the right direction, it is unfortunately expected to be difficult to enforce. That's why we're urging measures that go beyond regulating how deepfake technology is used, to meaningfully address the issue across the entire deepfake supply chain.

FLI's US Policy Specialist Hamza Chaudhry also wrote an op-ed for The Hill on the urgent issue of deepfakes, discussing the strong potential for “global catastrophe” presented by deepfake technology.

Finally, we've put together the highlight reel below, showcasing many of the recent statements from US lawmakers about deepfakes and demonstrating the incredible bipartisan momentum on this issue.

For updates, relevant news, and more information about the Campaign, be sure to follow @BanDeepfakes on X and Instagram, and please share widely.

Governance and Policy Updates

  • US Senators Romney, Reed, Moran, and King recently released a bipartisan framework for federal oversight of frontier AI hardware, development, and deployment, aimed at addressing extreme risks from AI, especially its amplification of biological, nuclear, cyber, and chemical threats. We're thrilled to see yet another bipartisan effort here, highlighting the critical need for regulation.

  • Looking north, Canada is the most recent country to set up its own national AI safety institute, with Prime Minister Justin Trudeau announcing an initial $50 million investment in AI safety.

  • The US and UK have announced a partnership on AI safety, with plans to share capabilities and build a joint approach to safety testing to help effectively tackle risks from AI.

  • The UK and Republic of Korea have announced further details about the upcoming AI Seoul Summit. On 21-22 May, states will convene to expand on the AI safety discussions held at the first-of-its-kind UK AI Safety Summit this past November, as they work to coordinate national approaches to AI safety. Given the global nature of risks from AI, meetings like the AI Seoul Summit and the subsequent Paris AI Safety Summit are essential.

Updates from FLI

  • FLI President Max Tegmark provided the House Bipartisan AI Task Force with a civil society perspective on deepfakes and other large-scale risks associated with AI. Max noted how encouraging it was to speak to many lawmakers who were clearly dedicated to addressing these harms and risks.

  • FLI's Hamza Chaudhry was quoted in a TIME article about the enormous resources tech companies are spending to lobby DC lawmakers against meaningful AI regulation.

  • A report on the Foresight Institute's FLI-sponsored Existential Hope AI Institution Design Hackathon is out now, featuring details of the nine institution prototypes designed by participants.

  • Our 2023 nuclear simulation, providing a realistic visualization of what a contemporary nuclear exchange would look like, was screened at the NukeEXPO event in Brussels.

  • The Campaign to Ban Deepfakes was referenced in a New York Times article, with AI researcher Dr. Oren Etzioni explaining why meaningful regulation is necessary to adequately address deepfakes.

  • Max Tegmark was interviewed in this new DW News documentary on AI development across the US, China, and Europe.

  • In addition to her BBC Newsnight appearance, Anna Hehir also joined Axios' 1 big thing podcast to discuss the need for an international treaty regulating autonomous weapons systems.

  • A reminder that, as part of our work advocating for an international treaty on autonomous weapons, we're seeking a project lead to create demonstrations of autonomous drone swarms. Less than two weeks remain to apply - submit your proposal for this project by May 12.

  • On the FLI podcast, host Gus Docker interviewed Annie Jacobsen, bestselling author and Pulitzer Prize finalist, about her new book “Nuclear War: A Scenario” and its second-by-second timeline of how a nuclear war could unfold. PauseAI's Liron Shapira also joined Gus to discuss superintelligence goals and what differentiates AI from other technologies.

What Weā€™re Reading

  • Artists speak out against generative AI: Hundreds of singers and songwriters have signed an open letter calling on the tech industry to “stop devaluing music” by pledging not to develop or deploy generative models which “undermine or replace” human artistry, or deny artists fair compensation.

  • OpenAI researchers fired: Two OpenAI researchers, both of whom had at one point worked on AI safety at the company, were fired, allegedly for leaking information. This follows months of staffing drama, including the firing - and then re-hiring - of CEO Sam Altman.

  • What we're listening to: Following the Freetown conference, Sierra Leone's Permanent Representative to the UN joined BBC World's Focus on Africa to discuss West African perspectives on autonomous weapons systems, and the vulnerability of Global South states when faced with the deployment of these systems by the Global North.

New Research: What Americans want from AI regulation

Nine popular approaches to regulation: New public opinion polling from the University of Maryland finds continued, widespread support among American voters for federal regulation of AI development and deployment, with bipartisan support focused on the following nine regulatory approaches:

  1. Mandatory government pre-tests of new AI programs

  2. Government audits

  3. Disclosure of training data

  4. Requiring labels on deepfake content

  5. Prohibiting deepfakes in political campaign ads

  6. Prohibiting creation & sharing of nonconsensual pornographic deepfakes

  7. Establishing a federal agency on AI

  8. Creating an international ban on lethal autonomous weapons

  9. Establishing an international agency to monitor and regulate AI

Why this matters: As AI advances, the government's role in ensuring its safe development remains the subject of ongoing debate, with many well-resourced tech companies pushing back against efforts to meaningfully regulate the technology. This research, however, shows strong bipartisan support for government intervention and regulation of AI, strengthening the case for legislators to take swift action to protect citizens.