Future of Life Institute Newsletter: Building Momentum on Autonomous Weapons
Summarizing recent updates on the push for autonomous weapons regulation, new polling on AI regulation, progress on banning deepfakes, policy updates from around the world, and more.
Welcome to the Future of Life Institute newsletter. Every month, we bring 43,000+ subscribers the latest news on how emerging technologies are transforming our world.
If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?
Today's newsletter is an 11-minute read. Some of what we cover this month:
- Moving towards a treaty on autonomous weapons.
- New public opinion polling on AI regulation.
- The Campaign to Ban Deepfakes continues to expand.
- Governance and policy updates from around the world.
Momentum on Autonomous Weapons
Momentum towards an international treaty restricting autonomous weapons systems has continued to build in the months since the UN General Assembly adopted the first-ever resolution on the issue - with no signs of slowing down any time soon. Some brief updates:
Following the UNGA resolution's adoption, Secretary-General António Guterres will now compile a report to help states commence formal negotiations for a treaty.
→ The Secretary-General is requesting multi-stakeholder input for this report, including from industry and private companies. We encourage you to contribute and to share this call widely within relevant networks. To provide your input, email a Word document (we recommend a maximum of two pages) to [email protected] and [email protected] by May 25th. The Chair's Summary from the Vienna Conference on Autonomous Weapons Systems offers a basis for what you may wish to write. If you'd like any feedback on your input, please feel free to email [email protected].
Last month, Sierra Leone hosted the first-ever ECOWAS regional conference on the issue. Held in vibrant Freetown, the conference brought together West African states and civil society leaders to share perspectives.
→ The conference produced the Freetown Communiqué, in which the attendees and ECOWAS leaders made explicit their support for an international treaty restricting autonomous weapons systems. We're proud to have supported and attended this important meeting.
Later in April, Austria hosted the groundbreaking "Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation" conference at Vienna's Hofburg Palace.
→ 900+ international representatives from civil society, academia, diplomacy, media, and policy gathered to discuss the most pressing issues related to autonomous weapons systems, with a path towards legally binding restrictions becoming ever clearer amidst a palpable sense of momentum. FLI co-founder Jaan Tallinn gave a keynote speech and joined a panel discussion. FLI Executive Director Anthony Aguirre and Futures Program Director Emilia Javorsky also joined panel talks at the conference.
FLI's Autonomous Weapons Lead Anna Hehir recently joined a segment on BBC Newsnight, discussing the escalation risks inherent in the proliferation of autonomous weapons systems.
FLI's Autonomous Weapons Systems Program Lead Anna Hehir joined @BBCNewsnight for a segment on autonomous weapons systems, which also made reference to our "Slaughterbots" short film.
Watch the segment below - hear Anna from the 1:41 mark:
— Future of Life Institute (@FLI_org)
1:54 PM • Apr 24, 2024
If you're interested in a more comprehensive, regular update on autonomous weapons systems and efforts to regulate them, be sure to subscribe to our new Autonomous Weapons Newsletter if you haven't already. We've also recently launched our Autonomous Weapons Watch database, which tracks autonomous weapons being developed, purchased, and/or deployed by militaries around the world.
Our Growing Campaign to Ban Deepfakes
Our multi-stakeholder Campaign to Ban Deepfakes continues to expand, with actor, activist, and author Ashley Judd, as well as Plan International, the National Organization for Women, Equality Now, and the AI Christian Partnership having now joined.
In other deepfakes-related news, the UK is now making it a criminal offense to create nonconsensual explicit deepfake images of someone, with jail time a possibility if the deepfake is shared. While this is a strong step in the right direction, it is unfortunately expected to be difficult to enforce. That's why we're urging measures that go beyond how deepfake technology is used, to meaningfully address the issue across the entire deepfake supply chain.
FLI's US Policy Specialist Hamza Chaudhry also wrote an op-ed for The Hill on the urgent deepfakes issue, discussing the strong potential for "global catastrophe" presented by deepfakes and deepfake technology.
Finally, we've put together the highlight reel below, showcasing many of the recent statements from US lawmakers about deepfakes and demonstrating the incredible (bipartisan) momentum on this issue.
"When the American people can no longer recognize fact from fiction, it will be impossible to have a democracy."
- @SenBlumenthal
Across party lines, US lawmakers are clearly eager to combat the rampant deepfakes issue and deepfake-powered sexual abuse, fraud & disinformation.
— Future of Life Institute (@FLI_org)
5:29 PM • Apr 30, 2024
For updates, relevant news, and more information about the Campaign, be sure to follow @BanDeepfakes on X and Instagram, and please share widely.
Governance and Policy Updates
US Senators Romney, Reed, Moran, and King recently released a bipartisan framework for federal oversight of frontier AI hardware, development, and deployment to address AI's extreme risks, especially those associated with AI's amplification of biological, nuclear, cyber, and chemical risks. We're thrilled to see yet another bipartisan effort here, highlighting the critical need for regulation.
Looking north, Canada is the most recent country to set up its own national AI safety institute, with Prime Minister Justin Trudeau announcing an initial $50 million investment in AI safety.
The US and UK have announced a partnership on AI safety, with plans to share capabilities and build a joint approach to safety testing to help effectively tackle risks from AI.
The UK and Republic of Korea have announced further details about the upcoming AI Seoul Summit. From 21-22 May, states will convene to expand on AI safety-related discussions held at the first-of-its-kind UK AI Safety Summit from this past November as they work to coordinate national AI safety approaches. Given the global nature of risks from AI, meetings like the AI Seoul Summit and the subsequent Paris AI Safety Summit are essential.
Updates from FLI
FLI President Max Tegmark provided the House Bipartisan AI Task Force with a civil society perspective on deepfakes and other large-scale risks associated with AI. Max noted how encouraging it was to speak to many lawmakers who were clearly dedicated to addressing these harms and risks.
FLI's Hamza Chaudhry was quoted in a TIME article about the enormous resources tech companies are pouring into lobbying DC lawmakers to avoid meaningful AI regulation.
A report on the Foresight Institute's FLI-sponsored Existential Hope AI Institution Design Hackathon is out now, featuring details of the nine institution prototypes designed by participants.
Our 2023 nuclear simulation, providing a realistic visualization of what a contemporary nuclear exchange would look like, was screened at the NukeEXPO event in Brussels.
The Campaign to Ban Deepfakes was referenced in a New York Times article, with AI researcher Dr. Oren Etzioni explaining why meaningful regulation is necessary to adequately address deepfakes.
Max Tegmark was interviewed in this new DW News documentary on AI development across the US, China, and Europe.
In addition to her BBC Newsnight appearance, Anna Hehir also joined Axios' 1 big thing podcast to discuss the need for an international treaty regulating autonomous weapons systems.
A reminder that, as part of our work advocating for an international treaty on autonomous weapons, we're seeking a project lead to create demonstrations of autonomous drone swarms. Less than two weeks remain to apply - submit your proposal for this project by May 12.
On the FLI podcast, host Gus Docker interviewed Annie Jacobsen, bestselling author and Pulitzer Prize finalist, about her new book "Nuclear War: A Scenario" and its second-by-second timeline of how nuclear war could happen. PauseAI's Liron Shapira also joined Gus to discuss superintelligence goals and what differentiates AI from other technology.
What We're Reading
Artists speak out against generative AI: Hundreds of singers and songwriters have signed an open letter calling upon the tech industry to "stop devaluing music" by pledging not to develop or deploy generative models which "undermine or replace" human artistry, or deny artists fair compensation.
OpenAI researchers fired: Two researchers at OpenAI, both of whom had at one point worked on AI safety at the company, were fired from OpenAI allegedly for leaking information. This follows months of staffing drama, including the firing - and then re-hiring - of CEO Sam Altman.
What we're listening to: Following the Freetown conference, Sierra Leone's Permanent Representative to the UN joined BBC World's Focus on Africa to discuss West African perspectives on autonomous weapons systems, and the vulnerability of Global South states when faced with the deployment of these systems by the Global North.
Nine popular approaches to regulation: New public opinion polling from the University of Maryland finds (continued) widespread support from American voters for the federal government to regulate AI development and deployment, with bipartisan support focused on the following nine regulatory approaches:
Mandatory government pre-tests of new AI programs
Government audits
Disclosure of training data
Requiring labels on deepfake content
Prohibiting deepfakes in political campaign ads
Prohibiting creation & sharing of nonconsensual pornographic deepfakes
Establishing a federal agency on AI
Creating an international ban on lethal autonomous weapons
Establishing an international agency to monitor and regulate AI
Why this matters: As AI advances, the role of government in ensuring its safe development has been an ongoing discussion, with many well-resourced tech companies pushing back against efforts to meaningfully regulate it. This research, however, shows strong bipartisan support for government intervention and regulation of AI, hopefully strengthening the case for legislators to take swift action to protect citizens.