Future of Life Institute Newsletter: Everyone's (Finally) Talking About AI Safety

Updates on AI regulation, a historic UN resolution, our new open letter on AI licensing, examining the Terms of Use of leading AI labs, and more. Also, just two weeks left to apply for FLI PhD fellowships!

Welcome to the Future of Life Institute newsletter. Every month, we bring 41,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is a 9-minute read. We cover:

  • šŸš¦ Updates from the AI regulation space

  • šŸŒ A win for autonomous weapons regulation at the UNGA

  • šŸ“ Our joint open letter with Encode Justice

  • šŸ« Two weeks left to apply for our Buterin PhD fellowship!

A Big Month for AI Regulation

An ever-growing number of voices from civil society, academia, the tech industry, and beyond are calling, with increasing urgency, for governments to step in and regulate AI development.

The past month saw numerous developments, both in advocacy efforts and in promising steps towards AI regulation:

  • UN Secretary-General António Guterres launched the UN’s multi-stakeholder High-level Advisory Body on AI, composed of 39 experts from around the world.

    • FLI board member Jaan Tallinn, who also co-founded FLI, Skype, and Kazaa, has been appointed as a member.

  • In a major first step towards regulation, U.S. President Joe Biden issued an executive order on AI, directing actions intended to mitigate present and future harms from AI.

  • On the same day, the G7 released its international guiding principles on AI, along with a voluntary code of conduct for AI developers.

  • U.S. Senate Majority Leader Chuck Schumer has convened two more AI Insight Forums. At the October 24 forum, FLI President Max Tegmark spoke about the opportunity for AI innovation and AI safety to coexist, but only through government regulation and oversight.

  • Several campaigns generated discussion in advance of this week’s UK AI Safety Summit.

UN Makes History on Autonomous Weapons

Earlier this month at the UN General Assembly, Austria tabled the first ever UN resolution on autonomous weapons systems (AWS). This resolution draws attention to the many moral, ethical, and security-related concerns associated with AWS, and establishes international support for the creation of a legally binding instrument. With over 100 countries already supporting it, a vote by the UNGA First Committee to adopt the resolution is expected this week, followed by a plenary session vote in December.

The resolution follows a (rare) joint call from UN Secretary-General António Guterres and International Committee of the Red Cross President Mirjana Spoljaric, urging states to establish new international rules on such systems “to protect humanity”.

For more on autonomous weapons systems, see our dedicated website on the topic.

Encode Justice X Future of Life Institute

Partnering with Encode Justice, we’ve released an open letter calling on American lawmakers to address both present harms and emerging threats from AI by implementing a tiered federal licensing regime, similar to what Senator Blumenthal and Senator Hawley have proposed.

We also jointly recommend the following:

  1. The creation of a federal oversight body to administer this licensing regime.

  2. U.S. leadership in intergovernmental standard-setting discussions, facilitating global buy-in.

  3. Centering input from civil society, academia, and the public in AI policymaking.


Apply Soon for our Vitalik Buterin PhD and Postdoctoral Fellowships!

Applications are closing soon for our PhD fellowships focused on AI existential safety research; postdoctoral fellowship applications close January 2. Fellowships are global and open to all, regardless of nationality or background; we are seeking a diverse applicant pool. All Fellows will receive applicable tuition and fees, as well as a stipend and a research/travel fund. Please share this excellent opportunity widely.

Current or future PhD student intending to research AI existential safety? The deadline for PhD fellowship applications is November 16, 2023 at 11:59 pm ET.

Current or future postdoc working on AI existential safety research? The deadline for postdoctoral fellowship applications is January 2, 2024 at 11:59 pm ET.

Updates from FLI

  • As October was Cybersecurity Awareness Month, FLI’s Hamza Chaudhry & Landon Klein published a review of the cybersecurity risks presented by AI, along with several policy recommendations for U.S. lawmakers.

  • FLI Executive Director Anthony Aguirre published a new paper arguing that we should choose not to develop “superhuman” general-purpose AI systems.

  • FLI President Max Tegmark was named to Insider’s AI 100 2023 list.

  • Max was quoted in The Guardian, speaking about the existential risk posed by the current AI “race to the bottom”.

  • The final three episodes of our ‘Imagine A World’ podcast are out now, available on YouTube or your favourite podcast player.

  • FLI’s Dr. Emilia Javorsky was interviewed on the Existential Hope podcast about the future of AI and its intersections with bioengineering.

  • On the FLI podcast, host Gus Docker spoke to computer scientist and physicist Steve Omohundro about his new paper, co-authored with FLI President Max Tegmark, on building provably safe AGI. Foundation for American Innovation senior economist Samuel Hammond also joined Gus for a conversation about AGI’s institution-disrupting potential.

New Research: Mapping Terms of Use Conditions

Can we rely on information-sharing?: To evaluate the liability of third-party companies deploying general-purpose AI systems in case of harm, FLI’s EU Policy Fellow Alexandra Tsalidis examined the Terms of Use of five major general-purpose AI developers. She found that they “fail to provide downstream deployers with any legally enforceable assurances about the quality, reliability, and accuracy of their products or services”.

Why this matters: Third-party companies deploying general-purpose AI systems could unknowingly be exposed to liability for harms those systems cause. In the context of the EU AI Act, this necessitates strong obligations for the developers of these systems, not just for those deploying them.

What Weā€™re Reading

  • AI pioneer Yoshua Bengio wrote in Maclean’s of the worrying threat posed by autonomous weapons systems and other forms of AI in military scenarios.

  • The Maniac, a new book from Benjamin Labatut, explores the legacy of physicist John von Neumann, who helped create both the atomic bomb and AI.

  • Also from Bengio, writing with Daniel Privitera: a roadmap for AI progress without sacrificing safety or democracy.

  • Conjecture CEO Connor Leahy participated in (and won) a Cambridge Union debate, arguing that AI poses an existential threat.

Hindsight is 20/20


On October 27th, 1962, Vasili Arkhipov prevented nuclear war.

Serving on a Soviet submarine near Cuba at the height of the Cuban missile crisis, Arkhipov vetoed the submarine captain’s decision to launch a nuclear torpedo against an American destroyer the crew perceived to be initiating an attack against them.

Unbeknownst to the Soviet crew onboard, the underwater detonations were non-lethal depth charges intended to force the submarine to surface. If the torpedo had been launched, a catastrophic nuclear war almost certainly would have followed.

For his bravery, Arkhipov was posthumously honoured in 2017 with the first Future of Life Award.

Does this story sound familiar? This terrifying near-disaster is just one of many similar examples.