Future of Life Institute Newsletter: Save the EU AI Act šŸ‡ŖšŸ‡ŗ

Defending the EU AI Act against Big Tech lobbying; the 2023 Future of Life Award winners; our new partnership on hardware-backed AI governance; and more.

Welcome to the Future of Life Institute newsletter. Every month, we bring 41,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is an 11-minute read. We cover:

  • šŸ‡ŖšŸ‡ŗ Corporate lobbying attempts to weaken the EU AI Act

  • šŸ† Announcing our 2023 Future of Life Award winners

  • šŸ¤ Our new partnership on hardware-backed AI governance

  • šŸ“ŗ Screening "Artificial Escalation" in Congress

#SaveTheEUAIAct

Earlier this month, EU AI Act trilogue negotiations came to a halt after France, Germany, and Italy unexpectedly pushed to exempt powerful foundation models from regulation under the Act. This would shift an immense compliance and liability burden away from the developers of powerful AI systems and onto the smaller European businesses deploying them.

This position opposes that of the European Parliament; many fellow EU countries in the Council; European businesses (including the 45,000-member-strong European DIGITAL SME Alliance); countless AI experts; and over 80% of the European public polled.

The anti-regulation position adopted by these three countries reflects two years of intense lobbying by major American tech companies seeking to hollow out the AI Act in service of their profits and market dominance, joined by German and French AI corporations hoping to avoid regulation themselves.

We urge European lawmakers to include foundation model regulation in the AI Act, prioritizing safety, European innovation, and the democratic process over corporate profits.

Below is a non-exhaustive list of resources and media coverage on this critical period of EU AI Act negotiations:

  • For a bi-weekly update on AI Act proceedings, be sure to subscribe to our EU AI Act newsletter.

  • New polling from Control AI and YouGov shows widespread support for robust AI regulation among the European public.

  • FLI's Risto Uuk and Jaan Tallinn, co-founder of Skype, Kazaa, and FLI, explain how neglecting to regulate foundation models will hurt startups.

  • This X thread summarizes some of the messages shared by the many voices calling for foundation model regulation.

  • Check out our webpage, as quoted in Politico, explaining the need for robust regulation in the Act, and this table comparing key AI Act proposals (preview below):

Drum roll please… our 2023 Future of Life Award Winners!

Clockwise from top left: Brandon Stoddard, Nicholas Meyer, Edward Hume, Lawrence Lasker, Walter F. Parkes.

We're thrilled to announce the recipients of our 2023 Future of Life Award!

For the past seven years, the annual Future of Life Award has celebrated under-recognized individuals whose contributions have helped make the world significantly better than it otherwise would have been.

This year, the Award honours five visionaries for their work on two highly impactful films: Walter F. Parkes and Lawrence Lasker, screenwriters behind WarGames, and Brandon Stoddard, Edward Hume, and Nicholas Meyer, the filmmakers behind The Day After.

Both released in 1983 amidst the Cold War, these two films - and the creatives behind them - helped make the world a safer place, bringing greater awareness to the threat of nuclear war and driving preventative action from world leaders.

"These films and their creators showcase the profound role that storytellers can play in tackling some of our world's most intractable and extreme threats. They serve as a leading example of how artists can help make the world safer by examining urgent issues in compelling and evocative ways, and in turn inspire our leaders to step up and take action."

FLI's Dr. Emilia Javorsky

Learn more about the 2023 Future of Life Award winners in the short video below:

And listen to Neil deGrasse Tyson discuss the two films with winners Lawrence Lasker, Nicholas Meyer, and Walter F. Parkes on his StarTalk podcast.

Our New Partnership with Mithril Security

We've partnered with Mithril Security to explore how AI systems' transparency, traceability, and confidentiality can be enhanced through hardware-backed AI governance tooling.

Over the course of the partnership, we'll continue to develop, evaluate, and share frameworks for hardware-backed governance, with the aim of encouraging chipmakers and policymakers alike to adopt these measures.

In this video, Mithril presents the first proof-of-concept from this partnership, demonstrating confidential inference for secure, controlled AI consumption.
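For readers curious what "confidential inference" looks like in practice, here is a minimal, purely illustrative sketch of the core idea: a client only sends its data to an inference server after verifying a hardware attestation report showing that the server is running approved code inside a secure enclave. All names below (AttestationReport, fetch_attestation, the measurement value) are hypothetical stand-ins for illustration, not Mithril's actual API.

```python
# Illustrative sketch only: the client trusts a hardware attestation report,
# not the server operator's promises, before sending any data for inference.
# All identifiers here are hypothetical placeholders.

import hashlib
from dataclasses import dataclass

# Expected hash ("measurement") of the approved enclave code, published
# by the model provider out of band. Placeholder value for illustration.
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-inference-enclave-v1").hexdigest()


@dataclass
class AttestationReport:
    """Simplified stand-in for a hardware-signed attestation quote."""
    enclave_measurement: str   # hash of the code loaded in the enclave
    signature_valid: bool      # whether the hardware vendor's signature checks out


def fetch_attestation() -> AttestationReport:
    """Hypothetical call to the inference server's attestation endpoint."""
    return AttestationReport(EXPECTED_MEASUREMENT, signature_valid=True)


def confidential_inference(prompt: str) -> str:
    report = fetch_attestation()
    # Refuse to send data unless the report is signed by genuine hardware
    # and the enclave is running exactly the approved code.
    if not report.signature_valid or report.enclave_measurement != EXPECTED_MEASUREMENT:
        raise RuntimeError("Attestation failed: refusing to send data.")
    # In a real system the prompt would now be encrypted to a key held only
    # inside the attested enclave; here we simply simulate a response.
    return f"[enclave response to: {prompt!r}]"


if __name__ == "__main__":
    print(confidential_inference("Summarise the EU AI Act trilogue."))
```

In a real deployment the attestation would be signed by the chip vendor and verified against its certificate chain; the point of the sketch is simply that the hardware, rather than the operator, is what the client ends up trusting.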

Discussing "Artificial Escalation" in DC

On November 15, we were honoured to host Sen. Ed Markey and Rep. Ted Lieu in the U.S. Capitol for a screening of our short film, "Artificial Escalation". The screening coincided with a reported U.S.-China agreement to pursue further talks on restricting the integration of AI into nuclear command, control, and communications (NC3), and with the U.S. State Department's release of its Declaration on Responsible Military Use of AI and Autonomy.

Picking up on the film's themes around integrating AI into NC3, FLI's Hamza Chaudhry then moderated a discussion with Sen. Markey and Rep. Lieu about their "Block Nuclear Launch by Autonomous AI" bill.

Our expert panellists - FLI's Dr. Emilia Javorsky and nuclear security expert Carl Robichaud - also joined Hamza for a broader discussion of AI-nuclear risks.

āž”ļø For more on our work in this space, visit our dedicated webpage. Be sure to also read our new report detailing how AI can increase the risks posed by nuclear weapons, including recommended first steps U.S. lawmakers can take toward risk mitigation.

Left to right: Hamza Chaudhry, Rep. Ted Lieu, Sen. Ed Markey

Left to right: Hamza Chaudhry, Dr. Emilia Javorsky, Carl Robichaud

Updates from FLI

  • Earlier this month, we released our AI Governance Scorecard and Safety Standards Policy, evaluating numerous proposals for AI governance and proposing a framework for mitigating AI's risks while reaping its benefits.

  • FLI President Max Tegmark spoke at TEDAI about AGI, and the importance of (and pathway to) keeping AI under human control.

  • Max spoke to TechCrunch about the threat regulatory capture presents to the EU AI Act's effectiveness.

  • Max was also quoted in The Guardian, the Washington Post, and Axios, reflecting on this month's UK AI Safety Summit.

  • Executive Director Anthony Aguirre spoke on a Reuters NEXT panel about the risks of rapid, unregulated AI development.

  • FLI's Anna Hehir spoke to WIRED about the progress being made on restricting autonomous weapons systems.

  • FLI's Dr. Emilia Javorsky wrote an op-ed in The Bulletin of the Atomic Scientists, covering the impact of the storytellers honoured with the 2023 Future of Life Award.

  • On the FLI podcast, host Gus Docker spoke to Dan Hendrycks from the Center for AI Safety and xAI about catastrophic AI risks. FLI's own Mark Brakel, Director of Policy, joined Gus for a discussion about the UK AI Safety Summit and the future of AI policy around the world.

New Research: Persona Modulation to Jailbreak LLMs

Manipulating an LLM's "personality": FLI Buterin Fellow Stephen Casper et al. have published a paper on a new automated, plain-English jailbreak attack that bad actors could employ against state-of-the-art LLMs to elicit harmful text. The authors show how, using only plain-text instructions, LLMs can be steered into adopting personas that comply with requests they are supposed to be built to reject.

Why this matters: As researcher Soroush Pour outlines, jailbreaks like this expose vulnerabilities in widely accessible LLMs. These vulnerabilities open the door to harmful misuse, from LLMs providing instructions for producing illegal drugs to potentially aiding the creation of bioweapons. This highlights the need for greater investment in AI safety research, and for meaningful regulation of such technology.

Postdoc interested in similar research? While our PhD fellowship applications have now closed, our postdoctoral fellowship applications are open until January 2, 2024. Apply now!

What Weā€™re Reading

  • Ex-Estonian President urges, "Don't let AI firms put profits before people": Former Estonian President Kersti Kaljulaid wrote an op-ed in TIME urging EU lawmakers to resist Big Tech attempts to weaken the EU AI Act.

  • A serious misalignment: The Bulletin of the Atomic Scientists unpacks the concerning ability of U.S. military officers to approve AI-enabled military tech… that they don't necessarily trust.

  • Reaping rewards, preventing catastrophe: The Nuclear Threat Initiative has released a new report on the convergence of risks associated with AI and life sciences such as biotech, with recommendations on how to prevent a related catastrophe.

  • What we're watching: At TEDAI, Liv Boeree spoke about the dangers of "excessive competition" in AI.

Hindsight is 20/20

"Ilyushin Il-78, Tupolev Tu-160, Micoyan&Gurevich MiG-31" by Dmitry Terekhov is licensed under CC BY-SA 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/2.0/?ref=openverse.Ā 

This month marks 40 years since Able Archer 83, the NATO military exercise that nearly sparked a nuclear catastrophe.

In the wake of the shoot-down of Korean Air Lines Flight 007, and with the U.S. deployment of powerful Pershing II missiles imminent, U.S.-USSR tensions were high. So when Soviet intelligence picked up NATO movements suggesting an imminent nuclear strike, Soviet forces were placed on high alert - including a 30-minute response time for a full nuclear launch.

However, Soviet military leaders were unaware that the suspicious movements were only the result of a NATO exercise, not actual pre-attack preparations. Thankfully, the USSR lowered alert levels after the four-day exercise ended, and despite the severe potential for misunderstanding, catastrophe was avoided.

But what could have happened if an escalatory response had spiraled further, bringing NATO forces or the USSR closer to the brink of a launch?

Or if RYaN, the USSR's intelligence analysis software, had been integrated with nuclear command, control, and communications (NC3) and empowered to decide autonomously, based on its own analysis of the NATO movements, whether to launch a strike?

As we explore every month, this near-disaster is just one of many similar examples from which we must learn - this one with particular relevance to our advocacy against integrating AI into NC3.