Future of Life Institute Newsletter: Save the EU AI Act 🇪🇺

Defending the EU AI Act against Big Tech lobbying; the 2023 Future of Life Award winners; our new partnership on hardware-backed AI governance; and more.

Welcome to the Future of Life Institute newsletter. Every month, we bring 41,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is an 11-minute read. We cover:

  • 🇪🇺 Corporate lobbying attempts to weaken the EU AI Act

  • 🏆 Announcing our 2023 Future of Life Award winners

  • 🤝 Our new partnership on hardware-backed AI governance

  • 📺 Screening “Artificial Escalation” in Congress

#SaveTheEUAIAct

Earlier this month, EU AI Act trilogue negotiations came to a halt when France, Germany, and Italy unexpectedly pushed to exempt powerful foundation models from regulation under the Act. Such an exemption would shift an immense compliance and liability burden away from the companies developing powerful AI systems and onto the smaller European businesses that deploy them.

This position puts the three countries at odds with the European Parliament; many of their fellow EU member states in the Council; European businesses (including the 45,000-member-strong European DIGITAL SME Alliance); countless AI experts; and over 80% of the European public, according to recent polling.

The anti-regulation position adopted by these three countries reflects two years of intense lobbying by major American tech companies seeking to hollow out the AI Act in service of their profits and market dominance, as well as by French and German AI companies hoping to avoid regulation themselves.

We urge European lawmakers to include foundation model regulation in the AI Act, prioritizing safety, European innovation, and the democratic process over corporate profits.

Below is a non-exhaustive list of resources and media coverage on this critical period of EU AI Act negotiations:

  • For a bi-weekly update on AI Act proceedings, be sure to subscribe to our EU AI Act newsletter. 

  • New polling from Control AI and YouGov shows widespread support for robust AI regulation among the European public.

  • FLI’s Risto Uuk and Jaan Tallinn (co-founder of Skype, Kazaa, and FLI) explain how neglecting to regulate foundation models will hurt startups.

  • This X thread summarizes some of the messages shared by the many calling for foundation model regulation.

  • Check out our webpage, as quoted in Politico, explaining the need for robust regulation in the Act, along with this table comparing key AI Act proposals (preview below):

Drum roll please… our 2023 Future of Life Award Winners!

Clockwise from top left: Brandon Stoddard, Nicholas Meyer, Edward Hume, Lawrence Lasker, Walter F. Parkes.

We’re thrilled to announce the recipients of our 2023 Future of Life Award!

For the past seven years, the annual Future of Life Award has celebrated under-recognized individuals whose contributions have helped make the world significantly better than it otherwise would have been.

This year, the Award honours five visionaries for their work on two highly impactful films: Walter F. Parkes and Lawrence Lasker, screenwriters behind WarGames, and Brandon Stoddard, Edward Hume, and Nicholas Meyer, the filmmakers behind The Day After.

Both released in 1983 amidst the Cold War, these two films - and the creatives behind them - helped make the world a safer place, bringing greater awareness to the threat of nuclear war and driving preventative action from world leaders.

"These films and their creators showcase the profound role that storytellers can play in tackling some of our world's most intractable and extreme threats. They serve as a leading example of how artists can help make the world safer by examining urgent issues in compelling and evocative ways, and in turn inspire our leaders to step up and take action."

FLI’s Dr. Emilia Javorsky

Learn more about the 2023 Future of Life Award winners in the short video below:

And listen to Neil deGrasse Tyson discuss the two films with winners Lawrence Lasker, Nicholas Meyer, and Walter F. Parkes on his StarTalk podcast.

Our New Partnership with Mithril Security

We've partnered with Mithril Security to explore how AI systems' transparency, traceability, and confidentiality can be enhanced through hardware-backed AI governance tooling.

Over the course of the partnership, we'll continue to develop, evaluate, and share frameworks for hardware-backed governance, in the hope of encouraging chipmakers and policymakers alike to adopt these measures.

In this video, Mithril presents the first proof-of-concept from this partnership, demonstrating confidential inference for secure, controlled AI consumption.

Discussing “Artificial Escalation” in DC

On November 15, we were honoured to host Sen. Ed Markey and Rep. Ted Lieu for a screening of our short film, “Artificial Escalation”, in the U.S. Capitol. The screening coincided with a reported U.S.-China agreement to pursue further talks on restricting the integration of AI into nuclear command, control, and communications (NC3), and with the U.S. State Department’s release of its Declaration on Responsible Military Use of AI and Autonomy.

Picking up on the film’s themes of AI integration into NC3, FLI’s Hamza Chaudhry then moderated a discussion with Sen. Markey and Rep. Lieu about their “Block Nuclear Launch by Autonomous AI” bill.

Our expert panellists - FLI’s Dr. Emilia Javorsky and nuclear security expert Carl Robichaud - also joined Hamza for a broader discussion of AI-nuclear risks.

➡️ For more on our work in this space, visit our dedicated webpage. Be sure to also read our new report detailing how AI can increase the risks of nuclear weapons, including recommended first steps U.S. lawmakers can take toward mitigating those risks.

Left to right: Hamza Chaudhry, Rep. Ted Lieu, Sen. Ed Markey

Left to right: Hamza Chaudhry, Dr. Emilia Javorsky, Carl Robichaud

Updates from FLI

  • Earlier this month, we released our AI Governance Scorecard and Safety Standards Policy, evaluating numerous proposals for AI governance and proposing a framework that balances mitigating AI’s risks with reaping its benefits.

  • FLI President Max Tegmark spoke at TEDAI about AGI, and the importance of (and pathway to) keeping AI under human control.

  • Max spoke to TechCrunch about the threat regulatory capture presents to the EU AI Act’s effectiveness.

  • Max was also quoted in The Guardian, the Washington Post, and Axios, reflecting on this month’s UK AI Safety Summit.

  • Executive Director Anthony Aguirre spoke on a Reuters NEXT panel about the risks of rapid, unregulated AI development.

  • FLI’s Anna Hehir spoke to WIRED about the progress being made on restricting autonomous weapons systems.

  • FLI’s Dr. Emilia Javorsky wrote an op-ed in The Bulletin of the Atomic Scientists, covering the impact of the storytellers honoured with the 2023 Future of Life Award.

  • On the FLI podcast, host Gus Docker spoke to Dan Hendrycks from the Center for AI Safety and xAI about catastrophic AI risks. FLI’s own Mark Brakel, Director of Policy, joined Gus for a discussion about the UK AI Safety Summit and the future of AI policy around the world.

New Research: Persona Modulation to Jailbreak LLMs

Manipulating an LLM’s “personality”?: FLI Buterin fellow Stephen Casper et al. have published a paper on a new automated, plain-English jailbreak attack that bad actors could use against state-of-the-art LLMs to elicit harmful outputs. The authors show how, using only plain-text instructions, LLMs can be steered into adopting personas that comply with requests they are designed to refuse.

Why this matters: As researcher Soroush Pour outlines, jailbreaks like this expose vulnerabilities in widely accessible LLMs. These vulnerabilities open the door to harmful misuse, from LLMs providing instructions for producing illegal drugs to potentially aiding the creation of bioweapons. This highlights the need for greater investment in AI safety research and meaningful regulation of such technology.

Postdoc interested in similar research? While our PhD fellowship applications have now closed, our postdoctoral fellowship applications are open until January 2, 2024. Apply now!

What We’re Reading

  • Ex-Estonian President urges, “Don’t let AI firms put profits before people”: Former Estonian President Kersti Kaljulaid wrote an op-ed in TIME urging EU lawmakers to resist Big Tech attempts to weaken the EU AI Act.

  • A serious misalignment: The Bulletin of the Atomic Scientists unpacks the concerning ability of U.S. military officers to approve AI-enabled military tech… that they don’t necessarily trust.

  • Reaping rewards, preventing catastrophe: The Nuclear Threat Initiative has released a new report on the risks arising from the convergence of AI and the life sciences, such as biotechnology, with recommendations on how to prevent a related catastrophe.

  • What we’re watching: At TEDAI, Liv Boeree spoke about the dangers of “excessive competition” in AI.

Hindsight is 20/20

"Ilyushin Il-78, Tupolev Tu-160, Micoyan&Gurevich MiG-31" by Dmitry Terekhov is licensed under CC BY-SA 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/2.0/?ref=openverse. 

This month marks 40 years since Able Archer 83, a NATO military exercise that brought the world dangerously close to catastrophe.

After the shoot-down of Korean Air Lines Flight 007, and with the U.S. deployment of powerful Pershing II missiles imminent, U.S.-USSR tensions were high. So when Soviet intelligence picked up NATO movements suggesting an imminent nuclear strike, Soviet forces were placed on high alert - including setting a 30-minute response time for a full nuclear launch.

However, Soviet military leaders were unaware that the suspicious movements were only the result of a NATO exercise, not actual pre-attack preparations. Thankfully, the USSR lowered alert levels after the four-day exercise ended, and despite the severe potential for misunderstanding, catastrophe was avoided.

But what could have happened if an escalatory response had spiralled further, bringing NATO forces or the USSR even closer to the brink of a launch?

Or if RYaN, the USSR’s intelligence analysis software, had been integrated into nuclear command, control, and communications (NC3) and empowered to decide autonomously whether to launch a strike based on its own analysis of the NATO movements?

As we explore every month, this near-disaster is just one of many similar examples from which we must learn - one with particular relevance to our advocacy against integrating AI into NC3.