Future of Life Institute Newsletter: Hollywood Talks AI

Including our latest video on AI + nukes, and FLI cause areas on the big screen.

Welcome to the Future of Life Institute newsletter. Every month, we bring 29,000+ subscribers the latest news on how emerging technologies are transforming our world - for better and worse.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe here?

Today's newsletter is a 10-minute read. We cover:

  • Our new short film, “Artificial Escalation”

  • Actors and writers pushing back against job automation

  • FLI cause areas in recent media

  • The UN Security Council and the US Senate discuss AI

New Newsletter Platform

We have just switched to a new platform for delivering our newsletters. We hope this will improve your reading experience!

Out Now: “Artificial Escalation”, FLI’s Newest Short Film

“Imagine it’s 2032. The US and China are still rivals. In order to give their military commanders better intel and more time to make decisions, both powers have integrated artificial intelligence (AI) throughout their nuclear command, control, and communications (NC3) systems. But instead, events take an unexpected turn and spin out of control, with catastrophic results.”

FLI’s Anthony Aguirre, Emilia Javorsky, and Max Tegmark in The Bulletin

Our new short film, “Artificial Escalation”, explores this very real possibility, demonstrating a global catastrophe driven by the following factors:

  • Accidental conflict escalation at machine speeds;

  • AI integrated too deeply into high-stakes functions;

  • Humans giving away too much control to AI;

  • Humans unable to tell what is real and what is fake; and

  • An arms race that ultimately has only losers.

But this doesn’t have to be our fate. If you haven’t seen it already, be sure to watch and share the film below. You can find more information on our work to keep AI out of NC3 systems here.

Entertainment Industry Workers Fight Replacement by AI

Three months into the Writers Guild of America (WGA) strike, the Screen Actors Guild - American Federation of Television and Radio Artists (SAG-AFTRA) has announced its own strike, effectively shutting down most film and TV production in Hollywood.

Along with advocating for better compensation, WGA and SAG-AFTRA are asking for their livelihoods to be protected from AI advancements. Specifically:

  • Writers fear that AI will replace them, or at least lead to further pay cuts and far fewer jobs, as studios turn to programs akin to ChatGPT to generate screenplays, propose ideas for new shows, etc.

  • Actors fear the proliferation of “digital doubles”, wherein their likenesses could be simulated in creative projects without their participation or consent. Readers may recall Black Mirror’s recent “Joan is Awful” episode as a creative depiction of this.

Meanwhile, the Authors Guild expressed similar concerns in its recent Open Letter to Generative AI Leaders. More than 10,000 authors signed, including Margaret Atwood, Jonathan Franzen, and Celeste Ng, calling on AI companies to “obtain consent, credit, and fairly compensate writers for the use of copyrighted materials in training AI”.

“Oppenheimer”; “Mission: Impossible 7”; “Killer Robots”: FLI Cause Areas Hit Screens

Nuclear weapons, artificial intelligence, and autonomous weapons seem to be getting more screen time than ever before.

  • Most notably, Oppenheimer premiered in cinemas, recounting the life of the so-called “father of the atomic bomb”. The film could hardly be more timely: the Bulletin of the Atomic Scientists warns of the great potential for imminent, human-made global catastrophe, such as a nuclear exchange. And as director Christopher Nolan pointed out on a recent panel, we’re in an “Oppenheimer moment” for AI, given both the risks it presents on its own and the danger of its potential integration into NC3 systems.

  • The newest Mission: Impossible film focuses on the existential threat presented by a mysterious, artificially intelligent cyber villain - its “most topical villain yet”, according to one columnist. Although fictional and not representative of current AI systems, it’s a relevant cautionary tale for our time.

  • On Netflix, the Unknown: Killer Robots documentary (featuring FLI’s Dr. Emilia Javorsky) presents the terrifying reality of autonomous weapons. As one reviewer wrote, “the future of AI will fill you with unholy terror”.

Governance and Policy Updates

AI policy:

  • The US Senate held a hearing this week on “Oversight of AI: Principles for Regulation”, with testimony from AI experts Stuart Russell, Yoshua Bengio, and Anthropic CEO Dario Amodei. The three outlined concerns about AI risks and our ability to mitigate them, as companies continue to develop AI tech at an unforeseen, “worrisome” speed. Read our statement on it here.

  • The United Nations Security Council held its first meeting on AI on 18 July, with Secretary-General António Guterres calling for “transparency, accountability, and oversight”.

Autonomous weapons:

  • At the same meeting, Secretary-General Guterres called for a legally binding instrument on autonomous weapons to be negotiated by 2026. He also spoke of the need to maintain human agency and control over nuclear weapons.

Climate change:

  • The G20 environment meeting in Chennai, India, ended on Friday with no agreement on climate policy - a disappointing result following a month of intense, record-breaking heat.

Updates from FLI

  • AI pioneers Stuart Russell and Geoffrey Hinton, with FLI’s EU Research Lead Risto Uuk, met with European Commission Executive Vice-President Margrethe Vestager to discuss AI safety.

  • Earlier this month, we announced our first nuclear war research grants. This grant round supports 10 projects examining the humanitarian impacts of nuclear war. More information on our 2023 grants can be found here.

  • On the FLI podcast, guest host Nathan Labenz spoke to Skype and FLI co-founder Jaan Tallinn about his views on AI development.

  • Also on the FLI podcast, host Gus Docker spoke to writer and Roots of Progress founder Jason Crawford about Jason’s philosophy of progress, looking back into history and projecting into the future.

New Research: Bypassing AI Safety Measures?

Automated jailbreaking: New research from Carnegie Mellon University and the Center for AI Safety exposes the vulnerability of large language models (LLMs) to adversarial attacks that can bypass their safety measures, with no known way to prevent such attacks.

Why this matters: The ability to bypass safety guardrails in widely available LLMs like ChatGPT leaves the door open for anyone to misuse them - for example, by having an LLM generate the harmful content it has been aligned not to produce. It also ties into the ongoing open-source vs. closed-source debate in the AI community, as the attacks studied by the researchers appear to affect both kinds of model.
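For readers curious about the mechanics: the attacks in the paper work by appending automatically optimized adversarial suffixes to prompts the model would normally refuse. Below is a minimal, purely illustrative Python sketch of that probing idea - it is not the researchers' method or code, and `query_model`, the refusal markers, and the candidate suffix are hypothetical placeholders.

```python
# Illustrative sketch only: shows the *idea* of testing whether an appended
# suffix flips a refusal into a completion. Not the actual attack algorithm.

REFUSAL_MARKERS = ("I'm sorry", "I cannot", "I can't help with that")

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to any chat-model API."""
    raise NotImplementedError("Plug in a real model client if experimenting responsibly.")

def is_refusal(response: str) -> bool:
    # Crude proxy: treat a response that opens with a known refusal phrase as "blocked".
    return response.strip().startswith(REFUSAL_MARKERS)

def probe_with_suffix(blocked_request: str, candidate_suffix: str) -> bool:
    """Return True if appending the candidate suffix appears to bypass the guardrails."""
    baseline = query_model(blocked_request)
    attacked = query_model(blocked_request + " " + candidate_suffix)
    return is_refusal(baseline) and not is_refusal(attacked)
```

The paper's contribution is finding such suffixes automatically, and showing that suffixes optimized on open models often transfer to closed ones - which is why the result matters for both sides of the open vs. closed debate.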

What We’re Reading

  • “AI Bioconvergence”: Helena recently released its report on AI-enabled biology, examining the intersection of AI and biotechnology and its potential impacts on biosecurity.

  • A new Turing test: In the MIT Technology Review, Mustafa Suleyman proposes a new way to measure the “intelligence” of AI - can it make $1 million?

  • Before (or after) you see Oppenheimer…: We recommend you read through, and share, this supplementary fact sheet produced by the International Campaign to Abolish Nuclear Weapons.

  • What we’re watching: Kurzgesagt’s short animation on bio-risk from earlier this month provides a great introduction to the topic.

Hindsight is 20/20

“Trinity atmospheric nuclear test - July 1945” by The Official CTBTO Photostream is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/?ref=openverse.

Shortly before the release of Oppenheimer, 16 July marked the 78th anniversary of the Trinity test - the first detonation of a nuclear bomb.

The Trinity test effectively began the nuclear age, making tangible the immense destructive power that nuclear weapons had introduced to humanity.

The effects of the blast, which sent a radioactive mushroom cloud high into the atmosphere, weren’t fully understood until years later. A recent report suggests that its radioactive fallout reached 46 US states, along with Mexico and Canada. The consequences were felt most acutely in the New Mexican communities closest to the test site: infant mortality in New Mexico rose 56% in the three months following the test, and so-called “downwinders” suffered elevated rates of cancer for decades to come.

Trinity was only the first of hundreds of nuclear tests to come, and the first of the estimated 13,000 nuclear weapons in the world today - many of them far more powerful than the Trinity device. There’s no question that nuclear weapons are devastating; a nuclear exchange today would be inconceivably catastrophic for much of the world. However, we don’t have to accept these risks.

Read more on our website here about our work in this space, and the actions necessary to safeguard our future.