Future of Life Institute Newsletter: Illustrating Superintelligence

Need a break from US election news? Explore the results of our $70K creative contest; new national security AI guidance from the White House; new polling of teens on AI; and much more.

Welcome to the Future of Life Institute newsletter. Every month, we bring 43,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is an eleven-minute read. Some of what we cover this month:

  • šŸ… The exciting results of our Superintelligence Imagined creative contest

  • šŸ‡ŗšŸ‡ø The White House releases its NatSec memorandum on AI

  • šŸ“š Apply for our PhD and postdoc fellowships (plus a new fellowship track!)

  • šŸŒ± New polling shows teensā€™ perspectives on AI

And much more!

If you have any feedback or questions, please feel free to send them to [email protected].

Superintelligence Imagined: The Results!

We’re thrilled to finally share the results from our Superintelligence Imagined creative contest!

With major tech companies such as Meta and OpenAI investing huge resources to develop AI that could match or exceed human abilities at most tasks - often referred to as artificial general intelligence (AGI) and artificial superintelligence (ASI), respectively - we launched this contest to help educate the public about the serious risks such systems pose to humanity.

The contest, which ran from May through August, received more than 180 submissions across a wide range of mediums, including videos, games, graphic novels, short stories, and more - all intended for the general public as the primary audience.

In the end, six winners were awarded a total of $70,000 in prizes - including one $20,000 grand prize. Over the next several months, we’ll highlight one winner per newsletter edition. Can’t wait? You can explore all of the winning projects, as well as seven honourable mentions, now.

First, we’re proud to present the grand prize winner: the short film “Writing Doom” by filmmaker Suzy Shepherd - a look into the writers’ room of a show like Black Mirror. Check out the trailer below, and watch the full film here.

Congratulations to the winners, and many thanks to all who submitted work in the contest!

New White House NatSec Memorandum on AI

Following through on a requirement from the 2023 Executive Order on AI, the White House last week released a new National Security Memorandum on AI.

Advising national security agencies on AI procurement/usage protocols, AI cybersecurity, safety assessments, and AI capacity expansion, the Memorandum is a “critical step toward acknowledging and addressing the risks inherent in unchecked AI development – especially in the areas of defense, national security, and weapons of war”, according to FLI US Policy Specialist Hamza Chaudhry.

However, as Hamza also noted in his statement, the “many commendable actions, efforts, and recommendations” put forth in the Memorandum represent only the start of the action required to safeguard against risks from AI. Additionally - similar to what FLI President Max Tegmark cautioned against in a recent blog post on the “delusion” of US vs. China A(G)I competition dynamics - Hamza urged: “Lack of cooperation will make it harder to cultivate a stable and responsible framework to advance international AI governance”.

‘Tis the Season: Apply for an FLI Fellowship!

It’s that time of year again - applications for our postdoctoral and PhD fellowships in AI safety are open! Joining our AI existential safety research fellowships this year is our new US-China AI Governance PhD fellowship program, to support research on risk reduction in US-China AI relations.

All Fellows will have applicable tuition and fees covered, and will receive an annual stipend and research fund. There are no geographic limitations; we encourage applications from anywhere in the world, especially from applicants with under-represented backgrounds. Please share with anyone who may be interested!

Current or future PhD student researching technical AI safety or US-China AI relations? The deadline for PhD fellowship applications is November 20, 2024 at 11:59 pm ET.

Current or future postdoc working on AI existential safety research? The deadline for postdoctoral fellowship applications is January 6, 2025 at 11:59 pm ET.

Updates from FLI

A photo from our October dinner with FAS in DC

  • As the first event of our new 18-month, $1.5 million partnership, we recently hosted a joint dinner with the Federation of American Scientists in Washington, DC, convening AI policy and technical leaders to discuss the technology’s potential impacts.

  • As part of our broader religious engagement initiative - which convenes and supports voices from traditional religions, whose perspectives on AI risks and opportunities are widely held but so far under-represented - FLI President Max Tegmark and Futures Program Associate William Jones took part in a forum on ethical AI at the Vatican. Max told the conference that the Catholic Church could provide much-needed moral leadership to help protect humanity from AI’s risks, as it did with human cloning.

  • William and FLI’s Director of Communications Ben Cumming also met with Dr. Chinmay Pandya in London to discuss Hindu perspectives on AI, which Dr. Pandya wrote about in a recent FLI guest post.

  • Max joined Patrick Bet-David’s podcast for an episode on the future of AI.

  • We proudly supported the new Wise Ancestors Platform, which was built to "crowd-fund and coordinate decentralized genomic research tied to upfront benefit-sharing" through Conservation Challenges developed with Indigenous peoples and local communities.

  • Coinciding with the Francophonie Summit, and with the upcoming 2025 French AI Action Summit in mind, we released an open letter signed by numerous Francophone experts highlighting the dangers of AI foundation models’ lack of linguistic and cultural diversity.

  • Speaking of the AI Action Summit, FLI’s AI Summit Lead Ima Bello hosted her third AI Safety Breakfast, featuring a virtual conversation with “AI Godfather” Yoshua Bengio. Watch the recording here, where you can also find recordings of the previous breakfasts with Stuart Russell and Charlotte Stix.

“Hinton and Hopfield embody the potential of AI to grant incredible benefits to all of humanity – but only if the technology is developed safely and securely. […] Innovations like [Hassabis’ and Jumper’s] AlphaFold reveal the incredible benefits if we develop narrow AI and existing general-purpose systems safely to solve specific problems – instead of racing to deploy increasingly powerful and risky models that we don’t understand.”

  • As in the quote above, FLI Executive Director Anthony Aguirre congratulated this year’s Nobel laureates in Physics (AI pioneers Geoffrey Hinton and John Hopfield, both of whom have been outspoken about AI risks) and in Chemistry (Demis Hassabis, Prof. David Baker, and Dr. John Jumper).

  • On the FLI podcast, economist Tamay Besiroglu joined host Gus Docker for a conversation looking ahead to what the next five years with AI may bring in terms of scaling, capabilities, the economics of AI, and more.

  • Andrea Miotti, Executive Director of Control AI, joined Gus for a discussion of “A Narrow Path”, the new roadmap he co-authored outlining a path for humanity to safe, transformative AI.

    • Be sure to also check out “The Compendium”, the newly released counterpart to “A Narrow Path”, which lays out the actors recklessly pushing AGI on humanity.

What Weā€™re Reading

  • Tech lobby Ctrl+Alt+Repeat: This New Yorker article profiles OpenAI’s new head of global affairs, Chris Lehane, and his role in Silicon Valley’s very well-resourced transformation into one of America’s most ferocious political operations.

  • SB 1047 Veto Fallout: Much criticism has followed in the month since California Governor Gavin Newsom vetoed SB 1047. For example, filmmakers Joseph Gordon-Levitt and Mark Ruffalo wrote in TIME: “Let this veto serve as a call for activists to assemble. […] Next time legislation like this comes up for a vote, we will fight in greater numbers to make our government work for everyone, not just for big business.”

  • Future-Focused Youth: The new Center for Youth and AI released the fascinating, though not entirely surprising, results of a poll of American teens on AI. As the generation whose future will be shaped by this technology, respondents overwhelmingly want lawmakers to address AI’s risks, which they consider as pressing as issues like social inequality and climate change. Those polled were most concerned about AI-generated misinformation and deepfakes, and half were also concerned about the potential for autonomous AI to escape human control.

  • … And Future-Focused Elders, Too: Following the UN Summit of the Future, our friends at The Elders released a statement calling for the UN and its member states to continue pushing for strong, inclusive global governance of AI.

  • Halloween Fright, but Real: The International Institute for Management Development (IMD) has released an AI Safety Clock, akin to the Bulletin of the Atomic Scientists’ Doomsday Clock. The Clock will measure and report how close humanity is to the risks of uncontrolled AGI. According to IMD, we’re currently just 29 minutes away.