Future of Life Institute Newsletter: Illustrating Superintelligence
Need a break from US election news? Explore the results of our $70K creative contest; new national security AI guidance from the White House; polling teens on AI; and much more.
Welcome to the Future of Life Institute newsletter. Every month, we bring 43,000+ subscribers the latest news on how emerging technologies are transforming our world.
If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?
Today's newsletter is an eleven-minute read. Some of what we cover this month:
The exciting results of our Superintelligence Imagined creative contest
The White House releases its NatSec memorandum on AI
Apply for our PhD and postdoc fellowships (plus a new fellowship track!)
New polling shows teens' perspectives on AI
And much more!
If you have any feedback or questions, please feel free to send them to [email protected].
Superintelligence Imagined: The Results!
Presenting: the winners of our Superintelligence Imagined Creative Contest!
From 180+ submissions we received, we're SO excited to share the six amazing works that won $70K in prizes - including a grand prize - for creatively depicting superintelligent AI and its risks.… x.com/i/web/status/1…
— Future of Life Institute (@FLI_org)
3:25 PM • Oct 25, 2024
We're thrilled to finally share the results from our Superintelligence Imagined creative contest!
With major tech companies such as Meta and OpenAI pouring huge resources into developing AI that could match or exceed human abilities at most tasks - often called artificial general intelligence (AGI) and artificial superintelligence (ASI), respectively - we launched this contest to help educate the public about the serious risks such systems pose to humanity.
The contest, which ran from May through August, received more than 180 submissions across a wide range of mediums, including videos, games, graphic novels, short stories, and more - all intended for the general public as the primary audience.
In the end, six winners were awarded a total of $70,000 in prizes - including one $20,000 grand prize. Over the next several months, we'll highlight one winner per newsletter edition. Can't wait? You can explore all of the winning projects, as well as seven honourable mentions, now.
First, we're proud to present the grand prize winner: the short film "Writing Doom" by filmmaker Suzy Shepherd - a look into the writers' room of a show like Black Mirror. Check out the trailer below, and watch the full film here.
Congratulations to the winners, and many thanks to all who submitted work in the contest!
New White House NatSec Memorandum on AI
Following through on a requirement from the 2023 Executive Order on AI, the White House last week released a new National Security Memorandum on AI.
Advising national security agencies on AI procurement/usage protocols, AI cybersecurity, safety assessments, and AI capacity expansion, the Memorandum is a "critical step toward acknowledging and addressing the risks inherent in unchecked AI development — especially in the areas of defense, national security, and weapons of war", according to FLI US Policy Specialist Hamza Chaudhry.
However, as Hamza also noted in his statement, the "many commendable actions, efforts, and recommendations" put forth in the Memorandum only represent the start of the action required to safeguard against risks from AI. Additionally - similar to what FLI President Max Tegmark cautioned against in a recent blog post on the "delusion" of US vs. China A(G)I competition dynamics - Hamza urged: "Lack of cooperation will make it harder to cultivate a stable and responsible framework to advance international AI governance".
'Tis the Season: Apply for an FLI Fellowship!
It's that time of year again - applications for our postdoctoral and PhD fellowships in AI safety are open! Joining our AI existential safety research fellowships this year is our new US-China AI Governance PhD fellowship program, to support research on risk reduction in US-China AI relations.
All Fellows will have applicable tuition and fees covered, and will receive an annual stipend and research fund. There are no geographic limitations; we encourage applications from anywhere in the world, especially from people of under-represented backgrounds. Please share with anyone who may be interested!
Current or future PhD student researching technical AI safety or US-China AI relations? The deadline for PhD fellowship applications is November 20, 2024 at 11:59 pm ET.
Current or future postdoc working on AI existential safety research? The deadline for postdoctoral fellowship applications is January 6, 2025 at 11:59 pm ET.
Updates from FLI
A photo from our October dinner with FAS in DC
As the first event of our new 18-month, $1.5 million partnership, we recently hosted a joint dinner with the Federation of American Scientists in Washington, DC, convening AI policy and technical leaders to discuss the technology's potential impacts.
As part of our broader religious engagement initiative - convening and supporting the widely held but so far under-represented perspectives of traditional religions on AI risks and opportunities - FLI President Max Tegmark and Futures Program Associate William Jones took part in a forum on ethical AI at the Vatican. Max told the conference that the Catholic Church could provide much-needed moral leadership to help protect humanity from AI's risks, as it did with human cloning.
William and FLI's Director of Communications Ben Cumming also met with Dr. Chinmay Pandya in London to discuss Hindu perspectives on AI, which Dr. Pandya wrote about in a recent FLI guest post.
After the Europe visit, a cordial meeting was held in London with Mr. Ben Cumming, Director of the well-known organization @FLI_org Future of Life, and its coordinator, Mr. William Jones.
The conversation focused on exploring opportunities to collaborate on important future… x.com/i/web/status/1…
— Dr. Chinmay Pandya (@DrChinmayP)
8:16 AM • Oct 19, 2024
Max joined Patrick Bet-David's podcast for an episode on the future of AI.
We proudly supported the new Wise Ancestors Platform, which was built to "crowd-fund and coordinate decentralized genomic research tied to upfront benefit-sharing" through Conservation Challenges developed with Indigenous peoples and local communities.
Intersecting with the Francophonie Summit, and with the upcoming 2025 French AI Action Summit in mind, we released an open letter signed by numerous Francophone experts highlighting the dangers of AI foundation models' lack of linguistic and cultural diversity.
Speaking of the AI Action Summit, FLI's AI Summit Lead Ima Bello hosted her third AI Safety Breakfast, featuring a virtual conversation with "AI Godfather" Yoshua Bengio. Watch the recording here, where you can also find recordings of the previous breakfasts with Stuart Russell and Charlotte Stix.
"Hinton and Hopfield embody the potential of AI to grant incredible benefits to all of humanity — but only if the technology is developed safely and securely. […] Innovations like [Hassabis' and Jumper's] AlphaFold reveal the incredible benefits if we develop narrow AI and existing general-purpose systems safely to solve specific problems — instead of racing to deploy increasingly powerful and risky models that we don't understand."
In the quote above, FLI Executive Director Anthony Aguirre shared his congratulations to this year's Nobel laureates in Physics (AI pioneers Geoffrey Hinton and John Hopfield, both of whom have been outspoken about AI risks) and Chemistry (Demis Hassabis, Prof. David Baker, and Dr. John Jumper), respectively.
On the FLI podcast, economist Tamay Besiroglu joined host Gus Docker for a conversation looking ahead to what the next five years with AI may bring in terms of scaling, capabilities, the economics of AI, and more.
Andrea Miotti, Executive Director of Control AI, joined Gus for a discussion of "A Narrow Path", the new roadmap he co-authored outlining a path for humanity to safe, transformative AI.
Be sure to also check out "The Compendium", the newly-released counterpart to "A Narrow Path", which lays out the actors recklessly pushing AGI on humanity.
What We're Reading
Tech lobby Ctrl+Alt+Repeat: This New Yorker article profiles OpenAI's new head of global affairs, Chris Lehane, and his role in Silicon Valley's very well-resourced transformation into one of America's most ferocious political operations.
SB 1047 Veto Fallout: Much criticism has followed in the month since California Governor Gavin Newsom vetoed SB 1047. For example, filmmakers Joseph Gordon-Levitt and Mark Ruffalo wrote in TIME: "Let this veto serve as a call for activists to assemble. […] Next time legislation like this comes up for a vote, we will fight in greater numbers to make our government work for everyone, not just for big business."
Future-Focused Youth: The new Center for Youth and AI released the fascinating, but not entirely surprising, results from a poll of American teens on AI. As a generation whose future will be shaped by this technology, respondents overwhelmingly want lawmakers to address AI's risks, considering this as pressing as issues like social inequality and climate change. Those polled were most concerned about AI-generated misinformation and deepfakes, with half of them also concerned about the potential for autonomous AI to escape human control.
… And Future-Focused Elders, Too: Following the UN Summit of the Future, our friends at The Elders released a statement calling for the UN and its member states to continue pushing for strong, inclusive global governance of AI.
Halloween Fright, but Real: The International Institute for Management Development (IMD) has released an AI Safety Clock, akin to the Bulletin of the Atomic Scientists' Doomsday Clock. The Clock will measure and report how close humanity is to the risks of uncontrolled AGI. According to IMD, we're currently just 29 minutes to midnight.