Future of Life Institute Newsletter: Recommendations for the AI Action Plan

Plus: FLI Executive Director's new essay on keeping the future human; "Slaughterbots: A treaty on the horizon"; apply to our new Digital Media Accelerator; and more!

Welcome to the Future of Life Institute newsletter! Every month, we bring 44,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is a 12-minute read. Some of what we cover this month:

  • 🇺🇸 Our U.S. AI Action Plan recommendations

  • ✋ Keep the Future Human essay

  • 📺 New Slaughterbots video

  • 🤳 Our new Digital Media Accelerator

  • 🎨 Superintelligence Imagined: the final winner!

And more.

If you have any feedback or questions, please feel free to send them to [email protected].

FLI's Recommendations for the U.S. AI Action Plan

We've published our recommendations for President Trump's AI Action Plan, which focus on protecting U.S. interests in the era of rapidly advancing AI.

An overview of the measures we recommend:

  • Protect the presidency from loss of control by mandating "off-switches", imposing a targeted moratorium on developing uncontrollable AI systems, and enforcing strong antitrust measures.

  • Ensure AI systems are free from ideological agendas, and ban models with superhuman persuasive abilities.

  • Protect American workers and critical infrastructure from AI-related threats by tracking labor displacement and placing export controls on advanced AI models.

  • Foster transparent development through an AI industry whistleblower program and mandatory security incident reporting.

You can read our proposals in full here. Our recommendations were also covered last week on the New York Times' Hard Fork podcast and in the Politico US newsletter (see below).

Keep the Future Human: Why We Must Close the Gates to AGI

As general-purpose AI models begin to rival human intelligence, we are fast approaching a critical inflection point: continue on the current path - a geopolitical and corporate race towards uncontrollable artificial general intelligence (AGI) and ultimately superintelligence - or change course in favour of a future with humans empowered by AI rather than replaced by it.

In FLI Executive Director Anthony Aguirre's new essay, "Keep the Future Human", also available in this helpful interactive summary format, he outlines the risks presented by AGI (which he defines as the convergence of autonomy, generality, and intelligence), and how we can instead create a safer future with powerful Tool AI.

Anthony's explanation of "AGI", in Keep the Future Human.

The time to act is now - before irreversible thresholds are crossed. Let's close the gates to AGI, and keep the future in human hands. Learn more about Anthony's practical proposal in his full essay here, the interactive summary here, and in the following videos - including a special episode of the FLI Podcast:

Slaughterbots: A Treaty on the Horizon

We've released the newest video in our Slaughterbots series, this time focusing on the critical fight to place limits on autonomous weapons systems.

With a small minority of states deploying these weapons, and no rules to constrain them, the call for action has never been more urgent. The latest Slaughterbots video dives into the growing international movement to prohibit the most dangerous autonomous weapons and ensure meaningful human control over others.

With a majority of UN states now backing a legally binding treaty, and states meeting at the UN in New York in May to begin working towards this, will your country support clear, enforceable rules on autonomous weapons systems?

ā–¶ļø Watch ā€œSlaughterbots and the Urgent Fight to Stop Themā€ here.
ā–¶ļø Watch past Slaughterbots videos here.
šŸ“˜ Learn more about autonomous weapons and the push for regulation here.
šŸ“° Subscribe to stay up-to-date on the latest AWS policy and tech news here.

Apply to our new Digital Media Accelerator!

As AI companies rush to build ever more powerful systems with little oversight, humanity is being pushed toward an uncertain and risky future, all while public awareness about A(G)I risk remains limited.

We've just launched our Digital Media Accelerator to fund creators looking to produce content, grow channels, and spread the word about complex AI issues (e.g., loss of control to AGI, misaligned goals, and Big Tech power concentration) to new audiences. We're looking to fund content across platforms, such as YouTube explainers, TikTok series, podcasts, newsletters, and more.

Already have an audience? Want to create compelling content about AI risks?
We're accepting applications on a rolling basis - apply here and help shift the conversation. Please share widely with anyone you think may be interested!

Spotlight on…

We're excited to present the final winning entry from our Superintelligence Imagined creative contest!

This month, we're featuring the graphic novel ZARA MARS from Marcus Eriksen, Vanessa Morrow, and Alberto Hidalgo.

As they described it, "In ZARA MARS, it's 2052 and we find Captain Zara and her crew on a 9-month journey home from Mars. Through her mindlink, Captain Zara converses with the ship's sentient ASI agent, named Sarvajña, discussing the risk and reward of superintelligence, its evolution, values alignment, malicious actors and rogue AI, existential threats, AI benevolence and sentience."

Read it now here, and take a look at the other winning and runner-up entries here!

Updates from FLI

  • Executive Director Anthony Aguirre joined Axios for a talk at SXSW on AGI vs. Tool AI.

  • Guest author Sarah Hastings-Woodhouse published a new article on the FLI blog about the potential for an intelligence explosion, and how close we may be to it.

  • FLI's AI & National Security Lead Hamza Chaudhry spoke to Fortune Magazine about OpenAI's new approach to AI Safety & Alignment, calling it "reckless experimenting on the public".

  • On the FLI podcast, physicist and hedge fund manager Samir Varma joined host Gus Docker to discuss whether AIs have consciousness, AI psychology, trading with AIs, and more.

  • In a special bonus episode, the FLI Podcast featured an interview from the Cognitive Revolution podcast between host Nathan Labenz and Google DeepMind security researcher Nicholas Carlini.

What We're Reading

  • The "evidence-based" trap: FLI PhD Fellow Stephen Casper, along with Dylan Hadfield-Menell and David Krueger, has published a fascinating new paper on the pitfalls of evidence-based AI policy. As they described it, "Evidence is of irreplaceable value to policymaking. However, there are systematic biases shaping the evidence that the AI community produces. Holding regulation to too high an evidentiary standard can lead to systematic neglect of certain risks. If the goal is evidence-based AI policy, the first regulatory objective must be to actively facilitate the process of identifying, studying, and deliberating about AI risks."

  • Gov. Newsom's working group on AI: The draft report on frontier AI models, requested by California Gov. Gavin Newsom, is out now - including, among other proposals, calls for increased transparency, whistleblower protections, and industry accountability.

  • What We're Watching: InsideAI released a YouTube video documenting their experiment replacing all of their relationships with AI for two weeks - and showing the potential impact of AI on relationships and community.