Future of Life Institute Newsletter: Tool AI > Uncontrollable AGI

Max Tegmark on AGI vs. Tool AI; magazine covers from a future with superintelligence; join our new digital experience as a beta tester; and more.

Welcome to the Future of Life Institute newsletter. Every month, we bring 43,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is an 11-minute read. Some of what we cover this month:

  • 🔨 Why we should build Tool AI, not artificial general intelligence

  • 🖼️ A glimpse into a future shaped by superintelligence

  • 🤫 Beta testing opportunity!

  • 🇺🇸 Looking at the role of AI and deepfakes in the U.S. election

And much more!

If you have any feedback or questions, please feel free to send them to [email protected].

Manhattan Project for AGI

“Remember when I came to you with those calculations, we thought we might start a chain reaction that would destroy the entire world? I believe we did.”

In a recent report, the U.S. Congress’s U.S.-China Economic and Security Review Commission recommended that “Congress establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability” - in opposition to countless experts’ warnings about the risks of AGI.

FLI President Max Tegmark didn’t hold back when sharing his thoughts on the proposal:

“An AGI race is a suicide race. The proposed AGI Manhattan project, and the fundamental misunderstanding that underpins it, represents an insidious growing threat to US national security. Any system better than humans at general cognition and problem solving would by definition be better than humans at AI research and development, and therefore able to improve and replicate itself at a terrifying rate. The world’s pre-eminent AI experts agree that we have no way to predict or control such a system, and no reliable way to align its goals and values with our own.”

“In a competitive race, there will be no opportunity to solve the unsolved technical problems of control and alignment, and every incentive to cede decisions and power to the AI itself. The almost inevitable result would be an intelligence far greater than our own that is not only inherently uncontrollable, but could itself be in charge of the very systems that keep the United States secure and prosperous. Our critical infrastructure – including nuclear and financial systems – would have little protection against such a system. As AI Nobel Laureate Geoff Hinton said last month ‘Once the artificial intelligences get smarter than we are, they will take control.’”

Instead of racing to AGI, Max joins other AI experts in calling on government and the tech industry to develop “game-changing” Tool AI - offering the specific benefits of advanced AI without the catastrophic risks. Learn more in Max’s full statement and his WebSummit talk on Tool AI:

Spotlight on…

We’re excited to share another winning entry from our Superintelligence Imagined Creative Contest!

As we announced in the last edition, out of 180+ submissions we selected six winners (including one grand prize winner) and seven runners-up. We’ll feature one per edition - this month, we’re delighted to present Effct’s winning poster series, “6 Magazine Covers from the Future: Warnings About the Dangers of Artificial (Super)intelligence”. Providing a glimpse into what the near future could realistically look like if superintelligence is developed, this series is an eerie showcase of what could await us - both good and bad - if the race to AGI continues:

The project’s authors had this to say: “Magazine covers capture critical moments in history. Our project showcases covers in the future depicting the rise of artificial superintelligence (ASI) and the existential threats it poses. Each cover is paired with descriptions and works cited, ensuring scientific accuracy. We explore ASI's potential and grave dangers, urging a global conversation on aligning ASI with human values. This work targets policymakers, technologists, and the public to inspire action and shape our shared future.”

Want to see the other results? You don’t have to wait until the next newsletter! You can explore all of the winning projects and honourable mentions now.

Are you ready to meet Percey?

We’ve been working on an exciting new creative digital experience, and we’re almost ready to share it with the world.

But before we do, we need your help!

We’re inviting a select group of beta testers to get exclusive early access to our project. As a beta tester, you’ll:

  • Be among the first to engage with our new interactive digital experience;

  • Help shape the final experience by sharing your valuable feedback;

  • Be the first to share it with your network, if you’d like;

  • And have your creative work featured, if you choose.

Interested? 👉 Sign up now at this link to join the beta testing team.

Spots are limited - if you’re interested, be sure to sign up as soon as possible. We can’t wait for you to meet Percey…

Updates from FLI

  • We’re now on Bluesky! You’ll also continue to find us on LinkedIn, X, Instagram, Facebook, YouTube, and TikTok.

  • Applications for our postdoctoral fellowships on AI existential safety research are still open! Apply by January 6, 2025 at 11:59 pm ET.

  • William Jones, FLI Futures Program Associate, attended a meeting of religious leaders in Abuja, Nigeria, which FLI was honoured to help organize. Participants discussed AI's impact on religious traditions and broader societal issues.

  • Fellow Futures Program Associate Isabella Hampton helped coordinate a live event with Liv Boeree’s Win-Win Podcast. The event featured Liv and nuclear energy advocate Isabelle Boemeke discussing lessons from nuclear energy that could be applied to AI. Stay tuned for clips from the recording!

  • FLI’s AI Summit Lead Ima Bello hosted the fourth AI Safety Breakfast in Paris, with algorithmic ethics pioneer Dr. Rumman Chowdhury. The recording will shortly be available here, where you can also find recordings of the previous breakfasts with Yoshua Bengio, Stuart Russell, and Charlotte Stix.

    • Ima also released the fourth edition of her AI Action Summit Substack, available here.

  • At WebSummit, FLI’s Max Tegmark spoke to Fast Company on a wide range of topics, from how to regulate AI to what we can expect from Trump on AI in his second term.

  • Max chatted with The Guardian about how Elon Musk may impact Trump’s approach to AI.

    • Max also spoke to Euronews on this, and about the potential for a “game over for humanity” scenario from AGI.

  • FLI's Futures Program Director Emilia Javorsky spoke to The Overview about how AI could worsen power concentration.

  • Emilia also gave a speech on “Pathways to positive AI futures” as part of the WebSummit roundtables.

  • FLI’s Communications Director Ben Cumming participated in a panel at the FT Live Future of AI event in London, speaking about “AI on the world stage - A new battleground for geopolitics”.

  • FLI’s Military AI lead, Anna Hehir, spoke to Undark about the future of autonomous weapons systems.

  • Also on the topic of autonomous weapons systems, we published the seventh edition of The Autonomous Weapons Newsletter, covering AWS under Trump, UNGA news, and more.

  • Talking to the Financial Times about deepfakes, Max shared, “I can’t think of any other technological issue where there is such bipartisan agreement, and yet we still don’t have any meaningful legislation”.

  • On the FLI podcast, Conjecture CEO Connor Leahy joined host Gus Docker for a conversation on how AGI puts us all at risk, the motivations of companies pursuing AGI, what we can do about it, and more.

  • Also on the podcast, filmmaker Suzy Shepherd joined for a conversation about visualizing superintelligence, and her Superintelligence Imagined grand prize-winning short film, “Writing Doom”.

What We’re Reading

  • ‘Crisis of authenticity’: The Institute for Strategic Dialogue has released a report on the role that AI played in the recent U.S. election, referring to an erosive effect wherein “the rapid increase of AI-generated content has created a fundamentally polluted information ecosystem where voters are struggling to assess content’s authenticity and increasingly beginning to assume authentic content to be AI generated.”

  • Americans don’t trust AI corps: A majority of Americans believe AI safety testing is more important than U.S.-China competition, that AI companies can’t be trusted to self-police and require more regulation, and that AI safety testing should be mandatory, according to new polling from the AI Policy Institute.

  • Asking the difficult questions: In an interview with CNBC, AI pioneer Yoshua Bengio explains why humanity needs regulation of AI amidst unanswered questions like “if we create entities that are smarter than us, and have their own goals, what does that mean for humanity?”

  • What We’re Watching: “An Inconvenient Doom”, an excellent new documentary explaining AGI and the risks it presents.