Future of Life Institute Newsletter: Tool AI > Uncontrollable AGI
Max Tegmark on AGI vs. Tool AI; magazine covers from a future with superintelligence; join our new digital experience as a beta tester; and more.
Welcome to the Future of Life Institute newsletter. Every month, we bring 43,000+ subscribers the latest news on how emerging technologies are transforming our world.
If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?
Today's newsletter is an 11-minute read. Some of what we cover this month:
🔨 Why we should build Tool AI, not artificial general intelligence
🖼️ A glimpse into a future shaped by superintelligence
🤝 Beta testing opportunity!
🇺🇸 Looking at the role of AI and deepfakes in the U.S. election
And much more!
If you have any feedback or questions, please feel free to send them to [email protected].
Manhattan Project for AGI
"Remember when I came to you with those calculations, we thought we might start a chain reaction that would destroy the entire world? I believe we did."
In a recent report, the U.S. Congress' U.S.-China Economic and Security Review Commission recommended "Congress establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability" - in opposition to countless experts' warnings about the risks of AGI.
FLI President Max Tegmark didn't hold back when sharing his thoughts on the proposal:
"An AGI race is a suicide race. The proposed AGI Manhattan project, and the fundamental misunderstanding that underpins it, represents an insidious growing threat to US national security. Any system better than humans at general cognition and problem solving would by definition be better than humans at AI research and development, and therefore able to improve and replicate itself at a terrifying rate. The world's pre-eminent AI experts agree that we have no way to predict or control such a system, and no reliable way to align its goals and values with our own."
"In a competitive race, there will be no opportunity to solve the unsolved technical problems of control and alignment, and every incentive to cede decisions and power to the AI itself. The almost inevitable result would be an intelligence far greater than our own that is not only inherently uncontrollable, but could itself be in charge of the very systems that keep the United States secure and prosperous. Our critical infrastructure – including nuclear and financial systems – would have little protection against such a system. As AI Nobel Laureate Geoff Hinton said last month, "Once the artificial intelligences get smarter than we are, they will take control.""
Instead of racing to AGI, Max joins other AI experts in calling for government and the tech industry to develop "game-changing" Tool AI - offering the specific benefits of advanced AI, without the catastrophic risks. Learn more in Max's full statement, and his WebSummit talk on Tool AI:
Here's why we should build awesome #ToolAI, not #AGI, which is unnecessary and so uncontrollable that China won't want it either. This version has legible slides. @WebSummit
— Max Tegmark (@tegmark)
5:43 PM • Nov 23, 2024
Spotlight on…
We're excited to share another winning entry from our Superintelligence Imagined Creative Contest!
As we announced in the last edition, out of 180+ submissions we selected six winners (including one grand prize winner) and seven runners-up. We'll feature one per edition - this month, we're delighted to present Effct's winning poster series, "6 Magazine Covers from the Future: Warnings About the Dangers of Artificial (Super)intelligence". Providing a glimpse into what the near future could realistically look like if superintelligence is developed, this series is an eerie showcase of what could await us - both good and bad - if the race to AGI continues:
The project's authors had this to say: "Magazine covers capture critical moments in history. Our project showcases covers in the future depicting the rise of artificial superintelligence (ASI) and the existential threats it poses. Each cover is paired with descriptions and works cited, ensuring scientific accuracy. We explore ASI's potential and grave dangers, urging a global conversation on aligning ASI with human values. This work targets policymakers, technologists, and the public to inspire action and shape our shared future."
Want to see the other results? You don't have to wait until the next newsletter! You can explore all of the winning projects and honourable mentions now.
Are you ready to meet Percey?
We've been working on an exciting new creative digital experience, and we're almost ready to share it with the world.
But before we do, we need your help!
We're inviting a select group of beta testers to get exclusive early access to our project. As a beta tester, you'll:
Be among the first to engage with our new interactive digital experience;
Help shape the final experience by sharing your valuable feedback;
Be the first to share it with your network, if you'd like;
And have your creative work featured, if you choose.
Interested? 👉 Sign up now at this link to join the beta testing team.
Spots are limited - if you're interested, be sure to sign up as soon as possible. We can't wait for you to meet Percey…
Updates from FLI
We're now on Bluesky! You'll also continue to find us on LinkedIn, X, Instagram, Facebook, YouTube, and TikTok.
Applications for our postdoctoral fellowships on AI existential safety research are still open! Apply by January 6, 2025 at 11:59 pm ET.
William Jones, FLI Futures Program Associate, attended a meeting of religious leaders in Abuja, Nigeria, which FLI was honoured to help organize. Participants discussed AI's impact on religious traditions and broader societal issues.
Fellow Futures Program Associate Isabella Hampton helped coordinate a live event with Liv Boeree's Win-Win Podcast. The event featured Liv and nuclear energy advocate Isabelle Boemeke discussing lessons from nuclear energy that could be applied to AI. Stay tuned for clips from the recording!
WE DID OUR FIRST WIN-WIN LIVE EVENT!!!!
A Nuclear ☢️ & AI 🤖 mashup - I chatted with @isabelleboemeke about what went right & wrong with the nuclear industry, and how those lessons could help guide humanity's transition to the AI age.
We recorded it so will share some clips… x.com/i/web/status/1…
— Liv Boeree (@Liv_Boeree)
7:24 PM • Nov 23, 2024
FLI's AI Summit Lead Ima Bello hosted the fourth AI Safety Breakfast in Paris, with algorithmic ethics pioneer Dr. Rumman Chowdhury. The recording will shortly be available here, where you can also find recordings of the previous breakfasts with Yoshua Bengio, Stuart Russell, and Charlotte Stix.
Ima also released the fourth edition of her AI Action Summit Substack, available here.
At WebSummit, FLI's Max Tegmark spoke to Fast Company on a wide range of topics, from how to regulate AI to what we can expect from Trump on AI in his second term.
Max chatted with The Guardian about how Elon Musk may impact Trump's approach to AI.
Max also spoke to Euronews on this, and about the potential for a "game over for humanity" scenario from AGI.
FLI's Futures Program Director Emilia Javorsky spoke to The Overview about how AI could worsen power concentration.
Emilia also gave a speech as part of WebSummit roundtables, on "Pathways to positive AI futures".
FLI's Communications Director Ben Cumming participated in a panel at the FT Live Future of AI event in London, speaking about "AI on the world stage - A new battleground for geopolitics".
FLI's Military AI Lead Anna Hehir spoke to Undark about the future of autonomous weapons systems.
Also on the topic of autonomous weapons systems, we published the seventh edition of The Autonomous Weapons Newsletter, covering AWS under Trump, UNGA news, and more.
Talking to the Financial Times about deepfakes, Max shared, "I can't think of any other technological issue where there is such bipartisan agreement, and yet we still don't have any meaningful legislation".
On the FLI podcast, Conjecture CEO Connor Leahy joined host Gus Docker for a conversation on how AGI puts us all at risk, the motivations of companies pursuing AGI, what we can do about it, and more.
Also on the podcast, filmmaker Suzy Shepherd joined for a conversation about visualizing superintelligence, and her Superintelligence Imagined grand prize-winning short film, "Writing Doom".
What We're Reading
"Crisis of authenticity": The Institute for Strategic Dialogue has released a report on the role that AI played in the recent U.S. election, referring to an erosive effect wherein "the rapid increase of AI-generated content has created a fundamentally polluted information ecosystem where voters are struggling to assess content's authenticity and increasingly beginning to assume authentic content to be AI generated."
Americans don't trust AI corps: A majority of Americans believe that AI safety testing is more important than U.S.-China competition, that AI companies can't be trusted to self-police and require more regulation, and that AI safety testing should be mandatory, according to new polling from the AI Policy Institute.
Asking the difficult questions: In an interview with CNBC, AI pioneer Yoshua Bengio explains why humanity needs regulation of AI amidst unanswered questions like "if we create entities that are smarter than us, and have their own goals, what does that mean for humanity?".
What We're Watching: "An Inconvenient Doom", an excellent new documentary explaining AGI and the risks it presents.