Future of Life Institute Newsletter: Notes on the AI Seoul Summit
Recapping the AI Seoul Summit, OpenAI news, updates on the EU's regulation of AI, new worldbuilding projects to explore, policy updates, and more.
Welcome to the Future of Life Institute newsletter. Every month, we bring 43,000+ subscribers the latest news on how emerging technologies are transforming our world.
If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?
Today's newsletter is a 10-minute read. Some of what we cover this month:
🇰🇷 Reflections on the AI Seoul Summit
📰 OpenAI news
✅ The EU AI Act is (officially, finally) final!
🗺️ Results from the Foresight Institute’s worldbuilding program
⚖️ Governance updates from around the world
The AI Seoul Summit
This month, the governments of the Republic of Korea and the UK co-hosted the AI Seoul Summit, a follow-up to the UK AI Safety Summit held in November. The Summit brought together states, civil society, and industry to expand on discussions from the UK Summit, with a primary focus on establishing international, multi-stakeholder cooperation on safe AI innovation.
In advance of the Summit, 16 tech companies agreed to a voluntary set of AI safety commitments, but as FLI’s Director of Policy Mark Brakel told Euronews, voluntary commitments don’t go far enough: “goodwill alone is not sufficient”. We published our own recommendations for seizing the unique opportunity for progress presented by the Summit, which you can read here.
The Summit concluded with a declaration signed by 27 states and the European Union, reaffirming their commitment to “collaborative international approaches” that harness AI’s benefits while mitigating its risks.
FLI President Max Tegmark was honoured to attend the Summit; his comments on it and his hopes for it were featured in Reuters, TechTimes, and Fox.
Max also spoke to The Guardian from the Summit, calling out Big Tech's efforts to shift attention away from the existential risk presented by AI, a distract-and-delay tactic intended to stave off meaningful regulation. Likening these efforts and industry lobbying to those employed for decades by the tobacco industry, Max again called on policymakers to take AI’s catastrophic risks seriously and to recognise that we can, and must, address both AI's ongoing harms and its escalating risks.
FLI AI Safety Summit Lead Imane Bello also attended, and summarized her reflections on the Summit on X:
Some reflections on the Seoul Safety Summit last week. I think it was a success. Why? 🧵 ⬇️
— Imane Bello (@ImaneBello)
9:16 AM • May 29, 2024
OpenAI Updates
OpenAI’s safety team has been generating a lot of attention recently, following several notable departures over the past few months.
To recap: researchers Ilya Sutskever and Jan Leike, both of whom led OpenAI’s former Superalignment team - which focused on existential risks presented by AI - resigned within hours of each other earlier this month. At least six other departures have made news since the start of the year, four of them employees who had worked on the Superalignment team or on AI governance more broadly.
Leike posted a thread on X detailing his reasons for leaving the company:
Building smarter-than-human machines is an inherently dangerous endeavor.
OpenAI is shouldering an enormous responsibility on behalf of all of humanity.
— Jan Leike (@janleike)
3:57 PM • May 17, 2024
Earlier this week, OpenAI announced that they’re training a new “frontier model”, which they anticipate will “bring us to the next level of capabilities on our path to AGI”. They also announced the launch of a new Safety and Security Committee to assess and further develop OpenAI’s safety processes.
Also this week, former OpenAI board members Helen Toner and Tasha McCauley published an article in The Economist, arguing that the public sector and “prudent regulation” must be part of AI development.
Governance and Policy Updates
The EU AI Act is (finally) final! On 21 May, the Act - the world’s first comprehensive AI law - received approval from the European Council, clearing the last hurdle before becoming law. Implementation will begin soon, with most provisions taking effect over the next two years.
The EU AI Office has also just announced its structure, with Lucilla Sioli, the European Commission’s current Director for AI and Digital Industry, formally appointed to lead it.
Be sure to check out FLI EU Research Lead Risto Uuk’s dedicated EU AI Act Newsletter for more details and analyses.
The UK AI Safety Institute has announced a new £8.5 million grants programme to support research furthering systemic AI safety. Learn more here; applications open shortly.
The U.S. Bipartisan Senate AI Working Group unveiled its long-awaited Senate AI Roadmap, outlining the current status of policy proposals intended to support safe and responsible AI innovation, and potential next steps.
FLI’s Max Tegmark responded to the Roadmap, praising “this important step forward” while urging “more action as soon as possible”.
As the AI Seoul Summit kicked off, the U.S. Department of Commerce released its strategic vision for AI safety. It also announced the formation of a new international AI safety alliance.
The alliance will “strengthen and expand on” U.S. AI safety collaborations with the UK, Japan, Canada, Singapore, and the EU as they work to “promote safe, secure, and trustworthy artificial intelligence systems for people around the world”.
Updates from FLI
In the latest from our ongoing partnership with The Elders, which calls for long-view leadership on the most pressing threats to humanity, The Elders’ Chair and former President of Ireland Mary Robinson released a statement reaffirming The Elders’ call for global governance of AI:
Mary Robinson voices concern at the lack of progress on #AI safety and reaffirms The Elders' call for inclusive global governance of this existential risk:
"I remain deeply concerned at the lack of progress on the global governance of artificial intelligence. Decision-making on AI’s… x.com/i/web/status/1…
— The Elders (@TheElders)
7:31 PM • May 29, 2024
We were thrilled to host an in-person event in Brussels, bringing together policymakers, civil society, industry, and academia for a discussion of the EU AI Act and the future of AI governance in Europe.
In a special episode of the FLI Podcast, FLI President Max Tegmark spoke to Christian Nunes, President of the National Organization for Women (NOW), about the growing deepfakes issue. Their conversation covered a variety of topics, including the intersection of women’s safety and deepfakes, why NOW joined the Campaign to Ban Deepfakes, and more.
Our policy team has published a new document with analyses of, and our recommended amendments to, current U.S. legislative proposals addressing deepfakes.
Emilia Javorsky, Director of FLI’s Futures program, was interviewed for a Scientific American article about the Futures program’s work imagining positive futures with AI, and how storytelling can help conceptualize, and ultimately create, such futures.
Mark Brakel, FLI’s Director of Policy, joined journalist Armen Georgian’s podcast for an episode on AI in warfare.
FLI Policy Researcher Isabella Hampton spoke on a panel about open source AI at the AI for Good Global Summit.
FLI’s Executive Director Anthony Aguirre appeared on The Foresight Institute Podcast, discussing the role of worldbuilding in imagining a future with humanity empowered by AI.
On the FLI podcast, host Gus Docker interviewed Emerj AI Research founder Dan Faggella about whether humanity should pursue AGI, the open-source debate, how AI will change power dynamics, and more.
What We’re Reading
Worlds to explore: The results of the Foresight Institute’s Existential Hope worldbuilding program are now available to explore on their new site! Check out the detailed worlds, in which humanity is empowered rather than disempowered by AI, that these teams have envisioned when thinking ahead to 2045.
Religious perspectives on AI: In a new initiative from our Futures program, we’re amplifying a diverse range of religious perspectives on AI, to expand popular discourse on a topic sure to affect us all. We’re honoured to have Dr. Chinmay Pandya, a leader in All World Gayatri Pariwar, contribute the first blog post in this series, offering a Hindu perspective on AI’s risks and opportunities.
New AIPI Polling: Via Politico, new polling from the AI Policy Institute finds that a large majority of Americans support regulation of AI training data, with a majority believing that AI companies shouldn’t be allowed to train models on public data and must compensate the creators of the data they use.
What we’re watching: A new video from Digital Engine compiles statements from leaders at top AI companies, and many AI pioneers, to provide an overview of AI's catastrophic risks: