Future of Life Institute Newsletter: 'Imagine A World' is out today!
New FLI podcast series 'Imagine A World' explores positive and plausible futures
Welcome to the Future of Life Institute newsletter. Every month, we bring 40,000+ subscribers the latest news on how emerging technologies are transforming our world - for better and worse.
If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe here?
Today's newsletter is a 7-minute read. We cover:
Imagine A World is out today!
New polling shows overwhelming AI concerns and support for regulation
The UK AI Safety Summit is two months away - will China be invited?
PhD and postdoc fellowship applications are now open
Our new podcast series 'Imagine A World' is out today!
Can you imagine a world in 2045 where we manage to avoid the climate crisis, major wars, and the potential harms of Artificial Intelligence?
Imagine A World is a podcast series exploring a range of plausible and positive futures with advanced AI. We interview the creators of eight diverse and thought-provoking imagined futures, submitted as part of the worldbuilding contest FLI ran last year.
The first two episodes are available now on all major podcast platforms, and we'll be releasing a new episode each week. Peace Through Prophecy explores how new governance mechanisms could help us to coordinate, while Crossing Points unpacks the importance of designing and building AI in an inclusive way.
Listen to them now on YouTube, Spotify, or Apple Podcasts, and be sure to like and subscribe!
New U.S. Polling Finds Widespread Concern About AI
Recent polling from the new Artificial Intelligence Policy Institute (AIPI) and YouGov found that an overwhelming majority of American voters are deeply concerned about AI and support regulation.
Another poll from Pew Research Center highlighted how much AI anxiety is increasing among the American public, and how quickly.
In particular:
86% of voters believe AI could accidentally cause a catastrophic event;
70% agree that mitigating extinction risk from AI should be a global priority alongside risks such as pandemics and nuclear war;
82% believe that we should move more slowly and deliberately with AI development;
The share of people mostly concerned about AI in daily life has increased by 14 percentage points in just nine months.
The results reveal bipartisan support for guardrails on AI development. 64% of voters "support a policy that would require any organization producing advanced AI models to obtain a license, that all advanced models be evaluated for safety, and that all models must be audited by independent experts".
Apply Now for our Vitalik Buterin PhD and Postdoctoral Fellowships!
We are delighted to announce that applications are back open for our PhD and postdoctoral fellowships focused on AI existential safety research. The fellowship is global and open to all regardless of nationality or background; we are seeking a diverse applicant pool. All Fellows will receive applicable tuition and fees, as well as a stipend and research/travel fund.
Current or future PhD student intending to research AI existential safety? The deadline for PhD fellowship applications is 16 November 2023 at 11:59 pm ET.
Current or future postdoc working on AI existential safety research? The deadline for postdoctoral fellowship applications is 2 January 2024 at 11:59 pm ET.
Governance and Policy Updates
AI policy:
UN Secretary-General AntĂłnio Guterres has announced the creation of a multi-stakeholder High-level Advisory Body on AI, in line with the UN Roadmap for Digital Cooperation.
FLI participated in the creation of the Roadmap, and helped put forward the recommendation that such a body be created.
The UK's AI Safety Summit will take place 1-2 November, convening international governments and stakeholders to discuss action needed to centre safety in AI development.
While it's not yet confirmed who will be invited, as FLI President Max Tegmark recently discussed, the Summit's success is contingent on the inclusion of China - it could represent a "turning point" on AI safety.
In advance of the Summit, the House of Commons Science, Innovation and Technology Committee recently released a report urging the UK government to act quickly on AI regulation, citing twelve main risks presented by the technology - including extinction, privacy breaches, national security threats, and more.
Nuclear weapons:
In a rare move, more than 100 medical journals shared a joint call-to-action, urging the health community to work together to "reduce the risks of nuclear war and to eliminate nuclear weapons".
The UN International Day against Nuclear Tests on 29 August was met with renewed calls for the 1996 Comprehensive Nuclear-Test-Ban Treaty to come into force. Unfortunately, eight countries have yet to ratify it, leaving it at a standstill.
Updates from FLI
In the Bulletin of the Atomic Scientists, FLI's Dr. Emilia Javorsky and Hamza Chaudhry wrote an in-depth examination of AI convergence risks - the ways in which AI can amplify and accelerate risks from other technologies.
FLI co-founder and tech pioneer Jaan Tallinn appeared on Al Jazeera's The Bottom Line with host Steve Clemons to discuss the risk AI presents to humanity.
FLI President Max Tegmark was profiled in The Wall Street Journal, discussing the "dangerous race" among AI labs, and his shift from optimism about AI to alarm.
"Why are we jeopardizing the future like this when it can be so great?"
On the FLI podcast, host Gus Docker spoke to social scientist and GovAI International Governance Lead Robert Trager on the need for, and challenges of, international AI governance.
New Research: Hypnotizing AI for harm
Finding new vulnerabilities: IBM released new research on large language models (LLMs) and cybersecurity, demonstrating how remarkably easy it is to circumvent LLM safety guardrails through language alone, by "hypnotizing" them within the framing of a game.
Why this matters: This is just one example of a wave of recent research exposing the weaknesses of widely used LLMs. Researchers are particularly concerned that these vulnerabilities will empower bad actors who want to use LLMs to collect user data, spread disinformation, or obtain knowledge that could be used to cause harm.
What We're Reading
A proposal for AI governance: You may have heard this discussed on the FLI podcast - recent guest Robert Trager co-authored this new paper on applying a jurisdictional certification approach to international AI governance.
"Open-washing" AI: What does it mean for big tech to be "open" in the context of AI development? David Gray Widder, Sarah West, and Meredith Whittaker examine this in a new research paper.
A DIY disinformation machine: An anonymous developer made waves by publicising their recent AI disinformation project: an LLM-powered anti-Russia propaganda machine that produced fake tweets, articles, and even fabricated journalists and entire news sites.
"Invisible" bio labs: TIME reports on privately owned bio labs in the U.S. falling through the regulatory cracks, potentially exposing the public to serious biosecurity risks.
Hiroshima & Nagasaki, 78 Years Later
6 and 9 August marked the 78th anniversaries of the nuclear bombings of Hiroshima and Nagasaki respectively - the only two occasions thus far when an atomic bomb has been used in war. An estimated 215,000+ people, mostly civilians, were killed, and many more were injured or left homeless as the two cities were effectively destroyed.
Many contemporary nuclear bombs - an estimated 13,000 exist today - are several hundred times more powerful than those used on Hiroshima and Nagasaki 78 years ago. The continued threat of these weapons, an exchange of which would guarantee mass devastation, is unacceptable. We must work to ensure they are never used again.
Read more on the risks of nuclear weapons, and our work to mitigate them, here.