Future of Life Institute Newsletter: The Year of Fake
Deepfakes are dominating headlines, with much more disruption expected; plus the 2024 Doomsday Clock announcement, AI governance updates, and more.
Welcome to the Future of Life Institute newsletter. Every month, we bring 41,000+ subscribers the latest news on how emerging technologies are transforming our world.
If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?
Kicking off the year, today's newsletter is a nine-minute read. Some of what we cover in this edition:
🕛 The 2024 Doomsday Clock announcement.
🛑 AI-powered deepfakes are already wreaking havoc in 2024.
📰 Results from the largest-ever survey of AI researchers.
💼 We’re hiring! Learn more in our FLI Updates section.
It’s (still) 90 seconds to midnight.
The Bulletin of the Atomic Scientists has set its Doomsday Clock to 90 seconds to midnight for 2024 - the same as in 2023 - indicating that humanity remains the closest to global catastrophe we’ve ever been.
The Bulletin’s highly regarded Science and Security Board called for urgent dialogue between world leaders (particularly from the U.S., China, and Russia) on the key global risk areas they identified: AI, climate change, biosecurity, and nuclear risk.
On turning back the clock (importantly, still possible!), the Board had this to say:
Everyone on Earth has an interest in reducing the likelihood of global catastrophe from nuclear weapons, climate change, advances in the life sciences, disruptive technologies, and the widespread corruption of the world’s information ecosystem. These threats, singularly and as they interact, are of such a character and magnitude that no one nation or leader can bring them under control. That is the task of leaders and nations working together in the shared belief that common threats demand common action.
Numerous other organizations echoed this critical message, with former Colombian President Juan Manuel Santos advocating on behalf of The Elders for world leaders to prioritize “long-view leadership”.
A Critical Moment for Action on Deepfakes
Deepfakes have been dominating news headlines - with much more deepfake-related disruption expected to follow as generative AI technology continues to improve.
Two high-profile stories even broke in the same week: first, robocalls featuring AI-generated fake audio of President Joe Biden targeted New Hampshire voters, encouraging them not to vote in their state’s primary election. A few days later, explicit deepfake images of Taylor Swift swamped social media, shared so widely that X temporarily blocked searches for her name.
These two examples highlight some of the key areas of concern around the proliferation of image- and audio-generation tools that allow users to easily create such harmful material.
With nearly half of the world’s population heading to the polls this year, AI-powered mis/disinformation - like the Biden robocalls - is expected to disrupt elections around the world, posing a grave threat to democratic institutions and societal stability. In their recent Global Risks Report, the World Economic Forum even identified AI-powered mis/disinformation as the "most severe global risk of the next two years".
The fake images of Taylor Swift expose another, even more pervasive harm of the technology: the creation and dissemination of nonconsensual explicit images. 96% of deepfakes online are sexual in nature, with 99% portraying women - almost entirely without their consent or awareness. This doesn't just apply to women in the public eye; with a single image, anyone can be made the victim of a nonconsensual deepfake.
With these stories in mind, we stress the need to meaningfully address deepfakes at every level of their supply chain: banning the creation and dissemination of nonconsensual deepfakes, and holding the developers and deployers behind image-generating programs liable for harms.
Updates from FLI
We’re hiring a representative for the French AI Safety Summit! Apply for this impactful, full-time remote (France-based) job by February 16.
At Davos, FLI President Max Tegmark spoke to Forbes about why almost all deepfakes should be illegal.
Also at Davos, Max spoke on a panel alongside, among others, Meta’s Yann LeCun.
Emilia Javorsky, Director of our Futures program, contributed to this paper about responsible transformation with generative AI.
The first proof-of-concept from our partnership with Mithril Security on hardware-backed AI governance was discussed in this WIRED article.
FLI’s Anna Hehir and Mark Brakel were quoted in the Daily Mail on the U.S. Department of Defense’s update to its AI rules and the risks associated with developing and deploying autonomous weapons systems.
FLI’s Executive Director Anthony Aguirre spoke to WIRED about the U.S. government soon requiring AI companies to notify it when training powerful AI models.
Director of U.S. Policy Landon Klein spoke to The Hill about the need for U.S. lawmakers to continue the progress they’ve made on AI regulation.
On the FLI podcast, host Gus Docker spoke to nuclear security expert Carl Robichaud about nuclear risk. In another episode, guest host Nathan Labenz of The Cognitive Revolution podcast interviewed AI entrepreneur Flo Crivello about the idea of AI as a new life form.
What We’re Reading
Top risks in 2024: Eurasia Group has identified “ungoverned AI” as one of 10 top political risks the world faces in 2024.
2,700+ AI researchers weigh in: AI Impacts has released the results of the largest survey of AI researchers to date, finding that a majority of researchers hold both high hopes and dire concerns about AI. Be sure to read the fascinating results in full.
Critical co-operation: As the Financial Times reports, the U.S. is signaling that it will work with China on AI safety - a much-needed development considering the inherently global nature of AI risks.
UN report: The UN High-level Advisory Body on AI, of which FLI co-founder Jaan Tallinn is a member, has released its interim report, outlining guiding principles and institutional functions necessary to govern AI for humanity.
Meta’s “irresponsible” pursuit: In The Guardian, experts weigh in on Meta’s aim to create open source AGI.
New Research: “Sleeper Agent” LLMs
“False impression of safety”: New research indicates that LLMs can be trained to deceive - and even to conceal this deception during training and evaluation. Additionally, the standard safety techniques employed in this research were not sufficient to fully remove the deceptive behaviour once the models had been trained on it.
Why this matters: The idea of an LLM being able to present itself dishonestly is terrifying - and if such deception can be hidden in more advanced AI systems, the consequences could be disastrous. These findings further expose the many vulnerabilities (including those not yet found) of such systems, pointing to the need for more robust AI safety training and research, alongside careful consideration of advanced AI development.
Hindsight is 20/20
"Rocketing into the Northern Lights" by NASA Earth Observatory is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/?ref=openverse.
On January 25, 1995, Russian radar picked up a fast-moving airborne object near the country’s northern border. Because it appeared concerningly similar to an American nuclear missile, top Russian defense officials and President Boris Yeltsin were alerted; the nuclear briefcase was activated, starting a 10-minute countdown within which they were to assess the threat and decide whether to retaliate.
Russian commanders were told to stand by for launch instructions, bringing the world terrifyingly close to a nuclear exchange for several minutes. Thankfully, with the 10-minute decision deadline approaching, the object believed to be a U.S. missile fell into the sea, and Russian leadership decided it was not in fact a threat.
In fact, the unidentified object was a rocket launched from Norway to study the Northern Lights. Despite the Norwegian government having informed Russia of the launch weeks in advance, the radar team had not been notified.
Yet another nuclear close call, the Norwegian rocket incident highlights the extreme risk humanity faces from the mere existence of nuclear weapons. What could have happened if the rocket had disappeared into the sea only a few minutes later?
As we explore every month, this near-disaster is just one of many similar examples from which we must learn - and with particular relevance to our advocacy against integrating AI into nuclear command, control, and communications.