Future of Life Institute Newsletter: The AI 'Shadow of Evil'
Notes from the Vatican on AI; the first International AI Safety Report; DeepSeek disrupts; a youth-focused video essay on superintelligence, by youth; grant and job opportunities; and more!
Welcome to the first Future of Life Institute newsletter of 2025! Every month, we bring 43,000+ subscribers the latest news on how emerging technologies are transforming our world.
If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?
Today's newsletter is a nine-minute read. Some of what we cover this month:
📰 What is DeepSeek, and what does it mean for the AI industry?
🇻🇦 The Vatican’s new document on AI
🕛 2025 Doomsday Clock update
📺 A video essay on superintelligence made for youth, by youth
💵 Final call for our multistakeholder engagement RFP
And much more.
If you have any feedback or questions, please feel free to send them to [email protected]. Happy New Year!
DeepSeek Disruption
In case you somehow missed it, Chinese AI startup DeepSeek has released a new AI model, offering what appears to be a cheaper but similarly capable alternative to OpenAI’s ChatGPT. Following its release on January 20th, DeepSeek has already climbed to the top of Apple’s App Store download charts - and shaken up the U.S. AI industry. On the stock market, Nvidia suffered the largest single-day loss of market value in U.S. history, shedding nearly $600 billion last Monday.
So what’s all the fuss about? DeepSeek R1 is reportedly about as capable as OpenAI’s recent o1 model, but was trained for only a fraction of the $100+ million budget that went into o1’s training. Its developers also appear to have worked around the U.S. ban on exporting advanced Nvidia chips to China, in place since 2022. These developments call into question the ability of the U.S. and its companies to prevent other countries and actors from catching up in AI development - and the point of the AI ‘arms race’ we seem to be locked in, to humanity’s detriment. Rumours abound that the model may have been trained on outputs from OpenAI models, but nothing has been confirmed.
While much media coverage has cast this development as part of a larger U.S.-China AI race, we urgently need to move away from that framing. If the U.S. and China continue to compete on AI development without adequate safety protocols, the incentive to cut corners on alignment, oversight, and other safety and ethics considerations will only grow.
Instead of escalating competition, both nations should prioritize risk mitigation efforts - independently and, in an ideal world, collaboratively. Open dialogue between AI labs, researchers, and policymakers across geopolitical divides is crucial to ensuring that AI development remains safe and beneficial for all.
Antiqua et Nova
On January 28, 2025, the Vatican released a comprehensive document titled "Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence," addressing AI’s potential implications and risks. This 30-page note emphasizes that AI should serve as a tool to complement human intelligence rather than replace it, underscoring the unique qualities inherent to humans.
The Vatican also raises concerns about AI's role in warfare, particularly the ethical implications of autonomous weapons systems that operate without human oversight, warning of the potential for a destabilizing arms race with catastrophic consequences for all.
One of the more notable passages refers to the “shadow of evil” that, the authors suggest, looms over AI: “Where human freedom allows for the possibility of choosing what is wrong, the moral evaluation of this technology will need to take into account how it is directed and used.”
Final Call: Multistakeholder Engagement RFP
A reminder that applications for our Multistakeholder Engagement for Safe and Prosperous AI grant program close February 4th!
We’re offering up to $5 million in total, with individual grants likely between $100K and $500K per project, to support work educating and engaging specific stakeholder groups on AI issues, or directly delivering grassroots outreach and organizing with the public.
Submit your brief letter of intent here.
Spotlight on…
We’ve got another winning entry from our Superintelligence Imagined creative contest for you this month!
We’re delighted to feature Young Asian Scientists’ video essay, “Superintelligence: The End of Humanity or Our Golden Age?”. As they describe it, this video is “an informative and accessible exploration of what Superintelligence is, how it might be reached, and its possible consequences” developed to help inform a younger audience.
Watch it below, and check out the other winning and runner-up entries here!
Updates from FLI
Enthusiastic about improving humanity’s future, and experienced in building and managing WordPress websites? We’re hiring a part-time Website Developer & Editor! Proficiency in HTML, CSS (especially frameworks like TailwindCSS), JavaScript, and PHP, as well as familiarity with design tools such as Figma, is essential. Apply by March 1.
Although not an FLI update, we’re delighted to share the Tarbell Center’s Fellowship program! Aimed at early-career journalists interested in covering AI and its effects, these fellowships offer a fully funded, nine-month placement in a major newsroom. Apply by February 28!
While also not explicitly an FLI update, EU Research Lead Risto Uuk is helping to organize the International Conference on Large-Scale AI Risks, taking place at KU Leuven May 26-28. Participants are invited to submit work relating to large-scale AI risks - if interested, submit an abstract to [email protected] by February 15.
FLI President Max Tegmark presented several talks at Davos last week on risks from developing AGI, and how AI can augment, rather than replace, humanity:
"It can either be the best thing to ever happen to humanity, or the worst."
"We don't have time to wait for a big disaster before we treat AI like every other industry."
📺 @tegmark spoke with @axios' @inafried about how we can build a positive, human-empowering future with AI:
Also at Davos, Max joined ‘Godfather of AI’ Yoshua Bengio on CNBC for an extensive interview about AGI, as tech companies continue their push to develop it:
"We can have almost everything that we're excited about with AI... if we simply insist on having some basic safety standards."
📺 FLI President @tegmark and 'godfather of AI' @Yoshua_Bengio joined @CNBC's @ArjunKharpal for a discussion of all things AGI and ASI.
Watch below:
On the FLI podcast, ARIA’s David “davidad” Dalrymple joined host Gus Docker for an episode about his work on ‘Safeguarded AI’, a new approach for guaranteeing safety in highly advanced AI systems.
Also on the podcast, Fr. Michael Baggot joined to discuss Catholic perspectives on transhumanism and superintelligence, and how religious communities approach advanced AI.
What We’re Reading
International AI Safety Report: 100 independent AI experts from around the world have released the first-ever International AI Safety Report. Backed by 30 countries and the OECD, UN, and EU, the Report aims to provide policymakers with an evidence-based summary of AI capabilities and risks, and how to mitigate those risks. It will likely be presented at the French AI Action Summit, taking place February 10-11. Read it now here!
What we’re doing: We look forward to participating in and watching highlights from the inaugural International Association for Safe & Ethical AI Conference, in Paris February 6-7, right before the AI Action Summit. This event will gather experts from academia, industry, government, and civil society to discuss advancements in AI safety and ethics. Keep an eye out for updates on LinkedIn and X.
2025 Doomsday Clock: The Bulletin of the Atomic Scientists has updated its Doomsday Clock for 2025, moving it one second closer: 89 seconds to midnight, the closest it has ever been. This change reflects growing risks from nuclear threats, climate change, biological dangers, and disruptive technologies such as AI, along with global leaders’ failure to address them. Even a one-second shift signals extreme danger - delaying action further only increases the risk of catastrophe.