AI at the Vatican
Plus: Fellowship applications open; global call for AI red lines; new polling finds 90% support for AI rules; register for our $100K creative contest; and more.

Welcome to the Future of Life Institute newsletter! Every month, we bring 44,000+ subscribers the latest news on how emerging technologies are transforming our world.
If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?
Today's newsletter is a seven-minute read. Some of what we cover this month:
🧑‍🎓 Apply to our 2026 fellowships
🚧 Global Call for AI Red Lines
⛪ AI at the Vatican
🇺🇸 Americans want AI rules, 9-to-1
And more.
If you have any feedback or questions, please feel free to send them to [email protected].
The Big Three
Key updates this month to help you stay informed, connected, and ready to take action.
→ Global Call for AI Red Lines: AI could deliver huge benefits to humanity - but without guardrails, we risk a future of escalating AI-driven harms. That’s why we’ve joined more than 70 organizations and 200+ experts in the Global Call for AI Red Lines, urging governments to agree on clear limits for AI by the end of 2026. These red lines are essential to prevent the most severe risks to humanity and global stability, before it’s too late. Help spread the word by signing and sharing the call:
→ AI talks take the Vatican: As part of the World Meeting on Human Fraternity, FLI President Max Tegmark and Futures Program Associate William Jones were honoured to join the Vatican’s groundbreaking gathering on AI. Max and numerous others, including artist will.i.am, Nobel laureates such as Maria Ressa, and “Godfather of AI” Yoshua Bengio, signed a “Global Appeal on Human Fraternity in the Age of AI” to present to Pope Leo. The appeal calls for leadership to uphold several principles and red lines, including keeping AI “a tool, not an authority” and protecting “human life and dignity”.
“We really need moral leadership on this issue... Why should there be no requirements you have to meet to have the right to unleash superintelligence on the world, when even to unleash a new pasta dish in the world, you have to have someone first check that the benefits outweigh the harm?”
- Max Tegmark
→ The numbers are clear, again: A new Institute for Family Studies survey finds 90% of Americans want Congress to introduce strong protections against AI harms - especially to keep children safe.
Among the highlights from the report:
Americans overwhelmingly agree that tech companies should be prohibited from deploying AI chatbots that engage in sexual conversations with minors.
Respondents across all age groups, income brackets, and both major parties agree that Congress should prioritize protecting children over blocking states from regulating AI companies.
90% of Americans agree that families should be granted the right to sue an AI company “if its products contributed to harms such as suicide, sexual exploitation, psychosis, or addiction in their child.”
Heads Up
Other don't-miss updates from FLI and beyond.
→ FLI fellowships open again: We’re now accepting applications to our 2026 fellowship programs! Three impactful tracks are open:
Apply by November 21 for the PhD fellowships, and by January 5 for the postdoctoral fellowships.
→ Keep the Future Human creative contest: Reminder to register for our new Keep the Future Human creative contest, with $100,000+ in prizes available for creative digital media that brings the key ideas in Keep the Future Human to life! Submissions are due November 30.
→ AI is pulling up the career ladder: Labour-market research firm Revelio Labs found that entry-level job postings have declined by a whopping 35% since January 2023. The decline is leaving entry-level workers in the lurch, and raises existential concerns about the traditional career track that white-collar workers could previously depend on for advancement. FLI’s Max Tegmark, interviewed for a CNBC article on the report, shared this perspective: "If we continue racing ahead with totally unregulated AI, we’ll first see a massive wealth and power concentration from workers to those who control the AI, and then to the machines themselves as their owners lose control over them."
→ “I would take control”: A new video from Inside AI explores whether AI would ever hurt a human - through direct conversations with LLMs:
On the FLI Podcast, host Gus Docker was joined by:
→ Basil Halperin, economist, to discuss what markets tell us about AI timelines.
→ Luke Drago, co-author of “The Intelligence Curse” essay series, to cover how AI could reduce incentives for firms to invest in people.
→ Nate Soares, co-author of the new “If Anyone Builds It, Everyone Dies” book, to explain how building superintelligence would result in human extinction.
→ Beatrice Erkers, Existential Hope Program Director at the Foresight Institute, to discuss how we keep humans in control of AI.