Future of Life Institute Newsletter: Meet PERCEY

Introducing our new AI awareness companion; notes from the AI Action Summit and IASEAI; a new short film on AI replacing human labour; and more!

Welcome to the Future of Life Institute newsletter! Every month, we bring 44,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is an eight-minute read. Some of what we cover this month:

  • 💬 Meet PERCEY, your AI awareness companion

  • 🇫🇷 Reflections from Paris

  • 🎞️ A new short film on AI’s threat to human labour

  • 🎨 Superintelligence Imagined: “The Great Plan”

And more.

If you have any feedback or questions, please feel free to send them to [email protected].

Introducing PERCEY: Your AI Awareness Companion

Today, we’re thrilled to launch 'PERCEY Made Me': an innovative AI awareness campaign with an interactive web app at its centre. PERCEY is an AI-based chatbot built to engage people and, in just a few minutes, raise awareness of AI’s current ability to persuade and influence.

Voiced by the legendary Stephen Fry, PERCEY is your personal guide to navigating the rapidly evolving world of artificial intelligence. With AI threatening to reshape our lives at lightning speed, PERCEY offers a unique, approachable way to:

  • Assess your personal AI risk awareness

  • Challenge and explore your assumptions about AI and AGI

  • Gain insights into AI's potential impact on your future

Whether you're a tech enthusiast, cautious observer, or simply curious about the AI landscape, PERCEY provides a refreshing, humour-infused approach to help counter the reckless narratives Big Tech companies are pushing.

Chat with PERCEY now, and please share widely! You can find PERCEY on X, BlueSky, and Instagram at @PERCEYMadeMe.

Reflections from Paris

"The development of highly capable AI is likely to be the biggest event in human history. The world must act decisively to ensure it is not the last event in human history. This conference, and the cooperative spirit of the AI Summit series, give me hope; but we must turn hope into action, soon, if there is to be a future we would want our children to live in."

Prof. Stuart Russell at IASEAI ’25

Paris was buzzing with much of the AI world last month. The inaugural International Association for Safe & Ethical AI Conference and the French AI Action Summit, among other events, brought together AI experts from academia, industry, government, and civil society.

The IASEAI Conference, held 6-7 February, convened the largest global gathering yet focused on cutting-edge developments in AI safety and ethics, with experts in attendance calling for urgent global action on AI safety - even producing a tangible 10-point call to action for lawmakers, academics, and the public.

The subsequent AI Action Summit, held 10-11 February, gathered 100+ countries to discuss AI, but ultimately prioritized national investments over meaningful global governance. The US and UK’s refusal to sign the Summit’s resulting declaration was another missed opportunity for international cooperation.

Be sure to read FLI’s AI Summit Lead Ima Bello’s full read-out of the AI Action Summit here for thorough coverage, key political context, analyses, and more.

Spotlight on…

We’re excited to present another winning entry from our Superintelligence Imagined creative contest!

This month, we’re featuring Karl von Wendt’s multimedia project, The Great Plan.

As Karl described it, “‘The Great Plan’ is a story about our inability to comprehend the decisions of a superintelligent AI, aimed at a general audience. When the President announces the Great Plan in a speech, the audience is ecstatic. But some doubts remain among those who are tasked with putting it into reality: what are all those new data centers needed for, and why is no one allowed to ask questions about it?”

Watch it below, and take a look at the other winning and runner-up entries here!

We also want to highlight filmmaker Dagan Shani’s new short film, “OBSOLETE: Will AI take your job?”.

With tech company CEOs explicitly touting their AI’s ability to replace human workers en masse, Dagan’s film, which explores AI’s implications for the economy and the job market through expert interviews, couldn’t be more timely:

Updates from FLI

  • Ahead of the AI Action Summit, we published a Policymakers’ Guide to AI, proposing safety standards to deliver controllable and beneficial AI tools.

  • Together with the Strategic Foresight Group, we published a policy brief on AI in the nuclear domain, exploring key convergence risks and policy solutions to enhance nuclear security.

  • In January at Davos, FLI President Max Tegmark hosted a panel on international collaboration on safe AI development, with Demis Hassabis, Yoshua Bengio, Dawn Song, and Ya-Qin Zhang. The panel talk has now been published; you can watch it here:

  • Max was also quoted in this Guardian article on the outcomes from Paris, elaborating on how “it’s not today’s AI we need to worry about, it’s next year’s.”

  • FLI's AI & National Security Lead Hamza Chaudhry joined Marketplace to discuss the risky "accelerationist discourse" some have bought into regarding AI.

  • FLI’s Head of EU Policy & Research Risto Uuk joined the Age of AI podcast to discuss common debates around AI risk.

  • FLI's Director of Policy Mark Brakel appeared on BBC World to discuss the AI Action Summit, and its "missed opportunity":

  • FLI’s Anna Hehir and Maggie Munro published the latest edition of The Autonomous Weapons Newsletter, covering UN talks on autonomous weapons systems, and more.

  • On the FLI podcast, Wise Ancestors’ Ann Pace joined host Gus Docker to discuss Wise Ancestors’ work, biobanking for global resilience, and more.

  • Also on the podcast, Palisade AI’s Jeffrey Ladish joined to discuss rapid AI progress and loss of control, why AIs misbehave (including cheating at chess!), and more.

What We’re Reading

  • An AI safety researcher’s “worst nightmare”: AI safety researcher Joshua Clymer wrote a fictional account describing in detail how AI could take over in as few as two years - especially frightening as AI companies continue their unchecked race to build more and more powerful AI.

  • AIs hacking chess: New research from Palisade AI, as Jeffrey Ladish discussed on the FLI podcast, finds that new AI models will sometimes try to cheat by hacking when sensing impending defeat in a chess match. Read the full paper here, and TIME’s coverage.

  • What We’re Watching: SciShow released a video summarizing some risks of advancing AI - clearing up misconceptions about AI consciousness, exploring how we can mitigate those risks, and more.