Future of Life Institute Newsletter: Meet PERCEY
Introducing our new AI awareness companion; notes from the AI Action Summit and IASEAI; a new short film on AI replacing human labour; and more!

Welcome to the Future of Life Institute newsletter! Every month, we bring 44,000+ subscribers the latest news on how emerging technologies are transforming our world.
If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?
Today's newsletter is an eight-minute read. Some of what we cover this month:
💬 Meet PERCEY, your AI awareness companion
🇫🇷 Reflections from Paris
🎞️ A new short film on AI's threat to human labour
🎨 Superintelligence Imagined: 'The Great Plan'
And more.
If you have any feedback or questions, please feel free to send them to [email protected].
Introducing PERCEY: Your AI Awareness Companion
Today, we're thrilled to launch 'PERCEY Made Me': an innovative AI awareness campaign with an interactive web app at its centre. PERCEY is an AI chatbot built to engage people and, in just a few minutes, spread awareness of AI's current ability to persuade and influence.
Voiced by the legendary Stephen Fry, PERCEY is your personal guide to navigating the rapidly evolving world of artificial intelligence. With AI threatening to reshape our lives at lightning speed, PERCEY offers a unique, approachable way to:
Assess your personal AI risk awareness
Challenge and explore your assumptions about AI and AGI
Gain insights into AI's potential impact on your future
Whether you're a tech enthusiast, cautious observer, or simply curious about the AI landscape, PERCEY provides a refreshing, humour-infused approach to help counter the reckless narratives Big Tech companies are pushing.
Chat with PERCEY now, and please share widely! You can find PERCEY on X, BlueSky, and Instagram at @PERCEYMadeMe.
Reflections from Paris
"The development of highly capable AI is likely to be the biggest event in human history. The world must act decisively to ensure it is not the last event in human history. This conference, and the cooperative spirit of the AI Summit series, give me hope; but we must turn hope into action, soon, if there is to be a future we would want our children to live in."
Paris was buzzing with much of the AI world last month. The inaugural International Association for Safe & Ethical AI Conference and the French AI Action Summit, among other events, brought together AI experts from academia, industry, government, and civil society.
The IASEAI Conference from 6-7 February convened the largest-yet global gathering focusing on cutting-edge developments in AI safety and ethics, with experts in attendance calling for urgent global action on AI safety - even producing a tangible 10-point call to action for lawmakers, academics, and the public.
The subsequent AI Action Summit from 10-11 February gathered 100+ countries to discuss AI, but ultimately prioritized national investments over meaningful global governance. The US and UK's refusal to sign the Summit's resulting declaration was another missed opportunity for meaningful international cooperation.
Be sure to read FLI's AI Summit Lead Ima Bello's full read-out of the AI Action Summit here for thorough coverage, key political context, analyses, and more.
Spotlight on…
We're excited to present another winning entry from our Superintelligence Imagined creative contest!
This month, we're featuring Karl von Wendt's multimedia project, The Great Plan.
As Karl described it, "'The Great Plan' is a story about our inability to comprehend the decisions of a superintelligent AI, aimed at a general audience. When the President announces the Great Plan in a speech, the audience is ecstatic. But some doubts remain among those who are tasked with putting it into reality: what are all those new data centers needed for, and why is no one allowed to ask questions about it?"
Watch it below, and take a look at the other winning and runner-up entries here!
We also want to highlight filmmaker Dagan Shani's new short film, 'OBSOLETE: Will AI take your job?'.
With tech company CEOs explicitly touting their AI's ability to replace human workers at scale, Dagan's film, which explores AI's implications for the economy and the job market through expert interviews, couldn't be more timely:
Updates from FLI
Ahead of the AI Action Summit, we published a Policymakersā Guide to AI, proposing safety standards to deliver controllable and beneficial AI tools.
Together with the Strategic Foresight Group, we published a policy brief on AI in the nuclear domain, exploring key convergence risks and policy solutions to enhance nuclear security.
In January at Davos, FLI President Max Tegmark hosted a panel on international collaboration on safe AI development, with Demis Hassabis, Yoshua Bengio, Dawn Song, and Ya-Qin Zhang. The panel talk has now been published; you can watch it here:
Max was also quoted in this Guardian article on the outcomes from Paris, elaborating on how "it's not today's AI we need to worry about, it's next year's."
FLI's AI & National Security Lead Hamza Chaudhry joined Marketplace to discuss the risky "accelerationist discourse" some have bought into regarding AI.
FLI's Head of EU Policy & Research Risto Uuk joined the Age of AI podcast to discuss common debates around AI risk.
FLI's Director of Policy Mark Brakel appeared on BBC World to discuss the AI Action Summit and its "missed opportunity".
FLI's Anna Hehir and Maggie Munro published the latest edition of The Autonomous Weapons Newsletter, covering UN talks on autonomous weapons systems, and more.
On the FLI podcast, Wise Ancestors' Ann Pace joined host Gus Docker to discuss Wise Ancestors' work, biobanking for global resilience, and more.
Also on the podcast, Palisade AI's Jeffrey Ladish joined to discuss rapid AI progress and loss of control, why AIs misbehave (including cheating at chess!), and more.
What Weāre Reading
An AI safety researcher's "worst nightmare": AI safety researcher Joshua Clymer wrote a fictional account describing in detail how AI could take over in as few as two years - especially frightening as AI companies continue their unchecked race to build more and more powerful AI.
AIs hacking chess: New research from Palisade AI, as Jeffrey Ladish discussed on the FLI podcast, finds that new AI models will sometimes try to cheat by hacking when sensing impending defeat in a chess match. Read the full paper here, and TIME's coverage.
What Weāre Watching: SciShow released a video summarizing some risks of advancing AI - clearing up misconceptions about AI consciousness, how we can mitigate AI risks, and more.