Future of Life Institute Newsletter: FLI x The Elders, and #BanDeepfakes
Former world leaders call for action on pressing global threats; launching the campaign to #BanDeepfakes; new funding opportunities from our Futures program; and more.
Welcome to the Future of Life Institute newsletter. Every month, we bring 42,000+ subscribers the latest news on how emerging technologies are transforming our world.
If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?
Today's newsletter is a 10-minute read. Some of what we cover this month:
FLI partners with The Elders to call for action on urgent threats.
The campaign to ban deepfakes - have you signed the open letter?
FLI's new Autonomous Weapons Newsletter.
New funding and career opportunities!
FLI partners with The Elders
We're proud to announce our new partnership with The Elders, an international organization founded by Nelson Mandela to bring together former world leaders in pursuit of peace, justice, human rights, and a sustainable planet.
As our first joint initiative, together we've released an open letter urging world leaders to take decisive action on the ongoing harms and existential risk presented by ungoverned AI, the climate crisis, nuclear weapons, and pandemics.
The letter, widely covered in the media and already signed by 2,500+ individuals including former leaders, Nobel Laureates, scientists, activists, and artists, calls on decision-makers to adopt the principles of "long-view leadership".
If you haven't already, you can add your name to the letter and share our social media posts.
Ban Deepfakes Campaign Launch
A diverse and growing coalition of experts, activists, lawmakers across the political spectrum, and independent organizations, including FLI, SAG-AFTRA, Women Against Violence Europe, Control AI, and the Women's Media Center, has formed to call for meaningful legislation to ban nonconsensual deepfakes.
The campaign and its open letter have been circulating widely since their release last week, with the call to action to prevent further harm resonating with many. Signatories already include Steven Pinker, Andrew Yang, Marietje Schaake, Sneha Revanur, Chris Weitz, Stuart Russell, and many others.
However, we need your help to maximize the campaign's reach. Learn more about the objectives of the open letter below, and join the campaign.
New Futures Funding Opportunities
Our Futures program, launched in 2023 to guide humanity towards the beneficial outcomes made possible by transformative technologies, is offering two exciting new funding opportunities to support research into realizing a positive, empowered future with AI.
Details about the two Requests for Proposals:
Request for Proposals #1: For papers evaluating and predicting the impact of AI on the achievement of the UN Sustainable Development Goals (SDGs) relating to poverty, healthcare, energy, or climate change.
Request for Proposals #2: For designs of trustworthy global mechanisms or institutions to govern AGI in the near future.
Selected proposals will receive a $15,000 grant to support this work, to be used at the researcher's discretion. Applications are due April 1st, 2024.
Coming soon: The Autonomous Weapons Newsletter
We're excited to announce that we're launching a new newsletter: The Autonomous Weapons Newsletter.
The Autonomous Weapons Newsletter will keep subscribers up-to-date on the autonomous weapons space, with a monthly update covering policymaking efforts, weapons systems technology, and more.
With this newsletter, we aim to provide a well-informed, objective resource for an audience primarily consisting of policymakers, journalists, and diplomats.
Subscribe and spread the word, and keep an eye out for our first edition coming mid-March!
Governance and Policy Updates
The EU's new "AI Office" will play a key role in implementing the AI Act: it can compel Big Tech companies to evaluate their models, conduct adversarial testing, and mitigate systemic risk. The regulator will be able to access the world's most powerful models and can issue fines of up to 3% of global turnover for non-compliance. The Office will shortly begin recruiting for technical roles; applicants with legal and policy backgrounds will also be invited to apply soon. Subscribe to AI Office updates for more information.
Following the November AI Safety Summit, the UK Government has announced a significant new investment in AI safety, directing £100 million+ of funding to support new research on responsible AI and prepare regulators to meaningfully address AI risks.
On February 2nd, the EU AI Act was unanimously endorsed by all 27 EU member states, with the text formally adopted by the EU's Committee of Permanent Representatives (COREPER). This is yet another important milestone as the Act approaches its final legislative stages.
The leaders of the U.S. House of Representatives are launching a 24-member bipartisan House task force to ensure safe, responsible AI innovation. FLI's Director of U.S. Policy, Landon Klein, told Semafor this was a "very positive step" towards regulation.
California State Senator Scott Wiener has introduced a "landmark" bill requiring tech companies to submit new AI models for safety testing before release, with great potential to influence national standards.
Updates from FLI
Along with 200+ other leading AI experts, we've been invited to join the National Institute of Standards and Technology (NIST) AI Safety Institute Consortium to support the development of safe and trustworthy AI.
As mentioned in Politico, we've created a new EU AI Act Explorer tool, which allows easy navigation of all 280 pages of the AI Act text and makes it simple to search and share specific content within it.
We've published two new resources on AI risk: an overview of catastrophic AI scenarios, and an outline of the risk of gradual disempowerment by AI. FLI's Ben Eisenpress was also interviewed by the Daily Mail, giving an overview of AI risks.
On the FLI podcast, host Gus Docker interviewed Sneha Revanur, founder of Encode Justice, about the social effects (negative and positive) of AI, mutual interests between AI ethics and AI safety, and AI-powered misinformation. He also spoke to AI safety expert and professor Dr. Roman Yampolskiy about evidence that AI is uncontrollable, the debate around designing human-like AI, and scaling laws.
Bonus: Gus was also in the interviewee's seat in February! He joined the Foresight Institute Existential Hope podcast to discuss his optimism about the future and the potential of technology to empower people.
As covered in Politico Morning Tech, we've published an article examining the possibility of holding AI model developers liable for nonconsensual sexual material made with their systems, under the provisional EU directive combating violence against women.
Director of Policy Mark Brakel spoke to IT for Business about AI regulation at the World AI Cannes Festival.
Mark also joined Francesca Rossi, Yann LeCun, and Nick Bostrom (who joined virtually) at the Festival for a panel debate on slowing down AI research.
A number of FLI staff participated in, and a few even judged, the Foresight Institute Existential Hope institution design hackathon earlier this month. We were proud to sponsor this event.
Executive Director Anthony Aguirre was interviewed by NHK World News about the global AI safety discussions that have followed November's UK AI Safety Summit.
Anthony also spoke to WIRED about the U.S. government's plan to require that tech companies inform it when training powerful AI models.
What We're Reading
AI chatbots choose violence: An article from the New Scientist covers how, in multiple war game simulations, large language models (LLMs) tend to choose violence and nuclear strikes.
What we're watching: A new educational video from Kurzgesagt explores what the aftermath of nuclear war would look like, including the devastating impact of nuclear winter.
What we're listening to: Inspired by our own U.S.-Russia nuclear exchange simulation released last summer, award-winning musician Kluane Takhini has released a musical arrangement accompanying the original video.
An emerging means of governance: A new report presents computing power as a promising tool for AI governance, allowing governments to "Track or monitor compute to gain visibility into AI development and use; Subsidize or limit access to compute to shape the allocation of resources across AI projects; [and] Monitor activity, limit access, or build 'guardrails' into hardware to enforce rules".
Why this matters: As governments around the world examine how to regulate rapidly-evolving AI technology, computing power could serve an important role given, as the report's authors note, its detectability, excludability, quantifiability, and supply chain concentration. However, this node of governance also requires careful consideration of its risks, namely that it could be "used to infringe on civil liberties, perpetuate existing power structures, and entrench authoritarian regimes".