Future of Life Institute Newsletter: New $4 million grants program!

Mitigating AI-driven power concentration, Pindex and FLI collaboration, announcing our newest grantees and their projects, and more.

Welcome to the Future of Life Institute newsletter. Every month, we bring 43,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is a seven-minute read. Some of what we cover this month:

  • šŸ’° $4 million grants program addressing AI-driven power concentration

  • šŸ“½ļø New video with our partners at Pindex

  • šŸ’” The recipients of our problem-solving AI grants

  • šŸ’¼ Weā€™re hiring!

  • šŸ” Yoshua Bengio reasons through arguments against AI safety

šŸ‡ŖšŸ‡ŗ Also, the EU AI Act is now *officially* final, and enters into force today! Check out our new implementation timeline for a list of implementation milestones through 2031.

Our New $4 Million Grants Program

Ungoverned AI development is currently on track to concentrate power among a small number of individuals, organizations, and corporations. To help mitigate this, weā€™ve launched a new grants program of up to $4 million to support projects working against AI-driven power concentration, and towards a better world of meaningful human agency.

Power concentration here could refer to the ā€œownership of a decisive proportion of the worldā€™s financial, labour, or material resources, or at least the ability to exploit them. It could be control of public attention, media narratives, or the algorithms that decide what information we receive.ā€

Applications are being accepted on a rolling basis. The first round of review for projects began this week; the second will commence 15 September. Find out more, and apply, here.

Pindex x FLI Collaboration

With our partners at Pindex, we helped to produce a popular new video with iconic actor Stephen Fry outlining the startling risks from the continued development of - and investment in - ever more powerful AI systems.

Watch and share it below:

Problem-Solving AI Grant Recipients

ā€œBig Tech companies are investing unprecedented sums of money into making AI systems more powerful rather than solving societyā€™s most pressing problems. AIā€™s incredible benefits ā€“ from healthcare, to education, to clean energy ā€“ could largely already be realized by developing systems to address specific issues. AI should be used to empower people everywhere, not further concentrate power within a handful of billionaires.ā€

Dr. Emilia Javorsky, Director of FLIā€™s Futures Program

Weā€™re excited to announce the recipients of our two recent Futures program grant tracks!

In February, we released two new Requests for Proposals to support research on how AI can be safely harnessed to solve specific, intractable problems faced by people around the world. The first focused on how AI may impact the UN Poverty, Health, Energy and Climate Sustainable Development Goals (SDGs). The second called for design proposals for global institutions governing advanced AI, or artificial general intelligence (AGI). We received an incredible 130 entries, with entrants from 39 countries.

As a result, $240,000 in total has been directed to 16 grantees, each receiving $15,000. We look forward to seeing what these projects produce around the world, on a wide variety of topics - from the effects of AI on maternal mortality, climate change education, labour markets, and poverty, to institution design proposals such as a ā€œCERN for AIā€, Fair Trade AI, and a Global AGI agency.

Read more about all of the AI and UN SDG granteesā€™ projects here, and all of the institution design granteesā€™ projects here.

Weā€™re Hiring!

Looking for a high-impact job, working alongside a fantastic team? Come work with us!

Weā€™re hiring for four roles across FLI:

  1. U.S. Communications Manager (Permanent, Full-Time)

  2. Operations Associate (Permanent, Full-Time)

  3. Video Producer (Contract, Part-Time)

  4. Content Writer and Researcher (Contract, Part-Time)

All roles support remote work. Apply by 4 August at the links above!

Updates from FLI

  • FLIā€™s U.S. Policy Specialist Hamza Chaudhry wrote an op-ed in TIME, about the under-discussed risks of the ā€œnear-complete absence of [AI safety and capabilities] testing in non-English languages.ā€

  • Hamza was quoted in another TIME article on Metaā€™s recent release of its new open-source LLM:

ā€œComing from the Global South, I am acutely aware that AI-powered cyberattacks, disinformation campaigns and other harms pose a much greater danger to countries with nascent institutions and severe resource constraints, far away from Silicon Valley.ā€

Hamza Chaudhry in TIME

  • Dr. Emilia Javorsky, Director of FLIā€™s Futures Program, had an essay published in Noema explaining why we need an ā€œFDA for AIā€.

  • Executive Director Anthony Aguirre released a statement calling for stronger whistleblower protections and AI regulation in the U.S., in light of recent allegations from OpenAI employees.

  • Anthony was also cited in this New York Times article on AI ā€œslopā€.

  • FLIā€™s AI Safety Summit Lead Ima Bello hosted AI pioneer Stuart Russell for the launch of the Paris AI Safety Breakfast series:

  • FLIā€™s EU Research Lead Risto Uuk spoke to Positive.News about increased cyber risks from AI.

  • Our Superintelligence Imagined contest, offering five $10,000 prizes for the best creative materials on risks from superintelligence, was covered by Graphic Competitions, InfoDesigners, FAD Magazine, and ArtInfo.

  • Semafor covered our new AI-driven power concentration grants program in their tech newsletter.

  • The Foresight Institute published a blog post outlining the three winning AI governance institution proposals from the Existential Hope Hackathon we co-hosted in February.

  • We were honoured to have Mary Robinson, former President of Ireland and Chair of The Elders (with whom weā€™re calling for long-view leadership addressing existential threats) on the FLI podcast. She and host Gus Docker discussed how to overcome barriers to international cooperation, her advice to the next generation of world leaders, long-view leadership, and more.

  • FLIā€™s Dr. Emilia Javorsky also joined the podcast for an episode on how AI concentrates power, how we might mitigate that, and what utopia(s) could look like.

What Weā€™re Reading

  • Working through anti-AI safety arguments: AI expert and Turing Award winner Yoshua Bengio published an extensive blog post, reasoning through 12 of the most popular arguments against taking AI safety seriously. Read his full post, and check out our summary of it below:

  • ā€œFacebookā€™s anti-regulatory attack dogā€: Transformer Newsā€™ Shakeel Hashim highlights Metaā€™s expensive advertising efforts - via a ā€œdark money groupā€ - against AI regulation, seemingly appealing to fears about U.S.-China competition.

  • A race the public doesnā€™t want: Speaking of U.S.-China competition, new polling finds that 75% of Americans donā€™t buy into the argument that U.S. AI companies should race ahead building powerful systems, without regulations, in order to compete with China - a narrative that many of the companies themselves are evidently trying to push.