Over 65,000 Sign to Ban the Development of Superintelligence

Plus: Final call for PhD fellowships and Creative Contest; new California AI laws; FLI is hiring; can AI truly be creative?; and more.

Welcome to the Future of Life Institute newsletter! Every month, we bring 60,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is an eight-minute read. Some of what we cover this month:

  • 📝 65,000+ have signed the Superintelligence Statement. Will you join us?

  • ⚖️ California Gov. Newsom signs new AI safety legislation

  • 🧑‍🎨 Last chance to register for our Keep the Future Human Creative Contest!

  • 🎥 New videos to watch and share

And more.

If you have any feedback or questions, please feel free to send them to [email protected].

The Big Three

Key updates this month to help you stay informed, connected, and ready to take action.

Statement on Superintelligence: We’re thrilled to have more than 65,000 signatures on our Statement on Superintelligence, released October 22. Signatories span many backgrounds and beliefs, including:

  • Eight Nobel Laureates, along with scientists and AI experts such as Yoshua Bengio, Stuart Russell, Geoffrey Hinton, and Andrew Yao.

  • Business leaders including Steve Wozniak, Sir Richard Branson, and André Hoffmann.

  • AI company executives such as Emad Mostaque and Peng Zhang, alongside AI researchers from OpenAI, Anthropic, Google DeepMind, and Meta.

  • Artists like Joseph Gordon-Levitt, Grimes, Kate Bush, Natasha Lyonne, and Sir Stephen Fry.

  • Political/media figures including Glenn Beck, Steve Bannon, Susan Rice, Mike Mullen, Mary Robinson, and seven former Members of Congress from both sides of the aisle.

The short-and-sweet Statement calls for “a prohibition on the development of superintelligence, not lifted before there is

  1. broad scientific consensus that it will be done safely and controllably, and

  2. strong public buy-in”.

Aside from amassing an incredible number of signatures, the Statement has received worldwide coverage from the Associated Press, Fox (below), CNBC, CBS, The Guardian, the Washington Post, and more.

Sign here if you wish to join the call, and please share!

Republicans and Democrats alike want AI rules: As highlighted by the wide range of Superintelligence Statement signatories (who would’ve predicted Steve Bannon and Susan Rice signing the same open letter?), preventing harm from rapidly advancing AI systems seems to be one of the few issues aligning the Left and Right. New U.S. polling finds that nearly 70% of all voters, including 70% of Republicans, believe it’s “extremely” or “very” important that there’s oversight of AI companies.

“A political realignment is happening right before our eyes. Poll after poll shows that Americans across the political spectrum overwhelmingly support regulation of AI companies. A bipartisan coalition is coalescing around the urgent need to rein in Big Tech’s unchecked power and ensure that AI is developed in a way that benefits our kids and our communities.”

Michael Kleinman, FLI’s Head of U.S. Policy, in the National Review

California passes state AI legislation: Over the past month, California Governor Gavin Newsom signed several pieces of legislation intended to make AI safer. Most notably, Senate Bill 53 introduces landmark, first-in-the-country transparency requirements for AI companies’ safety protocols, along with whistleblower protections and a mechanism for reporting critical safety incidents.

Another bill, Senate Bill 243, requires stronger suicide-prevention guardrails on AI chatbots, and mandates that minor users be reminded to take breaks and told that the chatbot isn’t real. It also requires “reasonable measures” to prevent sexually explicit content in interactions with minors. This is especially relevant after Meta, for example, was exposed for allowing its chatbots to have inappropriate conversations with users as young as eight. Here’s hoping the rest of the country looks to California as an example of how to innovate with AI while prioritizing public safety.

Heads Up

Other don't-miss updates from FLI and beyond.

“This might be the most important video I’ve ever made”: Our friends at Siliconversations covered the Superintelligence Statement with this fantastic video briefly outlining why it’s so urgent, and why you should sign, too:

We’re hiring: Applications are open for our UK Policy Advocate/Lead (apply by November 19), UAE Representative (rolling applications), and Republic of Korea Representative positions (rolling applications)!

“Inside the Machine”: Mark Brakel, FLI’s Director of Policy, has an exciting new weekly video series. Every Wednesday on his LinkedIn, Mark will share his takes on the latest AI news and developments from inside the fight for safer AI. Check out his latest episode on AI and job loss here, and be sure to follow him for much more to come!

Three weeks left to apply for FLI PhD fellowships: Applications are due November 21 for our two exciting PhD fellowship tracks, on US-China AI Governance and Technical AI Existential Safety. Applications for our technical postdoc fellowships are open until January 5.

Creative contest closes in one month: Last call to register for our new Keep the Future Human Creative Contest, with $100,000+ in prizes available for creative digital media that brings the key ideas in Keep the Future Human to life! Submissions must be entered by November 30.

On the FLI Podcast, host Gus Docker was joined by:

  • Adam Gleave, co-founder and CEO of FAR.AI, to discuss post-AGI scenarios.

  • Parmy Olson, Bloomberg tech columnist and “Supremacy” author, to discuss how AI companies are transforming.

  • Maya Ackerman, AI researcher and co-founder/CEO of WaveAI, to discuss creativity in AI vs. humans, and whether AI can truly be creative.