Future of Life Institute Newsletter: 2024 in Review
Reflections on another massive year; major AI companies score disappointing safety grades; our 2024 Future of Life Award winners; and more!
Welcome to the final Future of Life Institute newsletter of 2024! Every month, we bring 43,000+ subscribers the latest news on how emerging technologies are transforming our world.
If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?
Today's newsletter is a 13-minute read. Some of what we cover this month:
Looking back at another big year!
Celebrating our 2024 Future of Life Award winners
Grading AI companies' safety practices (spoiler alert: they're not great)
Our new $5 million multistakeholder engagement RFP
And much more.
If you have any feedback or questions, please feel free to send them to [email protected]. Happy New Year!
2024 Wrapped
As we come to the end of another rollercoaster year, we're reflecting on a few of FLI's most memorable moments. In no particular order, presenting our 2024 "greatest hits":
EU AI Act made final. After years of work, the world's first comprehensive AI legislation entered into force in August. With implementation taking place over the next few years, the Act will serve to support European AI innovation - especially by smaller companies and start-ups - whilst ensuring public safety as the top priority. Our AI Act Explorer tool was built to help navigate the Act; also be sure to check out the EU AI Act Newsletter!
Superintelligence Imagined. Our Superintelligence Imagined creative contest received over 180 submissions, culminating in six winners (including one grand prize winner) and seven honorable mentions that illustrate the risks of superintelligence in approachable, creative ways. Check them out here!
Major support for SB 1047. California's proposed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, gained incredible levels of bipartisan support - from widespread public favor to endorsements by countless AI experts, labor unions, high-profile creatives, and more. The undeniable momentum generated by its supporters ensures it's only a matter of time until a similar legislative effort succeeds.
The Elders partnership on existential threats. Starting with our joint open letter calling on world leaders to urgently address the ongoing impact and escalating risks of the climate crisis, pandemics, nuclear weapons, and ungoverned AI, and more recently with our video series, we were delighted to partner with The Elders to combat these grave threats.
WebSummit. In the speech below from WebSummit, which brought over 70,000 participants from around the world to Lisbon, FLI President Max Tegmark advocated for the development of helpful Tool AI instead of risky AGI. FLI's Futures program director Emilia Javorsky also joined a panel discussing whether ethical AI is possible, and presented a talk on pathways to positive futures.
Religious engagement. Our initiative to engage with and support religious groups' perspectives on AI ramped up this year, with numerous events around the world and blog posts from different religious leaders.
Campaign to Ban Deepfakes. We're proud of the diverse, bipartisan coalition of organizations and individuals behind the Campaign to Ban Deepfakes, which we brought together in 2024 - alongside organizations such as Control AI, Encode Justice, the National Organization for Women, and more - to push for liability across the entire deepfake "supply chain".
Increasing state engagement on AWS. Among several other historic milestones this year towards restricting autonomous weapons systems, the first-ever global conference on autonomous weapons systems took place in Vienna, with an incredible 144 states in attendance. Another highlight was the first Economic Community of West African States (ECOWAS) conference on the topic in Freetown, Sierra Leone.
New grant opportunities. This year, we launched a number of exciting grant opportunities. From grants for problem-solving AI, to research into how AI may worsen power concentration, and our new multistakeholder engagement RFP, we're directing millions of dollars to a variety of important projects intended to steer AI towards benefitting humanity.
U.S. policy engagement. FLI's U.S. policy team provided input and recommendations at both the federal and state level this year, publishing numerous reports available to read here. We look forward to continuing our engagement with the incoming administration as well!
Celebrating our 2024 Future of Life Award Winners!
Every year, we present the Future of Life Award to unsung heroes whose contributions have helped make our world today significantly better than it could have been.
This year, we were delighted to honour Batya Friedman, James H. Moor, and Steve Omohundro with the award!
Learn more below about these three groundbreaking experts, selected this year for their work laying the foundations for ethics and safety in computing and AI:
Batya Friedman is a leading professor of human-computer interaction at the University of Washington. She founded the field of value-sensitive design, which promotes the integration of human values into technology development. Friedman's methodology has influenced how ethical considerations are embedded in digital systems across multiple scientific disciplines.
James H. Moor, honoured posthumously, was a prominent professor at Dartmouth. He defined the field of computer ethics through his influential article "What Is Computer Ethics?", which introduced key concepts into the discourse such as the "logical malleability" of computers and the potential "policy vacuums" this technology might therefore create.
Steve Omohundro is a pioneering AI researcher and scientist. His work was foundational to understanding the safety considerations of artificial intelligence. Omohundro was an early advocate for aligning AI with human values while raising awareness about the ethical implications of the technology.
Out now: AI Safety Scorecards!
We recently released our 2024 AI Safety Index! We convened an independent panel of leading AI experts to evaluate the safety practices of six major AI companies: OpenAI, Anthropic, Meta, Google DeepMind, xAI, and Zhipu AI.
Covered by TIME, CNBC, IEEE Spectrum, and more, the companies' grades were, as you may expect, not great. As panelist and AI expert Stuart Russell shared, "none of the current activity provides any kind of quantitative guarantee of safety; nor does it seem possible to provide such guarantees given the current approach to AI via giant black boxes trained on unimaginably vast quantities of data."
Among the highlights, which you can read in full in the complete report:
Despite commendable practices in some areas, the panel found large gaps in accountability, transparency, and preparedness to address both current and existential risks from AI.
While some companies have established initial safety frameworks or conducted some serious risk assessment efforts, others have yet to take even the most basic precautions.
Despite explicit ambitions to develop AGI, capable of rivaling or exceeding human intelligence, the review panel deemed the current strategies of all companies inadequate for ensuring that these systems remain safe and under human control.
Reviewers consistently highlighted how companies were unable to resist profit-driven incentives to cut corners on safety in the absence of independent oversight.
$5 Million Multistakeholder Engagement RFP
We've launched a new Request for Proposals!
With our new Multistakeholder Engagement for Safe and Prosperous AI grant program, we're offering up to $5 million in total - with individual grants likely between $100K and $500K per project - to support work educating and engaging specific stakeholder groups on AI issues, or directly delivering grassroots outreach and organizing with the public.
Submit your brief letter of intent by February 4th, and please share widely.
Spotlight on…
We're excited to share another winning entry from our Superintelligence Imagined creative contest!
This month, we're delighted to present YouTuber Dr Waku's video, "How AI threatens humanity, with Yoshua Bengio". In this 30-minute video, Dr Waku presents "a special deep dive into superintelligence, meant for the general public." He also notes, "I interviewed Yoshua Bengio to discuss the risks posed by advanced AI, including misuse by humanity, misalignment, and loss of control. To address these issues, we need significant technical change as well as political change, which means the public needs to get informed about and involved in this issue."
Watch it below, and take a look at the other winning and runner-up entries here.
Updates from FLI
We're reviewing applications for our Head of U.S. Policy role on a rolling basis, but we are accepting applications until January 19. Apply now, and please share!
FLI's AI Action Summit lead Ima Bello published our recommendations for the French AI Action Summit, which will take place 10-11 February 2025. The recommendations focus on three core areas: understanding AI risks (science), fostering global cooperation (solutions), and establishing robust international standards (standards).
Ima also published the fifth AI Action Summit Newsletter, available here.
Guest contributor Sarah Hastings-Woodhouse wrote an informative blog post on the FLI site, covering everything you need to know (and why you should care) about AI agents.
Sarah also wrote a blog post on uncontrollable AI - and whether or not we could shut off a dangerous AI system.
We're pleased to see Austrian Foreign Minister Alexander Schallenberg nominated for 2024 Arms Control Person of the Year, for his work raising awareness about autonomous weapons systems (including the Vienna Conference). We encourage you to vote for him here, before January 13.
Final reminder: Applications for our postdoctoral fellowships on AI existential safety research are still open! Apply by January 6, 2025 at 11:59 pm ET.
FLI President Max Tegmark joined Dr. Brian Keating's Into the Impossible Podcast for an episode discussing both the potential and dangers of AI:
Max also spoke to the Financial Times about the urgent need for meaningful legislation against deepfakes, especially given bipartisan agreement on the issue.
FLI Policy Researcher Alexandra Tsalidis joined a segment on Al Jazeera's The Stream, discussing possible threats AI presents to the environment and human life, along with potential solutions:
"We don't have to choose between greater innovation and safety."
Alexandra Tsalidis, Policy Researcher at FLI, joined @joepmeindertsma & @LeylaAcaroglu for an @AJStream segment on the potential threats AI presents to the environment & human life - and what we can do about it:
- Future of Life Institute (@FLI_org)
9:34 PM · Dec 17, 2024
On the FLI podcast, Nathan Labenz, host of The Cognitive Revolution Podcast, joined host Gus Docker for a comprehensive look at AI progress from GPT-4 until now.
Also on the podcast, GiveDirectly President and CEO Nick Allardice and Gus discussed how GiveDirectly uses AI to optimize cash transfers and even to predict natural disasters.
What We're Reading
Alignment faking!?: New research from Anthropic and Redwood Research has found the first empirical example of an LLM faking alignment, without any training or instruction to do so. According to the report's authors, "A model might behave as though its preferences have been changed by the training - but might have been faking alignment all along, with its initial, contradictory preferences 'locked in'." Creepy.
Obfuscated activations: A new paper from, among others, FLI's AI safety research PhD fellows Stephen Casper, Erik Jenner, and Luke Bailey, investigates a weakness in LLM defenses against attacks - specifically, how harmful behaviour can be hidden within an LLM while evading monitors designed to detect such attacks.
What We're Watching: Digital Engine has released a new YouTube video, with AI experts sharing their opinions on AI's existential risks - including in the context of OpenAI's o3 model: