DoW vs. Anthropic

Including: Anthropic drama; our new Protect What's Human campaign; war game simulations showing AI defaulting to terrifying outcomes; and more.

Welcome to the Future of Life Institute newsletter! Every month, we bring 70,000+ subscribers the latest news on how emerging technologies are transforming our world.

If you've found this newsletter helpful, why not tell your friends, family, and colleagues to subscribe?

Today's newsletter is a nine-minute read. Some of what we cover this month:

  • 🗯️ Anthropic vs. Department of War

    • And, Anthropic vs… their own safety pledge?

  • 😨 AI defaults to nukes almost every time

  • ☎️ Our new campaign to Protect What’s Human

  • 📜 New Utah AI safety bill

And more.

If you have any feedback or questions, please feel free to send them to [email protected].

The Big Three

Key updates this month to help you stay informed, connected, and ready to take action.

Anthropic vs. DoW: The U.S. Department of War gave AI company Anthropic an ultimatum last week: allow the military unrestricted access to their AI, or lose a $200M contract with the Pentagon and be blacklisted from all government work. Anthropic stood firm that AI shouldn't control weapons or be used for mass surveillance of Americans, conditions the Pentagon refused to accept.

Vocal support for Anthropic's commitment to their (bare minimum) redlines poured in from around the U.S., including from lawmakers across the political spectrum and employees at rivals such as OpenAI and Google.

After the Friday deadline imposed by Secretary of War Pete Hegseth came and went, Anthropic’s work with the government was suspended and they were slapped with a national security “supply chain risk” label from the White House - usually reserved for foreign adversaries - which could critically disrupt Anthropic’s other business partnerships. Hegseth also threatened that the government could invoke the Defense Production Act to force Anthropic to supply systems tailored “to the military's needs”, though that threat hasn’t yet materialized.

Just hours later on Friday, OpenAI announced they had struck a deal with the Pentagon allowing the military to use OpenAI’s models across its classified network. While OpenAI CEO Sam Altman has expressed support for Anthropic’s redlines and claims that OpenAI shares them, it’s still unclear how - if at all - the Pentagon will respect them when using OpenAI’s models. The move has been met with further skepticism from the public, with a call for ChatGPT users to cancel their subscriptions spreading across social media.

Anthropic drops safety pledge: The same week as their showdown with Hegseth, news broke that Anthropic is dropping a central pillar of their Responsible Scaling Policy (RSP), in which they pledged never to train an AI system unless they could guarantee in advance that their safety measures were adequate. While they insist they're not abandoning safety, they’re replacing firm pre-deployment guarantees with looser commitments to transparency, risk reports, and “Frontier Safety Roadmaps” - essentially switching from a proactive to a reactive strategy. Not a reassuring move from the AI company that’s built their brand on safety.

AI goes nuclear: Especially relevant given the Anthropic-DoW battle, King’s College London ran war game simulations with AI systems from Anthropic, OpenAI, and Google, and found that the models chose to deploy nuclear weapons in 95% of scenarios. The LLMs frequently escalated conflicts to nuclear strikes, showing little hesitation even after being reminded of the catastrophic human consequences. None of them chose full de-escalation or surrender - in fact, when facing defeat they tended to escalate instead.

Heads Up

Other don't-miss updates from FLI and beyond.

Protect What's Human: We launched a new multimillion-dollar campaign, Protect What’s Human, to rally people across the U.S. in support of regulating AI. We’re aiming to engage as many Americans as we can on the impacts AI will have on their livelihoods, families, and communities. Find one of our main ads below, along with the campaign’s dedicated social channels, if you’d like to share:

New Utah bill: As part of an effort reportedly led by “AI czar” David Sacks, the White House is opposing Utah’s HB 286 - a relatively mild new AI transparency bill focused on child safety - despite polling showing that over 90% of Trump and Harris voters alike want Congress to prioritize child safety over tech industry growth. The bill simply requires large AI developers to publish safety plans addressing risks such as self-harm encouragement, yet the administration says it’s “categorically opposed,” raising doubts about what, if any, state-level AI legislation would be acceptable. As FLI’s Jason Van Beek and the Institute for Family Studies’ Michael Toscano wrote in a joint op-ed, “the White House’s AI policy is overwhelmingly unpopular with voters, including Trump’s base. And now, with its move against Utah, the White House’s AI policy risks further polarizing it against its own voters.”

DeSantis on AI policy: FLI President Max Tegmark joined Florida Gov. Ron DeSantis’ roundtable on AI policy a few weeks ago, alongside fellow AI experts advocating for common-sense rules for the AI industry.

"[Sam Altman] is telling me that my 3-year-old son has only two choices in life: put electrodes in his head or never get a job, become obsolete."

Max Tegmark at Gov. DeSantis’ roundtable

You can find the full recording below:

We’re hiring: We’re looking for a Communications Associate to support our outreach team on media relations, social media, and more. The role is remote within the U.S., preferably on the West Coast. Learn more and apply by March 20th at the link here, and please do share.

Munich Security Conference simulation: During the recent Munich Security Conference, we teamed up with Foreign Policy to host participants from the UN, NATO, and Congress, along with other leaders from industry, government, academia, and civil society, for an exercise simulating how advanced AI agents deployed for military and dual-use applications could result in a catastrophic loss of human control. We'll share the full report when it's out later this month - stay tuned!

On the FLI Podcast, host Gus Docker was joined by:

  • (Cross-post from "The Cognitive Revolution" with Nathan Labenz) Ryan Kidd, co-executive director of MATS, to discuss whether AI can do our alignment ‘homework’.

  • Andrea Miotti, founder and CEO of ControlAI, on the case for a global ban on superintelligence.

We also released two new highlight reels, from Fr. Michael Baggot’s episode on a Catholic perspective on superintelligence and transhumanism, and economist Anton Korinek’s episode on the economics of an intelligence explosion: