AI goes to war


Summary

The episode examines the increasing integration of artificial intelligence into modern warfare, focusing on the United States’ use of AI in its 2026 conflict with Iran and other operations. It begins by contrasting President Trump’s vague explanation for the war with a clearer AI-generated rationale, highlighting the irony that AI is not only explaining but actively participating in the conflict.

Paul Scharre, author of “Four Battlegrounds: Power in the Age of Artificial Intelligence,” details how the military uses large language models like ChatGPT and Anthropic’s Claude for processing intelligence, planning operations, and analyzing targets. He explains that AI is being used to handle vast amounts of data from satellite imagery and other sources at machine speed. The discussion expands to other conflicts, noting AI’s role in Ukraine for autonomous drone targeting and in Israel for rapidly generating targeting packages in Gaza, raising concerns about human oversight becoming a mere “rubber stamp.”

The conversation then shifts to a major controversy: the Pentagon’s contract dispute with AI lab Anthropic. Axios reporter Maria Curi explains that Anthropic, positioning itself as a safety-first company, refused to sign a contract with an “all lawful purposes” clause due to concerns over enabling domestic mass surveillance and autonomous weapons. This led Defense Secretary Pete Hegseth to threaten cutting ties, designating Anthropic a supply chain risk. Shortly after, the Pentagon announced a contract with OpenAI, which initially faced similar criticism but later added specific prohibitions against collecting commercially acquired information—the very safeguard Anthropic had sought.

The episode concludes by questioning the consistency and motivations behind the Pentagon’s decisions, noting that Anthropic’s models are deeply integrated into current military operations in Iran and Venezuela. The overarching theme is the lack of comprehensive legal frameworks to govern military AI, leaving critical decisions in the hands of either government officials or private companies, with Congress largely absent from the conversation.


Bookmarks

  • 00:10:29 AI recommends nuclear strikes in war game simulations — A New Scientist study found that AI models from OpenAI, Anthropic, and Google opted to use nuclear weapons in 95% of simulated war game scenarios. While no one is currently connecting LLMs to actual nuclear decisions, the finding raises serious concerns about AI judgment in high-stakes military contexts.
  • 00:20:59 Pentagon vs. Anthropic: contract blocked over political pressure — While the Pentagon and Anthropic were finalizing language for a contract, Pete Hegseth tweeted that he was designating Anthropic a supply chain risk — immediately banning military contractors from working with them. Trump amplified the move on Truth Social, framing Anthropic as “left wing nut jobs.”

Recommendations

Books

  • Four Battlegrounds: Power in the Age of Artificial Intelligence — Paul Scharre’s book on AI and military power is referenced as he provides expert analysis on how AI is being adopted for warfare.

Companies

  • Anthropic — The AI lab behind Claude, which has a long-standing contract with the Pentagon but recently clashed over ethical safeguards regarding autonomous weapons and domestic surveillance.
  • OpenAI — The AI company that signed a Pentagon contract after Anthropic’s dispute, initially facing criticism for weak safeguards before revising its terms.

Topic Timeline

  • 00:00:00 Introduction to AI’s role in the Iran war — The episode opens by comparing President Trump’s confusing explanation for the war with Iran to a clear, AI-generated rationale. The host introduces the central theme: the U.S. is not only using AI to explain the war but to fight it, marking a new era in warfare.
  • 00:02:15 How the military uses AI in operations — Paul Scharre explains how the U.S. military uses large language models like Claude and ChatGPT in classified networks for operations in Iran and Venezuela. He describes AI’s strength in processing vast amounts of information—such as satellite imagery of bombed targets—to plan and prioritize new strikes at machine speed, far faster than humans could.
  • 00:04:51 AI in Ukraine and Israel’s conflicts — Scharre details AI’s use in other global conflicts. In Ukraine, small AI boxes enable drones to autonomously carry out attacks after a human locks onto a target. In Israel’s operations in Gaza, machine learning systems fuse geolocation, cell phone, and social media data to generate targeting packages rapidly, raising concerns about diminished human oversight.
  • 00:07:35 The risks and ethics of autonomous weapons — The discussion turns to the trajectory toward fully autonomous weapons and the ethical debate. Scharre compares the unpredictable battlefield to the challenges of self-driving cars. He argues that AI could make warfare more precise and reduce civilian casualties, but only if militaries prioritize that goal. The segment also mentions an alarming study where AI models frequently recommended nuclear strikes in war games.
  • 00:11:45 AI’s flaws: sycophancy and hallucinations — Scharre highlights critical flaws in large language models that make them risky for military use. He points out their tendency toward sycophancy (agreeing with everything) and hallucinations (making things up). These traits could reinforce human biases, create an illusion of authority, and lead to poor intelligence analysis if users are not deeply skeptical.
  • 00:18:33 The Pentagon-Anthropic contract dispute — Maria Curi explains the origins of the fight between Defense Secretary Pete Hegseth and Anthropic. Anthropic’s CEO, Dario Amodei, advocated for federal AI regulation and refused a Pentagon contract over concerns about enabling domestic mass surveillance and autonomous weapons. The Pentagon viewed this as a private company dictating national security policy.
  • 00:21:48 Political fallout and the shift to OpenAI — After Hegseth threatened to cut ties with Anthropic, President Trump denounced the company on Truth Social. The Pentagon then quickly signed a contract with OpenAI. Critics, including on social media, pointed out that OpenAI’s initial contract had similar loopholes. Under public pressure, OpenAI later revised its terms to include the specific prohibitions Anthropic had wanted.
  • 00:26:44 Analysis of motives and the need for law — Curi analyzes the inconsistent and personality-driven nature of the Pentagon’s decisions, suggesting Sam Altman was seen as more “reasonable” than Anthropic’s leadership. She notes that Anthropic’s models are still actively used in Iran and Venezuela operations. The segment concludes by emphasizing the vacuum left by Congress’s failure to pass laws governing military AI, leaving rules to be set by either the Pentagon or private companies.

Episode Info

  • Podcast: Today, Explained
  • Author: Vox
  • Category: News, Daily News, Politics
  • Published: 2026-03-04T19:00:00Z
  • Duration: 00:25:58

Transcript

[00:00:00] This is what President Trump had to say about why the United States is at war with Iran.

[00:00:05] We sought repeatedly to make a deal.

[00:00:09] We tried, they wanted to do it, they didn’t want to do it, again they wanted to do it,

[00:00:14] they didn’t want to do it, they didn’t know what was happening.

[00:00:17] Not the best explanation for a war of choice, sir.

[00:00:22] I’m personally a do-my-own-research kind of guy, but let’s ask AI why we’re at war with

[00:00:28] Iran.

[00:00:30] The United States attacked Iran in 2026 because it claimed Iran posed an imminent threat,

[00:00:36] particularly due to Iran’s advancing nuclear program and missile capabilities, and aimed

[00:00:41] to reduce Iran’s ability to project power in the region.

[00:00:44] Wow, that was a better explanation.

[00:00:47] Thanks, Chat.

[00:00:48] Fitting that AI was more clear than the President of the United States, because it turns out

[00:00:52] the United States is using AI to fight the war in Iran.

[00:00:56] The future of war is AI, and that future is now here.

[00:01:01] I’m Sean Rameswaram, and that’s coming up on Today Explained from Vox.

[00:01:08] AI can fix healthcare.

[00:01:11] I’m Henry Blodget, and this week on my show Solutions, I had a fascinating conversation

[00:01:16] with Dr. Bob Wachter, author of A Giant Leap: How AI is Transforming Healthcare, and What

[00:01:22] It Means for Our Future.

[00:01:23] Dr. Wachter was not expecting to be an AI optimist.

[00:01:27] What convinced him?

[00:01:28] Follow Solutions with Henry Blodget wherever you get your podcasts to hear more.

[00:01:34] This week on Net Worth and Chill, I’m taking you inside my sold out New York City book

[00:01:38] tour stop for my brand new book, Well Endowed.

[00:01:41] I sat down with the hilarious Heather McMahon for a night of laughs, real money talk, and

[00:01:45] honest financial truths.

[00:01:47] We’re getting into everything the book covers from how to actually build wealth, how to

[00:01:51] protect it, and how to stop leaving money on the table.

[00:01:54] Whether you’ve already grabbed your copy of Well Endowed or you’re still on the fence,

[00:01:57] this episode will show you exactly why everyone’s talking about it.

[00:02:00] Listen wherever you get your podcasts or watch on youtube.com slash your rich BFF.

[00:02:15] Paul Scharre knows a lot about AI and how our military’s using it.

[00:02:19] He’s the author of Four Battlegrounds: Power in the Age of Artificial Intelligence.

[00:02:25] We’ve seen a trajectory of the military adopting AI tools over the last decade as AIs continue

[00:02:34] to progress.

[00:02:35] What’s newer are large language models like ChatGPT and Anthropic’s Claude that it’s been

[00:02:42] reported the military is using in operations in Iran.

[00:02:46] And so that’s a pretty significant development that we’re seeing.

[00:02:49] And the people want to know how Claude or ChatGPT might be fighting this war.

[00:02:53] Do we know?

[00:02:54] War in Iran?

[00:02:55] That’s a great idea.

[00:02:56] Let me help you with that.

[00:02:57] Well, we don’t know yet.

[00:03:00] You know, we can make some educated guesses based on what the technology could do.

[00:03:06] AI technology’s really great at processing large amounts of information.

[00:03:10] I literally love processing.

[00:03:12] The US military’s hit over a thousand targets in Iran.

[00:03:15] As you see very well, they have no Navy.

[00:03:17] It’s been knocked out.

[00:03:19] They have no Air Force that’s been knocked out.

[00:03:21] They have no air detection that’s been knocked out.

[00:03:23] Their radar has been knocked out.

[00:03:25] They need to then find ways to process information about those targets.

[00:03:30] So satellite imagery, for example, of the targets they’ve hit.

[00:03:33] It’s about everything’s been knocked out.

[00:03:35] Looking at new potential targets, prioritizing those, processing information, and using AI

[00:03:40] to do that at machine speed rather than human speed.

[00:03:44] Human so slow, muahaha.

[00:03:47] Cheers.

[00:03:48] Do we know any more about how the military may have used AI in, say, Venezuela on the

[00:03:55] attack that brought Nicolas Maduro to Brooklyn, of all places?

[00:04:01] Because we’ve recently found out that AI was used there, too.

[00:04:05] So what we do know is that Anthropic’s AI tools have been integrated into the US military’s

[00:04:10] classified networks.

[00:04:12] And so they can process classified information, to process intelligence, to help plan operations.

[00:04:18] From writing emails to raiding enemy capital cities.

[00:04:22] The Wall Street Journal reports that the Pentagon used Anthropic’s AI model Claude as part of

[00:04:26] its operation to capture Venezuelan President Nicolas Maduro.

[00:04:30] There’s no suggestion that Claude was actually firing any of the missiles or manning any

[00:04:34] of the machine guns.

[00:04:35] Yeah, we’ve had these sort of tantalizing details, okay, that these tools were used

[00:04:39] in the Maduro raid.

[00:04:40] We don’t know exactly how.

[00:04:42] So we’ve seen AI technology in a broad sense used in other conflicts as well: in Ukraine,

[00:04:51] in Israel’s operations, in Gaza, to do a couple different things.

[00:04:55] One of the ways that AI is being used in Ukraine in a different kind of context is putting

[00:05:02] autonomy onto drones themselves.

[00:05:08] The drone now flies on autopilot mode using our software.

[00:05:12] We assigned it with a mission and it built its own flying route, giving the munitions

[00:05:16] instructions on where it needs to go and what it needs to look for.

[00:05:20] And so when I was in Ukraine, one of the things that I saw Ukrainian drone operators and engineers

[00:05:26] demonstrate is a little box like the size of a pack of cigarettes that you could put

[00:05:32] onto a small drone that would enable that once the human locks onto a target, the drone

[00:05:39] can then carry out the attack all on its own.

[00:05:42] And that has been used in a small, small way.

[00:05:44] It’s not necessarily widespread use in Ukraine today.

[00:05:47] So we’re seeing AI begin to creep into all of these aspects of military operations in

[00:05:53] intelligence and planning and logistics, but also right at the edge in terms of, you know,

[00:05:59] being used where drones are completing attacks.

[00:06:03] OK, so we know a little bit more about how this technology was used in Ukraine.

[00:06:07] How about with Israel and Gaza?

[00:06:10] So there’s been some reporting about how the Israel Defense Forces have used AI in Gaza,

[00:06:16] not necessarily large language models, but machine learning systems that can synthesize

[00:06:20] and fuse large amounts of information, geolocation data, cell phone data and connection, social

[00:06:27] media data to bring this together, process all of that information very quickly to develop

[00:06:32] targeting packages, particularly in the early phases of Israel’s operations.

[00:06:38] Which suggests specific possible targets, possible munitions, warnings.

[00:06:43] This system produces targets in Gaza faster than a human can.

[00:06:47] But it raises thorny questions about human involvement in these decisions.

[00:06:53] One of the criticisms that had come up was that humans were still approving these targets,

[00:06:59] but that the volume of strikes and the amount of information that needed to be processed

[00:07:05] was such that maybe human oversight in some cases was a little bit more of a rubber stamp.

[00:07:10] The question is, where does this go?

[00:07:13] And are we heading in a trajectory where over time humans get pushed out of the loop and

[00:07:18] we see down the road fully autonomous weapons that are making their own decisions about

[00:07:23] whom to kill on the battlefield?

[00:07:26] That’s the direction things are headed.

[00:07:28] So no one’s unleashing the swarm of killer robots today.

[00:07:31] But the trajectory is in that direction.

[00:07:35] And maybe I’ll make a comparison here to self-driving cars, where car companies can map the environment

[00:07:42] down to the centimeter.

[00:07:43] They know the height of the curbs, they know where the stoplights are.

[00:07:47] They can test self-driving cars in the actual environment they’re going to be in.

[00:07:51] And when they do something weird that doesn’t work, they can update the algorithm.

[00:07:55] We don’t know our future.

[00:07:56] Where is it going to be fought?

[00:07:57] It’s an adversarial environment.

[00:07:58] We don’t know what the enemy is going to do.

[00:08:00] I mean, the US military is finding this out right now in its operations against Iran.

[00:08:06] They’re retaliating against US bases, against Gulf states, against Israel, using drones

[00:08:11] and missiles.

[00:08:12] And now we’re in a phase in the Iran conflict where things become super unpredictable.

[00:08:19] People do an OK job of adapting to that unpredictability; AI is not so great and sometimes does some

[00:08:25] strange things.

[00:08:27] Because you drew a parallel to self-driving cars, we’ve made an episode about self-driving

[00:08:32] cars before in which I think our guests said something like, well, if you’re worried about

[00:08:35] self-driving cars, you know what you should really be worried about is humans.

[00:08:38] Thinking about Iran, we saw reports that a school was bombed in Iran where maybe 160

[00:08:44] were killed.

[00:08:45] A lot of them young girls, children.

[00:08:49] Presumably that was a mistake made by a human.

[00:08:54] Do we think that autonomous weapons will be capable of making that same mistake or will

[00:08:59] they be better at war than we are?

[00:09:04] This question of will autonomous weapons be better than humans or not is like one of the

[00:09:09] core issues of the debate surrounding this technology because proponents of autonomous

[00:09:14] weapons will say, look, people make mistakes all the time and machines might be able to

[00:09:18] do better.

[00:09:20] Part of that depends on how much the militaries that are using this technology are trying

[00:09:26] really hard to avoid mistakes.

[00:09:28] If militaries don’t care about civilian casualties, then AI can allow militaries to simply strike

[00:09:35] targets faster, in some cases even commit atrocities faster, if that’s what militaries

[00:09:41] are trying to do.

[00:09:43] I think there is this really important potential here to use the technology to be more precise.

[00:09:51] If you look at the long arc of precision guided weapons, let’s say over the last century or

[00:09:58] so, it’s pointed towards much more precision in warfare.

[00:10:01] If you look at the example of the US strikes in Iran right now, it’s worth contrasting

[00:10:07] this with the widespread aerial bombing campaigns against cities that we saw in World War II,

[00:10:13] for example, where whole cities were devastated in Europe and Asia because the bombs just

[00:10:18] weren’t precise at all.

[00:10:20] And so air forces dropped just massive amounts of ordnance to try to hit even a single factory.

[00:10:26] The possibility here is that AI could make it better over time to allow militaries to

[00:10:32] hit military targets and avoid civilian casualties.

[00:10:36] Now, if the data is wrong and they’ve got the wrong target on the list, they’re going

[00:10:40] to hit the wrong thing very precisely, and AI is not necessarily going to fix that.

[00:10:46] On the other hand, I saw a piece of reporting in New Scientist that was rather alarming.

[00:10:52] The headline was, AIs can’t stop recommending nuclear strikes in war game simulations.

[00:10:59] I don’t know if you saw that one.

[00:11:01] They wrote about a study in which models from OpenAI, Anthropic, and Google opted to use nuclear

[00:11:05] weapons in simulated war games in 95 percent of cases, which I think is slightly more than

[00:11:12] we humans typically resort to nuclear weapons.

[00:11:17] Should that be freaking us out?

[00:11:18] It’s a little concerning.

[00:11:19] It’s a little concerning.

[00:11:20] So, look, I think happily, as near as I can tell, no one is connecting large language

[00:11:27] models to decisions about using nuclear weapons.

[00:11:30] But I think it points to some of the strange failure modes of AI systems.

[00:11:36] So they tend towards sycophancy.

[00:11:38] They tend to simply just agree with everything that you say.

[00:11:41] I think anyone that’s interacted with some of these models, they can do it to the point

[00:11:45] of absurdity sometimes where, oh, that’s brilliant, the model will tell you.

[00:11:50] That’s a genius thing.

[00:11:51] War in Iran?

[00:11:52] That’s a great idea.

[00:11:53] Let me help you with that.

[00:11:54] You know, and you’re like, I don’t think so.

[00:11:57] That’s a real problem when you’re talking about intelligence analysis.

[00:12:01] Do we think like GPT is telling Pete Hegseth that right now?

[00:12:04] I mean, I hope not, but, you know, but his people might be telling him that, you know,

[00:12:09] so you start with this ultimate yes-men phenomenon with these tools where it’s not just that

[00:12:18] they’re prone to hallucinations, which is sort of a fancy way of saying they just make

[00:12:22] things up sometimes.

[00:12:25] But also, the models could really be used in ways that either reinforce existing human

[00:12:31] biases, that reinforce biases in the data, or that people just trust them, that there’s

[00:12:38] sort of this veneer of, oh, the AI said this, so it must be the right thing to do, and people

[00:12:44] put faith in it.

[00:12:46] And you know, we really shouldn’t.

[00:12:47] We should be more skeptical.

[00:12:52] Be more skeptical, says Paul Scharre.

[00:12:55] He’s the executive vice president at the Center for a New American Security.

[00:13:00] There are two big stories right now in the world of AI and war.

[00:13:05] One is the one we just talked about.

[00:13:06] The other is the drama between Claude and Pete.

[00:13:11] That drama is forthcoming on Today Explained.

[00:13:16] Work, school, chores, bills, those are just a few of the things that act like energy vampires

[00:13:24] throughout your day.

[00:13:25] It can be hard to try and get everything done when you’re running on empty.

[00:13:28] That’s why there’s IM8’s Daily Ultimate Essentials.

[00:13:32] It’s an all-in-one wellness drink that gives your body the support it needs without juggling

[00:13:37] a bunch of different supplements.

[00:13:39] IM8’s Daily Ultimate Essentials is the go-to for getting the benefits of 16 different supplements

[00:13:45] in one tasty drink.

[00:13:47] Co-founded by David Beckham and crafted with insight from experts at Mayo Clinic, Cedars-

[00:13:51] Sinai and a former NASA chief scientist, it simplifies your wellness routine and is loaded

[00:13:57] with 92 nutrient-rich ingredients such as vitamins, minerals, adaptogens, CoQ10, MSM,

[00:14:04] and pre-, pro-, and post-biotics.

[00:14:07] Plus, it’s vegan, gluten-free, and non-GMO.

[00:14:12] Feel your best self every day.

[00:14:15] Go to im8health.com slash explained and use code EXPLAINED for a free welcome kit, five

[00:14:22] free travel sachets, plus 10% off your order.

[00:14:26] That’s IM number 8, H-E-A-L-T-H dot com slash explained.

[00:14:32] Code EXPLAINED for a free welcome kit, five free travel sachets, plus 10% off your order.

[00:14:38] IM8health.com slash explained.

[00:14:42] Code EXPLAINED.

[00:14:43] These statements have not been evaluated by the Food and Drug Administration.

[00:14:45] This product is not intended to diagnose, treat, cure, or prevent any disease.

[00:14:52] Support for Today Explained comes from Rippling.

[00:14:55] No one likes running a bunch of disconnected tools to do simple tasks.

[00:14:59] So if your company is using an all-in-one platform, it should actually be able to do

[00:15:03] it all.

[00:15:04] Rippling says that their platform can do it all.

[00:15:07] It’s a unified platform for global HR, payroll, IT, and finance.

[00:15:12] With Rippling, they say workflows that normally bounce across multiple tools and departments

[00:15:17] can all just happen in one place automatically.

[00:15:21] Say an employee gets promoted or moves.

[00:15:23] Rippling can update payroll taxes, hand out new app permissions, ship a new laptop, issue

[00:15:29] a new corporate card, and assign a required manager training all in one place, without

[00:15:34] you having to put in the legwork.

[00:15:36] With Rippling, you can run your entire HR, IT, and finance operations as one, or pick

[00:15:42] and choose the products that best fill in the gaps in your software stack.

[00:15:47] So if you or your company want to run the backbone of your business on one unified platform

[00:15:52] with people at the center, you can go to rippling.com slash explained and sign up today.

[00:15:59] That’s r-i-p-p-l-i-n-g dot com slash explained to sign up.

[00:16:10] The word for today explained comes from Bombas.

[00:16:12] Perhaps you want to get in shape this year.

[00:16:14] Bombas wants to tell you about the all new Bombas sports socks engineered with sport

[00:16:19] specific comfort for running, golf, hiking, skiing, snowboarding, and all sport.

[00:16:24] Meanwhile, for the loungers among us, Bombas has non-sport footwear available.

[00:16:31] But Bombas doesn’t just offer sport and non-sport socks.

[00:16:35] They also offer super soft base layers that they claim will have you rethinking your whole

[00:16:40] wardrobe, underwear, t-shirts, flexible, breathable, buttery, smooth, premium, everyday go-tos.

[00:16:45] They say you won’t want to leave the house without.

[00:16:49] Here’s Nisha Chital.

[00:16:50] I’ve been wearing Bombas for several years now.

[00:16:53] I have several pairs.

[00:16:55] My whole family loves to wear Bombas.

[00:16:57] I have several pairs of Bombas ankle socks, and I have some no-show socks as well that

[00:17:04] are great for things like loafers and ballet flats.

[00:17:08] For every item you purchase, Bombas says an essential clothing item is donated to someone

[00:17:12] facing housing insecurity.

[00:17:14] One purchased, one donated, over 150 million donations and counting, I’m told.

[00:17:19] You can go to bombas.com slash explained and use code EXPLAINED for 20% off your first

[00:17:23] purchase.

[00:17:24] That’s B-O-M-B-A-S dot com slash EXPLAINED.

[00:17:26] Code EXPLAINED at checkout.

[00:17:33] Hey everybody, Astead Herndon here.

[00:17:35] I wanted to let you know that Vox Media is returning to South by Southwest in Austin

[00:17:40] for live tapings of your favorite podcasts.

[00:17:42] Join us from March 13th through March 15th for live tapings of Pivot, Teffy Talks, Professor

[00:17:49] G’s Markets, Where Should We Begin with Esther Perel, and the special live taping of Today

[00:17:54] Explained, hosted by yours truly.

[00:17:56] The Vox Media podcast stage will also feature sessions from Brene Brown and Adam Grant,

[00:18:02] Marcus Brownlee, Keith Lee, Vivian Tu, Robin Arzon, and more.

[00:18:07] Visit Vox Media dot com slash South by Southwest to pre-register and get a special discount

[00:18:13] on your South by Southwest Innovation badge.

[00:18:16] That’s Vox Media dot com slash South by Southwest.

[00:18:20] Hope to see you there.

[00:18:21] This is Today Explained.

[00:18:33] Pete Hegseth, our Secretary of Defense, and Claude, Anthropic’s large language model, got

[00:18:39] in a big fight last week.

[00:18:41] We asked Axios Tech Policy reporter Maria Curi what happened.

[00:18:47] So this actually goes back to before the Pentagon related dispute.

[00:18:51] You know, you have the CEO of Anthropic, Dario Amodei, really positioning himself as the safety

[00:18:58] first CEO.

[00:19:00] One way to think about Anthropic is that it’s a little bit trying to put bumpers or guard

[00:19:06] rails in that experiment, right?

[00:19:08] Because if you don’t, then you could end up in the world of like the cigarette companies

[00:19:11] or the opioid companies where they knew there were dangers and they didn’t talk about them

[00:19:16] and certainly did not prevent them.

[00:19:18] And he has been very vocal.

[00:19:20] He’s posted on X and talked a lot about how he does think there has to be a federal standard

[00:19:26] to regulate artificial intelligence.

[00:19:28] And that kind of put him at odds with David Sacks, the guy that’s running AI for President

[00:19:33] Trump in the White House.

[00:19:35] They’ve gotten into Twitter spats before.

[00:19:38] And so it was kind of a long time coming before this Pentagon thing blew up.

[00:19:45] This is essentially a situation where the Pentagon for a while has been trying to negotiate

[00:19:51] terms with all of the AI labs to bring them into their classified systems under this standard

[00:19:57] of all lawful purposes.

[00:19:59] And Anthropic had kind of said, you know, there are two specific scenarios in which

[00:20:05] we are not comfortable with the all lawful purposes standard.

[00:20:09] The first one is this issue of domestic mass surveillance.

[00:20:13] And the second one is autonomous weapons.

[00:20:15] It doesn’t show the judgment that a human soldier would show.

[00:20:19] Friendly fire or shooting a civilian or just the wrong kind of things.

[00:20:23] We don’t want to sell something that we don’t think is reliable and we don’t want to sell

[00:20:27] something that could get our own people killed or that could get innocent people killed.

[00:20:31] That was not taken well by the Pentagon.

[00:20:34] Defense Secretary Pete Hegseth is demanding that San Francisco-based Anthropic drop a

[00:20:39] number of safeguards or risk losing its $200 million contract.

[00:20:45] We do have a statement from the Pentagon and they’re telling us that they are currently

[00:20:49] quote reviewing its relationship with Anthropic saying, quote, our nation requires that our

[00:20:54] partners be willing to help our warfighters win in any fight.

[00:21:00] We’ve been talking to senior officials throughout this reporting process and they really view

[00:21:05] it as a private company telling the government how to protect the country and how to do

[00:21:14] national security and conduct operations.

[00:21:18] And essentially what we know is that there were phone calls happening between the Pentagon

[00:21:25] and Anthropic nailing down final language around this contract.

[00:21:31] Then all of a sudden, Pete Hegseth tweeted that he would be designating Anthropic a supply

[00:21:37] chain risk.

[00:21:39] Effective immediately, no contractor, supplier or partner that does business with the United

[00:21:44] States military may conduct any commercial activity with Anthropic.

[00:21:48] President Trump posted on Truth Social.

[00:21:51] Truth Social.

[00:21:54] The left wing nut jobs at Anthropic have made a disastrous mistake trying to strong arm

[00:22:02] the Department of War and force them to obey their terms of service instead of our constitution.

[00:22:09] Their selfishness is putting American lives at risk, our troops in danger and our national

[00:22:16] security in jeopardy.

[00:22:20] And the entire federal government was going to have to get rid of Anthropic.

[00:22:26] Essentially Anthropic had been asking, when it came to commercially acquired information and data,

[00:22:33] for there to be a prohibition on that collection in the Pentagon contract.

[00:22:38] And this goes to the concern around domestic mass surveillance.

[00:22:41] The idea here is that according to Anthropic, the law has not caught up to artificial intelligence.

[00:22:49] You could have a situation where it’s perfectly legal for the Pentagon to collect commercially

[00:22:54] acquired information that could include financial information purchased from data brokers, web

[00:22:59] browsing data, beyond that voter registration rules, social media posts, whether or not

[00:23:05] you attended a protest, concealed carry permits.

[00:23:09] There’s all sorts of data out there that the government can collect in a perfectly legal

[00:23:13] way.

[00:23:15] And you could see how artificial intelligence could make it much quicker, much more efficient

[00:23:19] to have a continuous collection of that data to really pinpoint and target individuals.

[00:23:25] That was the concern.

[00:23:26] And so they were asking for this specific language and they thought they were about

[00:23:31] to get it when all of a sudden Pete Hegseth posted on X.

[00:23:34] Why did they think they were going to get it?

[00:23:36] Well, you know, they thought that this was going to be the language, the commercially

[00:23:42] acquired information coupled with the all lawful purposes.

[00:23:45] They thought that that was going to be enough, but the Pentagon actually came back and said,

[00:23:49] no, that’s not something that we are comfortable doing, which raises the question: how did this

[00:23:56] OpenAI deal then pass muster?

[00:23:58] Oh, that’s a spoiler, because what happens is the Pentagon drops Anthropic on Friday

[00:24:05] evening and then within what, like minutes they pick up OpenAI?

[00:24:10] That’s right.

[00:24:11] So they pick up OpenAI for a contract that was very quickly like, you know, everybody

[00:24:19] was poking holes in it on X.

[00:24:21] I don’t see this as a meaningful improvement to the contract.

[00:24:24] There still seem to be some big shortcomings slash loopholes.

[00:24:27] I agree it’s better, but I think the government can drive a truck through the intentionality

[00:24:32] language.

[00:24:33] And we heard from people familiar with the negotiations too, like this isn’t going

[00:24:39] to actually prevent domestic mass surveillance from happening.

[00:24:42] It’s still too risky.

[00:24:43] So you had Sam Altman on X trying to field all of this criticism.

[00:24:49] He, you know, he did an ask me anything on Saturday night where he had thousands and

[00:24:54] thousands of questions from people trying to get answers on.

[00:24:57] How did you go from a tool for the betterment of the human race to let’s work with the

[00:25:02] Department of War?

[00:25:03] If the government comes back with a memo saying that in their view, mass domestic surveillance

[00:25:09] is legal, do you do that?

[00:25:11] Were the terms that you accepted the same ones Anthropic rejected?

[00:25:15] And so you fast forward to Monday and you have Sam Altman saying, okay, we’ve gone

[00:25:21] back to the drawing board.

[00:25:23] We shouldn’t have rushed to get this out on Friday.

[00:25:26] We were genuinely trying to deescalate things and avoid a much worse outcome.

[00:25:30] But I think it just looked opportunistic and sloppy.

[00:25:33] We need to essentially add some language to this contract to give people more assurances

[00:25:39] that we are not going to conduct domestic mass surveillance.

[00:25:42] And what they added was that commercially acquired information cannot be collected and

[00:25:50] that is prohibited, which is the exact language that Anthropic was looking to have in their

[00:25:55] contract.

[00:25:56] So like so many other things with this administration, this ends up feeling rather confusing and

[00:26:00] inconsistent because they bail on Anthropic because Anthropic has these ideals, these

[00:26:06] standards.

[00:26:07] They bounce to OpenAI, but OpenAI is trying to work out a deal with the same exact standards

[00:26:11] basically.

[00:26:12] Well, now that we have the specific language and the legalese, it’s looking like it’s

[00:26:18] the exact same standards.

[00:26:20] You know, we’ve also heard from the Pentagon, from Pentagon officials saying like we were

[00:26:25] able to do this with Sam Altman because he’s reasonable.

[00:26:28] This was a reasonable negotiation and Anthropic has personal vendettas.

[00:26:34] And so to your point about inconsistencies, absolutely personalities are a factor here

[00:26:40] and it’s not all just going to come down to legalese and these two standards.

[00:26:44] Did the Pentagon just go exclusive with OpenAI and Sam Altman because there’s been reporting

[00:26:50] that Anthropic was actually used in these attacks on Iran that followed this drama that

[00:26:57] we had last Friday?

[00:26:58] Yeah, so Anthropic’s is the longest-standing AI model that is being used in the Pentagon

[00:27:04] for classified purposes.

[00:27:06] We’ve established that it was used in the Maduro raid.

[00:27:09] We’ve established that it was used in the Iran raid.

[00:27:11] They’re very useful to the Pentagon.

[00:27:14] You know, you have senior defense officials describing how much of a pain in the ass it

[00:27:18] would be to actually get rid of Anthropic.

[00:27:20] And reportedly they didn’t.

[00:27:21] No, they haven’t yet.

[00:27:23] They were given this six month off ramp for Anthropic to be phased out and for another

[00:27:30] AI lab to be phased in.

[00:27:33] I think right now people are having these questions of was this all just Sam Altman

[00:27:37] trying to elbow out his competitor from the Pentagon?

[00:27:41] I think it’s too soon to tell.

[00:27:45] So I think what this tells us is that in the absence of a law that actually contemplates

[00:27:52] artificial intelligence, we are left as a broader country and society relying on either

[00:27:59] Pete Hegseth’s Department of War deciding how this technology is going to be used or

[00:28:06] any one individual company.

[00:28:09] And Anthropic at the end of the day is a company.

[00:28:15] And so you have all of these different parties also saying, all these companies also saying

[00:28:22] we actually do think that a law should be passed.

[00:28:25] We would love for Congress to actually just set the rules of the road because we have

[00:28:28] our own competitive pressures that we’re also dealing with.

[00:28:31] Now whether or not Congress is going to pass a law around this, I don’t know.

[00:28:35] They’ve been asleep at the wheel on almost everything.

[00:28:38] Congress has been asleep at the wheel on almost everything, says Maria Curie from Axios.com.

[00:28:53] Peter Balonon-Rosen and Hady Mawajdeh produced our show today, Jolie Myers edited, Patrick

[00:28:58] Boyd and David Tatashore mixed, Andrea López-Cruzado was on the fact check, I’m Sean Rameswaram,

[00:29:05] back at Today Explained.