OpenAI says it shares Anthropic's 'red lines' over military AI use
Last edited Sat Feb 28, 2026, 06:50 AM - Edit history (1)
Source: NPR
By wading into the standoff between Anthropic and the Pentagon, Altman could complicate the Pentagon's efforts to replace Anthropic if it follows through on its threat to cancel the contract. OpenAI also has a Defense Department contract, along with Google, xAI, and Anthropic, but Anthropic was the first to be cleared for use on classified systems.
"I don't personally think the Pentagon should be threatening DPA against these companies," Altman told CNBC in an interview on Friday morning. He said he thinks it's important for companies to work with the military "as long as it is going to comply with legal protections" and "the few red lines" that "we share with Anthropic and that other companies also independently agree with."
"For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety, and I've been happy that they've been supporting our warfighters," Altman added. "I'm not sure where this is going to go."
In an internal note sent to staff on Thursday evening, Altman said OpenAI was seeking to negotiate a deal with the Pentagon to deploy its models in classified systems with exclusions preventing use for surveillance in the U.S. or to power autonomous weapons without human approval, according to a person familiar with the message who was not authorized to speak publicly. The Wall Street Journal first reported Altman's note to staff.
-snip-
Read more: https://www.npr.org/2026/02/27/nx-s1-5729118/anthropic-pentagon-openai-ai-weapons
Editing the next morning to add this note. Apparently there are some important differences between what Amodei had been insisting on and what Altman agreed to.
See replies 6 and 7 below.
SpankMe
(3,682 posts)

I hope this isn't a smoke screen that will dissolve over time when the news cycles on this subject fade.
highplainsdem
(61,297 posts)

many customers outside the Pentagon.
And he can read polls, and he's seen Trump deteriorating.
Altman is nothing if not opportunistic.
snot
(11,646 posts)

Unfortunately, that phraseology does not actually say that OpenAI shares ALL of Anthropic's red lines, just two or more.
highplainsdem
(61,297 posts)

None of them are truly ethical.
But I'm glad to see some of them have some lines they're unwilling to cross.
Musk of course doesn't.
reACTIONary
(7,099 posts)

... my understanding is that Anthropic's red lines are exactly two:

- No widespread domestic surveillance, and
- No fire control authority without a human in the loop.

That right there is a few, and any less is not a few.
muriel_volestrangler
(105,977 posts)

The artificial intelligence startup OpenAI announced Friday night that it has reached an agreement to deploy its technology at the Pentagon while imposing the same kinds of ethical guardrails that have triggered the Trump administration to sever ties with the company's archrival Anthropic.
"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network," OpenAI CEO Sam Altman wrote on X, adding that the Defense Department displayed "a deep respect for safety and a desire to partner to achieve the best possible outcome."
"Two of the most crucial safety principles for OpenAI are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman wrote. He said the Pentagon "agrees with these principles, reflects them in law and policy, and we put them into our agreement."
He added that "we are asking the [Defense Department] to offer these same terms to all AI companies."
https://www.politico.com/news/2026/02/28/openai-announces-new-deal-with-pentagon-including-ethical-safeguards-00805546?utm_source=dlvr.it&utm_medium=twitter
I assume someone's lying about what was agreed, and about what is different from Anthropic's safeguards, but who knows what the lies are.
highplainsdem
(61,297 posts)

A tweet from Josh Kale last night:
Link to tweet
@JoshKale
Everyone's saying OpenAI got the same deal Anthropic was banned for.
Read the fine print. They're not the same:
On weapons:
Anthropic asked for no fully autonomous weapons without human oversight = a human involved in the decision.
OpenAI's deal says human responsibility for the use of force = someone accountable, which can happen after the fact.
Oversight ≠ Responsibility. One requires a human before the trigger. The other requires a name on the paperwork after.
On surveillance:
Dario said explicitly: current law hasn't caught up with AI. The government can already buy your movement data, browsing history, etc. without a warrant. AI can assemble that into a complete picture of your life, at scale. That's mass surveillance without breaking a single law.
Anthropic wanted protections beyond current law.
OpenAI's deal says the Pentagon reflects them "in law and policy." That's existing law as the safeguard, the exact law Anthropic said is insufficient.
Same words. Different agreements. Read them carefully
11:05 PM · Feb 27, 2026