Paper Planet (@_paperplanet_): Trump blacklists AI company Anthropic for refusing to build unaccountable killer robots

Feb 28, 2026 8:32 PM

OceansRust

Views 31025 | Likes 1271 | Dislikes 21

Paper Planet's sources include: TechCrunch 2.27.26, 'Anthropic vs. the Pentagon: What's actually at stake'; Dario Amodei public statement 2.27.26; CNBC 2.27.26 Sam Altman interview.

I’m going to miss the Onion

3 weeks ago | Likes 4 Dislikes 0

Every human should agree that every AI company should burn. Humans should be the only ones involved in warfare.

3 weeks ago | Likes 2 Dislikes 0

Well, that Sam Altman comment aged like milk.

3 weeks ago | Likes 2 Dislikes 0

*unaccountable

3 weeks ago | Likes 2 Dislikes 0

The "Guns don't kill people" side now wants a gun they can blame for killing people

2 weeks ago | Likes 2 Dislikes 0

there was a bet on one of the betting apps that he would do this before the end of the month

2 weeks ago | Likes 2 Dislikes 0

A computer can never be held accountable; therefore, a computer must never make a management decision.

3 weeks ago | Likes 6 Dislikes 0

until you put rules and laws into place holding people responsible for the computer's bad management decisions. It's not that complicated.

3 weeks ago | Likes 2 Dislikes 0

In a sane world you would blame the guy who decided to use AI that kills the wrong people. The military agreed that you could never have an AI 'pull the trigger' until Pete Hegseth was in charge. Though I suspect it is more about mass surveillance. Some of the top men at Palantir are important to Trump too (Peter Thiel), and that is their specialty.

3 weeks ago | Likes 2 Dislikes 0

Pete is not sane.

3 weeks ago | Likes 1 Dislikes 0

I'm more worried about the mass surveillance, it makes everything far more dangerous and I have no doubt they would turn it on without a question. We already know they have some level of mass surveillance with PRISM https://en.wikipedia.org/wiki/PRISM (and I'm sure it's grown since then), but I doubt they have a way to effectively use that information.

3 weeks ago | Likes 5 Dislikes 1

I suspect this has more to do with the mass surveillance stance than the autonomous killing stance.

3 weeks ago | Likes 2 Dislikes 0

why use AI slop to make a video about this? I can't take this seriously. Everything said in this is basically worthless trash that I would have to double and triple check. It's like a scam email; I am not reading this on the off chance it could be real.

3 weeks ago | Likes 2 Dislikes 1

“We used a slop machine to make a garbled video about how a lone slop company in the slop industry is holding to their contract and doing the bare minimum”

3 weeks ago | Likes 2 Dislikes 0

the bad blood started when Anthropic refused to hire one of pedotus' goons (Eric or Junior) as CFO of the company.

3 weeks ago | Likes 4 Dislikes 0

Source?

3 weeks ago | Likes 3 Dislikes 0

It's like trying to blame a car when you wreck it. Nah bitch, you were still in control of the technology

3 weeks ago | Likes 2 Dislikes 1

So. Here's the idea. We feed autonomous AI operated weapons our current laws, then we feed them the Epstein Files, and we provide a directive to determine if drone strikes on those people in the files would make the world a better place. Then we wake up one morning, and Trump and his cabinet are just gone. And then politicians that remain will make a law banning autonomous AI WMD.

3 weeks ago | Likes 7 Dislikes 2

Alec Baldwin picked the wrong day to do his impression

3 weeks ago | Likes 1 Dislikes 0

Never thought I'd be lucky enough to be around when THEY INVENTED SKYNET...

3 weeks ago | Likes 3 Dislikes 0


Send your company to Canada. We'll make use of it lol

3 weeks ago | Likes 1 Dislikes 0


Republicans, specifically the Trump admin, want to commit crimes and never go to jail. I feel like a new law should be enacted: if you use AI FOR ANYTHING and it kills someone, you and everyone who is part of the administration should face jail time. Watch them avoid it after that.

3 weeks ago | Likes 1 Dislikes 0

Luckily American citizens will do their job of defending the rest of the world from this threat by deposing their dictator.

3 weeks ago | Likes 1 Dislikes 0

Also, when you blame AI you can basically blame the company that produced it. That's probably why they refused to allow their AI to be used for things more malicious than the usual.

3 weeks ago | Likes 1 Dislikes 0

So, like, hypothetically if an AI drone dropped a bomb on a golf course somewhere and a fat orange thing with some human features exploded and pieces of McDonald Big Macs were found in a 500 yard radius no one would be responsible?

3 weeks ago | Likes 1 Dislikes 0

The fact that this is an AI generated video is insanely tone deaf.

3 weeks ago | Likes 14 Dislikes 2

It's actually the perfect use for AI.

3 weeks ago | Likes 1 Dislikes 8

The perfect use for AI is turning the data center into something useful, like undeveloped land.

3 weeks ago | Likes 3 Dislikes 0

What if I told you half of imgur is an idiot.

3 weeks ago | Likes 1 Dislikes 0

If you are making a video defending Anthropic then you do not think AI is inherently evil, just dangerous and requiring caution. Anthropic was founded by the researchers who said that the current AI revolution should be done slowly and cautiously and realized nobody was listening.

3 weeks ago | Likes 1 Dislikes 0

The incredibly sad thing is that if we look at history for new things that could destroy everything, we see that they tend to stay. There are always people pushing limits, there is always a warning. There will always be people dying in greater numbers, just like the warning predicted. Then things are adjusted and it's just a new horror we live with. It's happened with pretty much every new weapon type.

3 weeks ago | Likes 21 Dislikes 1

The difference is there was always a person who had to pull the trigger or push the button.

3 weeks ago | Likes 6 Dislikes 0

Oh I realize it's exponential and just makes greater leaps towards our inevitable end

3 weeks ago | Likes 1 Dislikes 0

Shit's fucked when an AI company are the (relatively speaking) good guys.

3 weeks ago | Likes 4 Dislikes 0

Weird isn't it that they're the last line of defense against the government and department of defense.

3 weeks ago | Likes 2 Dislikes 1

They aren’t though, they’re a single company in an inherently wasteful industry that’s still destroying resources to make slop, this video included. C’mon, they couldn’t figure out what PENTAgon means?

3 weeks ago | Likes 1 Dislikes 0

Anthropic didn't make this video. An artist for NPR did, he uses paper animations to make his videos. Anthropic is in a direct fight against the Government who have taken over most of the AI Companies, along with the media – social, news, network. IDK why the artist distorted certain things in his video, but a lot of people on Imgur simply want to fight the wrong battles, or just aid the bots both foreign and domestic in whatever their goals are. Freedom's on fire yo.

3 weeks ago | Likes 1 Dislikes 0

Apparently AI couldn't figure out how many sides to put on something called The Pentagon.

3 weeks ago | Likes 15 Dislikes 1

Yeah, the most ethical slop machine is still just a slop machine

3 weeks ago | Likes 2 Dislikes 0

Thereby proving anthropic's point.

3 weeks ago | Likes 5 Dislikes 1

They're not good, They're a Peter Thiel funded group?

3 weeks ago | Likes 87 Dislikes 5

No, that would be palantir. https://en.wikipedia.org/wiki/Anthropic

3 weeks ago | Likes 1 Dislikes 0

Nothing is binary. Grey is the color of the world. Not white, Nor black.

3 weeks ago | Likes 7 Dislikes 2

If what you claim is true, then none of this would be an issue, would it? There definitely wouldn't be a public spat between Anthropic and the duo of Trump and Hegseth.

3 weeks ago | Likes 12 Dislikes 1

I'm afraid it's still in the interest of any company, good or evil, to avoid the liability. If the AI made the mistake, the creators of that AI end up with the blame. It just makes sense to refuse, especially with an administration that will happily scapegoat at the nearest opportunity. Still the right thing to do for ethical reasons too, I just don't think it's enough to prove motive on its own.

3 weeks ago | Likes 1 Dislikes 1

Peter Thiel funded does not in any way equate to "bad"; it just makes it more likely for those companies to do bad things to appease the shareholders, since they lack the guardrails any other corporation would have, like having a conscience. They can still have good principles and do good things, as Thiel generally isn't directly involved in the decision making.

3 weeks ago | Likes 5 Dislikes 4

D̶o̶n̶'̶t̶ ̶B̶e̶ ̶E̶v̶i̶l̶

3 weeks ago | Likes 13 Dislikes 0

D̶o̶n̶'̶t̶ Be Evil

3 weeks ago | Likes 3 Dislikes 0

I haven't forgotten

3 weeks ago | Likes 6 Dislikes 0

But they’re not wrong.

3 weeks ago | Likes 43 Dislikes 1

Broken clocks and all that

3 weeks ago | Likes 16 Dislikes 0

Which means it's a negotiating tactic for PR.

3 weeks ago | Likes 5 Dislikes 12

No, it fucking doesn't. It just means they don't want killer robots exterminating humanity.

3 weeks ago | Likes 15 Dislikes 4

Someone learned from Terminator, Evolver, Robocop and other movies. Maybe Peter Thiel and the other billionaires want to die while they have money but just want everyone else going down with them.

3 weeks ago | Likes 6 Dislikes 1

They want OTHER people to die. Not them.

3 weeks ago | Likes 5 Dislikes 0

It can’t be emphasized enough: in July 2025, the Trump DoD accepted Anthropic’s terms and conditions for use of their product and signed a contract to adhere to those conditions. Only after Palantir complained that it couldn’t work around those conditions did it suddenly become Anthropic cheating the DoD out of their “right” to do whatever they wanted.

3 weeks ago | Likes 326 Dislikes 1

Well, I am sure they will win in court and trump will continue to ignore the courts.

3 weeks ago | Likes 31 Dislikes 0

It’s also hard to explain how unprecedented the actions they’ve threatened against Anthropic are: essentially labeling a private American company an enemy of the state and persona non grata. Technically, employees of companies with any federal contract couldn’t use any of Anthropic’s tools, even for work unrelated to those contracts.

3 weeks ago | Likes 77 Dislikes 0

So now OpenAI shares those same red lines with Anthropic, but the DoD just immediately accepted that and signed a contract with them right after they banned Anthropic? Is it just me or does anybody else have vibes that there's something else going on behind the scenes here causing the shift over to OpenAI? Is Sam Altman more buddy buddy with this administration? Last I'd heard, his company was more on a slippery slope of the inflated bubble (given wildly larger expenses and very little revenue).

3 weeks ago | Likes 28 Dislikes 0

The Pentagon announced that they will use Grok instead, because Musk doesn't have those pesky moral quandaries.

3 weeks ago | Likes 1 Dislikes 0

Sort of move I'd expect from the current US administration, focus on hype and big shows of basically nothing instead of actual productivity.

3 weeks ago | Likes 8 Dislikes 0

Open AI just knelt down and ate diaper ass, that's what happened. They don't share the same red line.

3 weeks ago | Likes 3 Dislikes 0

OpenAI is losing billions, so Sam Altman caving would make a lot of sense, because it's his ass that's on the line if the company files for bankruptcy.

3 weeks ago | Likes 24 Dislikes 0

I mean, not really. He's divested (almost?) entirely from it. And even if he didn't, he has investments in hundreds of other things. Yeah, it's a visual and prominent thing for him, sure. But unlike openai employees, if it explodes, he's still going to afford a house (and a second and third) just fine. Don't get me wrong he WILL cave to at least some things 'to save openai', but he'd be less impacted than regular folk.

3 weeks ago | Likes 1 Dislikes 0

that was my take too. altman said he supported anthropic and "mostly" agreed with them but that "mostly" probably leaves room for a lot of loopholes and gray area for the DoD to work around while also giving openai plausible deniability

3 weeks ago | Likes 7 Dislikes 0

"mostly" means = until I'm paid A SHIT TON to ignore it.

3 weeks ago | Likes 3 Dislikes 0

Feels like another episode of Black Mirror waiting to happen. At least I feel like it was an episode of it, or maybe just a short made by some of the talent behind it. Idk.

The thing had these small autonomous drones that were basically tiny suicide bombers. They delivered a small payload of explosives that would blow a person's head off, worked off a facial recognition system and swarm intelligence, and were touted as a way of taking out terrorist groups or whatever. The story had them immediately-

3 weeks ago | Likes 3 Dislikes 0

-turned on a university, with the things having been given a kill list of basically all the students. So just a swarm of small drones flying in, blowing up, and confirming people as dead. Horrifying piece of media from years back. And now some group of jackasses is deciding that they want to, yet again, create the Torment Nexus.

Awesome timeline we have. Just...great. Real. Fucking. Great. Love that tech bros keep deciding to create the Torment Nexus. So happy for that hubris or whatever. /s

3 weeks ago | Likes 4 Dislikes 0

I know an ethics consultant that works with AI developers... and the important part of that is the AI developers need ethics consultants, because they are not seeing how problematic their shit is themselves.

3 weeks ago | Likes 1 Dislikes 0

Insane how this entire issue appears to be going the way that sci-fi authors, sociologists and philosophers have been predicting for 100+ years, right? Who would have thought that such a thing were possible? /s

3 weeks ago | Likes 49 Dislikes 2

almost like these oh so creative and innovative Tech products arent as creative as companies like to pretend

3 weeks ago | Likes 2 Dislikes 1

The crazy part is most of them saw this kind of thing as an unintentional side effect of an AI they couldn't control; most of them never predicted it as an intentional choice...

3 weeks ago | Likes 5 Dislikes 0

Yeah how did all those end again?
The little plucky peasants win don’t they?

3 weeks ago | Likes 8 Dislikes 0

Either that or all of humanity destroys itself.

3 weeks ago | Likes 7 Dislikes 0

The Hound in those books is terrifying. It has syringes in its jaws and super fast speed and super strength and is extremely precise. Basically if the chief decides to send the Hound after you, it'll get you no matter what. Boston Dynamics SPOT is not really close to those capabilities yet.

3 weeks ago | Likes 4 Dislikes 0

Robot with syringes for offensive use you say? https://www.youtube.com/watch?v=e0k2YvEns-M

3 weeks ago | Likes 4 Dislikes 0

Actually, that first sentence – about the trump administration declaring war, but not on a foreign country – didn't age very well, did it?

3 weeks ago | Likes 76 Dislikes 0

Whatever do you mean? Killing citizens of the right colour?
So many to pick and choose from?

3 weeks ago | Likes 1 Dislikes 0

If this administration has proven anything, it's that they can do a LOT of awful things at the same time.

3 weeks ago | Likes 12 Dislikes 0

And Sam Altman's OpenAI signed a deal with Pentagon that crosses his "red line".

3 weeks ago | Likes 14 Dislikes 1

Please provide a source, so I can get my company to cancel its ChatGPT agreement and move to Claude

3 weeks ago | Likes 6 Dislikes 0

It appears the reporting I saw was wrong. Reading the NPR reporting there is obviously something left out, as supposedly OpenAI is getting the provisions Anthropic wouldn't get in their contract. https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic-pentagon-openai-ai-weapons-ban

3 weeks ago | Likes 6 Dislikes 0

Thank you. I'm still very curious because if OpenAI is holding them to that same guideline, what's the incentive to change other than pride?

3 weeks ago | Likes 1 Dislikes 0

Afaik the US didn't declare war on Iran. They just attacked.

3 weeks ago | Likes 7 Dislikes 0

Hmmm…just like Japan did on December 7th, 1941.

3 weeks ago | Likes 1 Dislikes 2

Japan did declare war on the US, it just arrived too late due to translation times

3 weeks ago | Likes 1 Dislikes 0

Comparing one war crime to another doesn't make the newer one any better... it makes it worse.

3 weeks ago | Likes 2 Dislikes 1

I’m agreeing with you. ‘Preemptive strike’ and ‘sneak attack’ are the same thing spelled differently.

3 weeks ago | Likes 1 Dislikes 0