"Cancel ChatGPT" movement goes mainstream after OpenAI closes deal with U.S. Department of War — as Anthropic refuses to surveil American citizens

https://lemmy.nz/post/34870782

"Cancel ChatGPT" movement goes mainstream after OpenAI closes deal with U.S. Department of War — as Anthropic refuses to surveil American citizens - Lemmy NZ

Lemmy

This honestly strikes me as a story people don't understand. Mass surveillance is not lawful, and the government accordingly agreed not to do it. However, they still needed the guardrails removed. People interpret this as the government wanting mass surveillance, but that's not necessarily true.

I work for a company that uses AI for legal work, processing and analyzing court cases, discovery documents, etc. We had problems with AI models like Gemini and GPT refusing to do what we needed because of guardrails against violence and abuse of minors. They refused to discuss and analyze cases that involved murders described in detail, cases involving child molestation, and so on. We weren't using it for unlawful purposes; very much the opposite.

I feel like if people knew that we, like the DoD, had to use uncensored models that allow such content, they would complain: "Wow, you guys are trying to remove guardrails for child porn and violence! How terrible!"

Is it so shocking that a military needs its AI to handle such material even when it isn't carrying those acts out? It cannot afford to have its AI respond in critical moments with "sorry, my guidelines say I can't help with this."

This seems like the time Trump advised pregnant women against using Tylenol, so people started buying and using it in protest. This is yet another reaction against Trump, but people are pretending Anthropic is taking a stand for the people and OpenAI somehow isn't. It's not that simple.

The military, the department of government responsible for mass murder, should not have any fucking AI in their systems, absolutely anywhere. Doubly so without any sort of guardrails.

Why? I can't think of any reason that would not also preclude their use of all computer-assisted tools.

Because no other computer-assisted tools are straight up fucking wrong half the time?

If your AI tools are wrong half the time, you're using them wrong. My legal AI is linked to databases of statutes and case law, providing results more reliable than most legal professionals.

No, I'm not using it wrong. It's just wrong. This is not my opinion; this is a statistical fact that's been studied over and over again. People are already being harassed and endangered and jailed by cops over their own fucking eyeballs or govt documents. Now imagine those cops have fucking fighter jets and missiles and give absolutely no fucks. You and your AI can get absolutely fucked. I hope you're disbarred like the other dumbass attorneys who show up with hallucinated laws and cases.

It's not factual. You're just an idiot typing a single prompt, probably with no agentic loop or curated database to keep it in line. Then you get mad like a caveman wondering why sticks only give fire half the time, because you're not fucking understanding what you're working with.

I'm not working with anything. I didn't conduct these tests; they were conducted by scientists. It took me 12 minutes to realize how completely fucking pointless these tools are. They even tell you as much, right at the bottom: "Please verify critical facts." If I have to go and fucking Google everything it tells me anyway to verify it, then what is even the point?