“I’m just so exhausted by both sides of the AI argument” imagine being the kind of piss baby who sees a mass of people righteously furious at a system designed from the top down to fuck them over, and decides the anger is just as much of a problem as the exploitation

oh cool yet another “I’m exhausted by all the anti-AI anger” post got boosted onto my feed

please don’t let anyone gatekeep your own anger away from you, especially when your ability to exist and to make a living are both under active attack. if you’re angry because of LLM-enabled war crimes, the loss of our software commons, or any of the other wrongs the AI industry has started that never stopped, you aren’t wrong. you don’t owe the people enabling and forgiving this stuff your kindness.

and since there was some very off stuff in this last post about how “morality isn’t absolute” in “the trans discourse”, let me make something clear

the AI-critical spaces I help operate are for trans and marginalized people above all else. they are spaces with no place for people who exploit other people.

in AI and all other “discourse”, if that’s what we’re calling the trans genocide being orchestrated by the far right, I am and always will be against the side doing the exploitation.

I cannot keep doing this, but since today’s post has seen a number of boosts:

the people asking you to stop forcing slop and ongoing damage onto their communities are not purity testing you. they’re asking for basic human decency.

no, disliking LLMs doesn’t make you… a list of right-wing stereotypes for leftists???? is this what we’re doing now?

please don’t use ChatGPT to self-diagnose or treat medical issues. seriously, please don’t do that or advocate for it.

it is of course rich as always that the examples of purity testing are shit like vegans being correctly harsh about the environmental impact of the meat industry (and I say this as someone who, for medical reasons, eats way too much meat) and not, for example, getting kicked out of a leftist group for expressing support for the people carrying out one of the ongoing genocides that use LLMs both to select targets and to hinder accountability for the resulting murders. you know, aka “kicking out an asshole”

anyway seriously please don’t use ChatGPT or any other LLM to self-diagnose or treat a medical condition. it’s such a dangerous thing to do that OpenAI warns you to not do it and they’re the type that has no qualms about their technology being used to kill people at random.

LLMs are designed to trick people, and an LLM has already tricked you into being a fucking asshole. if you trust an LLM’s output for medical issues, it won’t be long until it tricks you into being a dead fucking asshole.

@zzt Oh I saw that post boosted into my feed too. That was exactly my reaction.

Regardless of your stance on LLMs, it is literally and provably unsafe to turn to them for medical advice, and that's the very last use case anyone should promote.

I didn't want to start an argument, but it made me so angry. Thanks for putting this into words.

@brib of course! I have a couple more posts about it in this thread too: https://mas.to/@zzt/116232736363646650 which might be interesting. there’s so many dangers to using and advocating for LLMs like this, and it feels so bad to see people falling into patterns that I’ve seen have terrible consequences before.

@zzt this one hit close to my heart because I’ve had two family members die in large part because their caretaker ignored medical advice and used awful alternative medicine information from the internet to try and treat them. an LLM can’t do critique. as you’ve said, truth is not a data type in an LLM. all of these models suck in every form of medical crankery available on the internet, mix it with words from authentic medical sources, and present it all as credible.

@zzt Yeah, I've been following the studies on the matter fairly closely (yay bromism!). iirc the OP mentioned using LLMs for mental health advice too, which also has a body count.

@brib shit that’s even worse, we have very recent examples of the severe consequences of trusting LLMs for mental health advice.

I really do hope that OP finds the help they need that isn’t an LLM. I don’t want there to be more victims of this stuff.