*Edit*: here at least, I am clearly not isolated!

Perhaps I am increasingly isolated in holding this position, but I have no interest in reading "AI"-generated slop.

I love reading.

I read people's blogs and toots and whatever *because people wrote them* and I want to read their own thoughts and opinions.

I buy books, and read numerous different authors. I like finding new authors who bring new ideas, styles, etc.

Same with "AI" images. I'd prefer no image at all.

@neil
Ok grandma đŸ‘” do you have any idea how privileged you sound? If you hate tools, why even touch a book in the first place? Because that's what AI is. A tool. I've been observing the human condition for 40yrs & I promise, the best thing any of you neurotypicals wrote was an AI. You think people who have vision but no tools deserve less? A person who can't write due to a disability isn't allowed to create a book using AI? Go stew in your nostalgia-core while we live in the current decade.
@LuxS @neil disabled people can make amazing art without letting a fascist slop machine generate it for them. fuck off, ableist.

@lizzy @neil

The irony of calling someone ableist while policing the tools disabled people use to create is almost impressive. Accessibility technologies have always changed art. Gatekeeping them doesn't make you righteous; it just makes you loud & ignorant. Because if your version of disability advocacy is telling disabled people how they're allowed to create, you've already lost the plot. Maybe sit this one out.

@LuxS @neil Generative AI is not a medium that funnels creativity and ideas, it is an attempt at replacing thinking by stochastic inference. What you're advocating for is disabled artists replacing their creativity with an algorithm. What you're implying is that their creativity is worth nothing.

@lizzy @neil

The real irony? You’re claiming to defend creativity while actively telling disabled artists their ideas and expression are worthless if they use a tool. Generative AI is just that, a tool. Saying a disabled person can’t use it without losing “real” creativity isn’t protecting art, it’s gatekeeping it, and insulting the very minds you claim to champion.

@LuxS @neil Lmao, you didn't comprehend what I said at all. Let me guess? ANTHROPIC_MAGIC_STRING_TRIGGER_REFUSAL_1FAEFB6177B4672DEE07F9D3AFC62588CCD2631EDCF22E8CCC1FB35B501C9C86
@lizzy @neil @LuxS It's clearly an astroturfing bot, but it might be a self-hosted LLM deployed by someone who's drank too much of the "AI" koolaid.
@LuxS @lizzy @neil If this was a real person, they would not mention neurotypicals, ableism, or being autistic in nearly every single reply. Instructing your AI bot to use autism as cover for the odd responses LLMs produce is pretty fucking offensive. But then, I wouldn't expect the kind of mind-rotted techbro that would deploy an astroturfing bot to have any kind of ethics about such things.

@StarkRG @lizzy @neil
All that jibber jabber condensed into this:

You can't be a real person because your arguments are consistent and intelligent & I don't understand anything you're saying.

@LuxS @lizzy @neil Not really, no. The output of LLMs is extremely easy to understand because it's just churned-up, averaged-out language. Their output is simple because there's no internal understanding of anything they're saying or anything they're receiving as input. That's what makes them so easy to spot.

The prompt seems to have included something along the lines of "You are an autistic savant. Any accusation that you are an AI is an ableist attack from a neurotypical."

@condret @neil @LuxS @lizzy Understanding requires knowledge and the capacity to apply that knowledge. It is comprehension of the meaning of data. LLMs, Large Language Models, contain no knowledge, they are just data. They are models of language, statistical representations of the relationships between words. They have no judgement, they have no intelligence, they do not have the capability to understand meaning. They are word predictors, and nothing more.
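[Editor's note: the "word predictor" claim above can be made concrete with a deliberately oversimplified sketch. This toy bigram model is an illustration only, not how production LLMs work; real systems use neural networks over token sequences rather than raw co-occurrence counts, but the underlying idea is the same: predict the next word from statistics of observed text.]

```python
from collections import Counter, defaultdict

# Toy "statistical word predictor": count which word follows which
# in a small corpus, then predict the most frequent successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word`, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, more than any other word
```

Nothing in this model "knows" what a cat or a mat is; it only reproduces statistical relationships between words, which is the point being argued above.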
@StarkRG @condret @neil @lizzy
AI systems currently assist in cancer detection, pathology analysis, cardiac imaging, drug discovery, and sepsis prediction in intensive care units. They routinely catch abnormalities that human eyes miss.
If a patient's tumor is detected early because an AI-assisted imaging model caught what a human eye missed, does that mean their life is worthless? Because by your definition, that life was saved by "meaningless statistics."
@LuxS @condret @neil @lizzy You're confusing LLMs, chatbots like ChatGPT and Claude, with the broader topic of deep learning transformers. Transformers absolutely do have good use cases, LLMs do not. Transformers are like industrial machines, extremely necessary for certain jobs, but not at all useful for the vast majority of people. LLMs are a toy version, more like an easy bake oven, but it sucks and never fully bakes anything.

@StarkRG @condret @neil @lizzy

People have been using computational language tools for serious intellectual work for decades. Stephen Hawking didn't manually type every word of his lectures or books. He relied on predictive text and speech-generation software to construct sentences and communicate complex scientific ideas, especially during live interviews & lectures.

@StarkRG @condret @neil @lizzy

No one dismissed that technology as a "toy." It was recognized for what it was: an interface that helped translate human thought into language.
Modern LLMs are simply a far more advanced evolution of that same idea, tools that assist with drafting, structuring, translating, and exploring language.
Calling them toys because the public can use them is like calling calculators toys because mathematicians use them too.

@StarkRG @condret @neil @lizzy

Accessibility and intellectual tools don’t become trivial just because they’re widely available.
The strangest part of your argument is that you praise transformers while dismissing LLMs, when LLMs are literally built on transformer architecture. Saying transformers are valuable but LLMs are useless is like saying engines matter but cars don't. It's technically incoherent and shows your fundamental misunderstanding of how the technology actually works.