https://thewalrus.ca/the-new-york-times-got-caught-using-ai-hallucinations-in-its-reporting/

It's not like NYT had any credibility this century anyways, but I cannot overstate how much I hate what LLMs have done to folk epistemology. Butlerian Jihad is the *moderate* position.

"ChatGPT uses this one simple trick to derail critical thinking!"

#LLMsAreNotAI

RE: https://neuromatch.social/@jonny/116549136562518939

This thread. Haven't laughed so hard all week.

#LLMsAreNotAI

$LC_DEITY $CURSE

Molotovs over the doorstep *is* the moderate position:

https://www.thatprivacyguy.com/blog/anthropic-spyware/

There is no evidence to assume good faith here, and strong evidence of bad.

#LLMsAreNotAI

Anthropic secretly installs spyware when you install Claude Desktop — That Privacy Guy!

Anthropic's Claude Desktop silently installs a Native Messaging bridge into seven Chromium browsers, including browsers Anthropic's own documentation says it does not support, and browsers the user has not even installed.

That Privacy Guy!
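For context on the mechanism described above: Chromium-family browsers register native-messaging hosts via a JSON manifest dropped into a per-browser, per-user config directory, which is how an installer can wire itself into browsers it never asked about. A minimal sketch of where to look on Linux — the manifest filename used here (`com.anthropic.claude.json`) is an assumption for illustration, not a confirmed name:

```python
from pathlib import Path

# Documented Chromium-family native-messaging manifest dirs (Linux, user scope).
CHROMIUM_NM_DIRS = [
    ".config/google-chrome/NativeMessagingHosts",
    ".config/chromium/NativeMessagingHosts",
    ".config/microsoft-edge/NativeMessagingHosts",
    ".config/BraveSoftware/Brave-Browser/NativeMessagingHosts",
    ".config/vivaldi/NativeMessagingHosts",
]

# Hypothetical manifest name; the real filename Claude Desktop writes
# is not confirmed here.
MANIFEST_NAME = "com.anthropic.claude.json"

def candidate_manifests(home: Path) -> list[Path]:
    """All paths where such a bridge manifest would live."""
    return [home / d / MANIFEST_NAME for d in CHROMIUM_NM_DIRS]

def installed_manifests(home: Path) -> list[Path]:
    """Which of those candidate manifests actually exist on disk."""
    return [p for p in candidate_manifests(home) if p.is_file()]
```

Running `installed_manifests(Path.home())` shows which browsers — installed or not — have had a bridge manifest written into their config tree; the manifest lands whether or not the browser itself exists, which is the behavior the post objects to.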

Unreliable Pot, meet unbelievable kettle:

https://blog.mozilla.org/en/mozilla/ai/microsoft-copilot-ai-user-choice/

"The Copilot rollout followed the same playbook we've come to expect from Microsoft: use automatic installs, physical hardware, and default settings to force behaviors."

/me points at the existence of an AI kill switch in Firefox, instead of an AI enable switch

#DefaultsMatter #LLMsAreNotAI

Old habits die hard: Microsoft tries to limit our options, this time with AI | The Mozilla Blog

Microsoft recently announced it’s pulling back Copilot from several of its core Windows apps — Photos, Notepad, the Snipping Tool, and Widgets. Rol

It's nice to be reminded that we're not the last to consider ethics. What we do matters, and what we accept as normal also matters.

https://www.garfieldtech.com/blog/selfish-ai

#LLMsAreBullshit #LLMsAreNotAI

Selfish AI | GarfieldTech

via @ubi

https://ecoevo.social/@ubi/115168223686753324

A sad and traumatic story of someone rescuing themselves in time to avert horrible consequences for their reputation; sad because their (ex-)co-author, who should have been a responsible academic, committed fraud and stupidity instead.
#LLMsAreNotAI

ubi (@[email protected])

Bloody hell. I just noticed that a coauthor turned in a manuscript containing a hallucinated reference. I know that the reference is fake because it has my name as the first author, but I never wrote that paper or published in that journal before. It's shaped like a paper that I could have plausibly written, but it's not the paper that I wrote about on the same topic on that year... Not sure what to do about this. #AcademicChatter

ecoevo.social

Doctorow is both a hard-working and a gifted writer, but every once in a while a piece knocks it out of the park and into the next town, and today is our lucky day:

https://pluralistic.net/2025/07/15/inhuman-gigapede/#coprophagic-ai
"...a system that sends high-flying companies into a nosedive the instant they stop climbing."

Google's only options are to fuck everyone over even more than they have so far and/or die trying.
#LLMsAreNotAI

Pluralistic: When Google’s slop meets webslop, search stops (15 Jul 2025) – Pluralistic: Daily links from Cory Doctorow

@[email protected] can I do to encourage your resistance to temptation? Some reminders, perhaps.

There are no benefits - any illusion of "immediate" gain comes from failing to understand what is happening.
Statistical token generation is not AI.

Code is already FUCKING HARD to get right. Do you honestly expect to accurately vet or debug code that is literally formed to look plausible, despite containing no element of truth except by accident?

#LLMsAreNotAI

The perfect explainer for how so many victims fall for the LLM con:

https://softwarecrisis.dev/letters/llmentalist/

eg. "A popular response to various government conspiracy theories is that government institutions just aren’t that good at keeping secrets. Well, the tech industry just isn’t that good at software. This illusion is, honestly, too clever to have been created intentionally by those making it."

#LLMsAreNotAI

The LLMentalist Effect: how chat-based Large Language Models rep…

The new era of tech seems to be built on superstitious behaviour

Out of the Software Crisis
I am going to try to read an article about someone who tried programming something with the use of LLMs.
I fear I may regret this.
#LLMsAreNotAI