Don't anthropomorphize LLMs, language is important. Say "the bot generated some text" not "the AI replied". Use "this document contains machine-generated text" not "this work is AI-assisted". See how people squirm when you call out their slop this way.
@[email protected]

Even talking about "text", in the context of #LLM, is a subtle anthropomorphization.

Text is a sequence of symbols used by human minds to express information that they want to synchronize, a little, with other human minds (aka communicate).

Such synchronization is always partial and imperfect, since each mind has different experiences and information into which the new message gets integrated, but it's good enough to allow humanity to collaborate and to build culture and science.

A statistically programmed software has no mind, so even when it's optimized to produce output that can fool a human and pass the #Turing test, such output holds no meaning, since no human experience or thought is expressed in it.

It's just the partial decompression of a lossy compression of a huge amount of text. And as if that weren't enough to show the lack of any meaning, the decompression process includes random input whose only purpose is to provide the illusion of autonomy.
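That "random input" is, in practice, temperature sampling. Here is a minimal sketch (not any specific model's code) of how a next token is typically drawn from the model's scores, with an explicit random draw:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Pick the next token index from raw model scores (logits).

    The rng draw is the 'random input' mentioned above: with
    temperature > 0, the same prompt can yield different output.
    """
    # Softmax with temperature: higher temperature flattens the distribution.
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random choice over token indices.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]
```

With temperature close to zero the most likely token almost always wins and the output becomes near-deterministic; raising it injects more randomness, which is what gets read as "creativity".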

So instead of "the AI replied" I'd suggest "the bot computed this output" and instead of "this work is AI-assisted" I'd suggest "this is statistically computed output".
@giacomo @gabrielesvelto

I do like the "this is statistically computed output" suggestion.

I have an intense dislike of active verbs being applied to LLM output. Yes, the program ran, and there is output, but there is zero intention behind it.

@[email protected]

#LLM output lacks intention, awareness or meaning.

It's designed to fool the human mind by exploiting the statistical patterns that humans use to synchronize (aka communicate) the information they hold in their minds, but there's no mind there.

No intelligence, just malicious use of statistics.

@[email protected]