My general dislike of AI writing has had a positive impact on how I read and listen to texts and scripts.

If I'm listening to a nature video, for example, and a sentence is empty of meaning or just illogical, I turn that video off and avoid whoever made it.

Some of the things I've rejected probably weren't made by AI, but I don't see that as a bad thing.

My main issue with AI texts is I just find them kind of patronizing? You want me to sit and nicely listen but you can't be bothered to write?

@futurebird Well, I like to say „if nothing else, GenAI has taught me how little people pay attention to creative work.“

They don’t notice extra fingers or floating trees, they don’t notice badly written texts.
Working in media for years I basically knew that, but LLMs hammered home the point.

If that in turn leads to people actually looking at things, it’s hard to complain, though.

@orangelantern

The extra fingers don't bother me as much as the lack of coherence does. Here is a sentence that really turned me off:

"The balance of salt and fresh water in the body of the cone snail is an essential equilibrium between concentrations of water."

????

I can infer what the training texts that made this might have said: the interesting information about how snails can survive in a salty environment while land snails are sensitive to salt. But, this is saying next to nothing.

@futurebird @orangelantern Seems directly related to the fingers thing, though. Wasn't the main issue causing finger deformation that the model would generate a section based off of what was next to it, so the smaller, intricate, and repetitive details get futzed up?

That sentence looks kinda similar. It took a bunch of sentences that said the same thing, picked the important bits of each one (balance, essential equilibrium, concentrations) and just... used all of them to sound more important?

@futurebird That is because it is basically DadaDodo (https://www.jwz.org/dadadodo/), but instead of Markov chains it uses a neural network. If you claim it really understands anything, I'm going to need stronger proof than "it couldn't answer a question if it didn't understand it."
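For anyone who hasn't seen DadaDodo: the core idea can be sketched as a first-order Markov chain over words. This is a toy illustration of that general technique, not DadaDodo's actual code; the function names are made up for the example:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=10):
    """Walk the chain: repeatedly pick a random observed follower.

    There is no model of meaning anywhere here -- only "what word
    tends to come next" -- which is why the output reads as fluent
    nonsense, just like the cone-snail sentence.
    """
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)
```

An LLM does the same "probable next word" trick with a vastly richer context window and learned statistics, but the generation loop has the same shape.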

@futurebird @orangelantern What you said upthread somewhere about missing out on incoherent human writing is painfully relatable. It's bad enough trying to figure out if you've misread something, missed something earlier, or if the author knew what they were talking about but made a mistake writing it, or just didn't know what they were talking about, or some combination thereof…

@futurebird @orangelantern

This is an essential failure mode of slop text: it's only recombining probable things that might come next, and it's never trying to *say a specific thing* - there's no model of the thing the text is *about*.

So there's no process by which the LLM can express that thing elegantly or tell if it has been expressed at all.

@petealexharris @futurebird Well, no, of course not. Because for that, after all, you need an actual intellect. A mind. The machine can’t tell if something is good or even makes sense. All it can do is sort data by context, which, technologically speaking, is still amazing and has lots of useful applications (even in creative work), but it’s not enough to create something on its own that is reliably useful or good in the way a mind could.

AI won’t replace artists and writers anytime soon. Which doesn’t mean idiot bosses who don’t see extra fingers or incoherent writing won’t use it to kill their jobs, though.

It’s already bad with English but infinitely worse with German, let me tell you. Every sentence this thing translates (because it does all generation in English and translates back and forth) is stilted and artificial at best, completely nonsensical at worst. And it completely falls apart when confronted with dialects. Nevertheless, German television is killing off subtitling jobs to replace them with this tech that barely speaks English, never mind Low German or Bavarian. Madness. And lots of magical thinking. If nothing else, this will be funny once they realize it doesn’t work.

While truly useful in many ways, this tech is being oversold in a way that can only be described as lunacy, complete and utter madness. They know it can’t deliver the sales they promised their investors, so they double and triple down, promising real magic their tech absolutely can’t do. This could all have been avoided with a little dose of realism instead of AI-generated hallucinations. Never drink your own Kool-Aid, I guess.

@orangelantern @futurebird I mean, you have to give that an incredibly creative reading to even call it anything resembling reality.
Is there salt water in a cone snail's body? No.
Is there fresh water in a cone snail's body? No.
It starts at completely wrong and gets worse from there.

@kevingranade @orangelantern

As a person who knows only a smattering of facts about cone snails, this sentence first made me feel like my reading comprehension was getting worse... then it just disgusted me.