seeing incredibly dull people talk about shit like how chatboxes are going to destroy society cause they're too smart, or how statistical word predictors are So Good And Capable Now and i'm starting to think that slop machine usage should be considered an environmental hazard along with lead and cadmium
are they actually better or has your exposure to slop exceeded 1 character per million?

thing that always gets me is seeing the computer scientists - who i know understand how it all works and should know better - yap about the possibility that the text prediction models they talk to could be sentient.

it's like an ee looking at a wall outlet and then going Holy Shit is that a little guy with hopes and dreams and a rich inner life????

for reals, i'm serious about the environmental risk thing. like, i'm almost 100% sure that in a year or two we'll be seeing a whole cottage industry of "ai detox" self-help books, and probably eventually some new form of therapy focused on regaining executive function & critical thinking skills

side note i'm calling it rn that the exact buzzword will be "ai detox"

@eclairwolf I've been comparing slop stuff to asbestos. As in, we put it in everything right now, and then in the future, there will be good money in removing it.
@ainmosni I compare generative AI to cars. A technology that is interesting and that can be useful *when used in moderation*, but if you subordinate everything to it and make the whole world dependent on it, you create an unimaginable dystopia
@patrislav @ainmosni @eclairwolf my thinking was: don't car lovers also anthropomorphize cars? And I'd argue many of them are car mechanics.

@eclairwolf @ifixcoinops "reclaiming your autonomy!"

(that you gave away to a chatbox)

@eclairwolf I feel like this is just completely dispelling the illusion that "intelligence" is a thing that exists at all

It turns out, being good at one mental task (coding), even being EXTREMELY good at it, does not correlate at all with being good at others (recognizing whether a thing fucking sucks or not)

@eclairwolf ok but plug sockets are just little guys if you think about it
@mynotaurus
I just want them to be happy.
@mynotaurus the most Little Guys of all time,,, i want mine to be happy
@eclairwolf actual computer scientists cringe at ideas of "AGI" and "this set of NN coefficients is sentient".
It is the CEOs / techbros at the top of the ponzi scheme, and their disciples, who push those delusions.
@patrislav @eclairwolf unfortunately nothing stops actual computer scientists from becoming disciples of techbros — many actual computer scientists are into these things because they believe the slop machines will eliminate the need for them to write unit tests or documentation

@eclairwolf This is something that genuinely and completely baffles me

If someone knows and understands how the technology works and how it's a combination of weighted sums, algorithms and training data... how can they begin to think it's a sentient being?

By that definition, are mathematical equations sentient...?
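the "weighted sums" point above can be made concrete. here's a toy sketch (invented numbers, not any real model): a single neural-network "neuron" is literally just a weighted sum of its inputs plus a nonlinearity, and an LLM is billions of these stacked together.

```python
def neuron(inputs, weights, bias):
    """A toy 'neuron': weighted sum of inputs, then a ReLU nonlinearity."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, s)  # ReLU: clamp negative values to zero

# 1.0*0.5 + 2.0*(-0.25) + 0.1 = 0.1, then ReLU leaves it unchanged
print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))
```

nothing in there is any more "sentient" than the arithmetic in a spreadsheet cell; scale is the only difference.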

@eclairwolf i keep thinking about this, because i'm surrounded by software people and professional computer-touchers, and we should know better. i think the problem is that we understand computers, not language.

language, and the process of actual minds using a language to communicate with each other, is more complex and more nuanced and more disproportionately impactful than people realize. when you combine "this simple thing is actually deeper than it's possible to understand without a post-graduate degree in the field" and "i can just think-hard my way to success in software so that'll probably work here", you end up with a bunch of Very Smart™ laypeople who think they're experts in "AI" because it runs on computers.

obviously linguists have been beating this drum for years, which kind of supports my point /cc @emilymbender

@eclairwolf @emilymbender @ello
yes! this is what frustrates me so much. long before LLMs, one linguistic theory held that people generate sentences one word at a time; it was conclusively shown to be false over a century ago.
yet this is exactly how LLMs work. this implies that no matter how impossibly large the model is, it can't simulate the behaviour of a human, since its modus operandi is definitionally not that of a human.
@eclairwolf @emilymbender @ello
and by the way, this absolutely fundamental design choice is exactly why you get the sort of emergent behaviours such as ChatGPT "losing its mind" when you ask it for a seahorse emoji. because ChatGPT generates responses one token at a time, and those tokens are generated in the order "Sure", "the", "emoji", "for", "seahorse", "is", followed by not a seahorse emoji token, because that doesn't exist.
a human wouldn't fall into that trap to begin with.
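the one-token-at-a-time trap described above can be sketched in a few lines. this is a toy greedy-decoding loop with a made-up transition table (hypothetical, not how any real model stores its weights): each token is committed before the next is chosen, so by the time the loop reaches the slot where a seahorse emoji should go, the sentence frame is already emitted and it falls back to the nearest existing token.

```python
# Hypothetical next-token table: maps the tokens emitted so far to the
# single most likely next token (greedy decoding, one token at a time).
NEXT = {
    (): "Sure",
    ("Sure",): "the",
    ("Sure", "the"): "emoji",
    ("Sure", "the", "emoji"): "for",
    ("Sure", "the", "emoji", "for"): "seahorse",
    ("Sure", "the", "emoji", "for", "seahorse"): "is",
    # No seahorse-emoji token exists, so the "model" commits to a
    # nearby-but-wrong token instead of revising the sentence.
    ("Sure", "the", "emoji", "for", "seahorse", "is"): "🐟",
}

def generate():
    out = []
    while tuple(out) in NEXT:      # keep appending until no continuation
        out.append(NEXT[tuple(out)])
    return out

print(" ".join(generate()))
```

a human planning the whole sentence would notice the missing emoji *before* writing "Sure, the emoji for seahorse is"; the loop structurally cannot.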

@ello I have never connected these dots but this makes perfect sense. I, a linguistics grad in a software dev job, am more sceptical of LLMs than the much more capable pure-CS developers I work with.

(yet another example of how the lack of humanities in the engineering curriculum is a bad thing)

@eclairwolf
Here in Denmark, wall outlets smile at you.
@leeloo they're so friend shaped i love them
@eclairwolf Wall outlets, no, but > 1kV and it does have a mind of its own.