thing that always gets me is seeing the computer scientists - who i know understand how it all works and should know better - yap about the possibility that the text prediction models they talk to could be sentient.
it's like an ee looking at a wall outlet and then going Holy Shit is that a little guy with hopes and dreams and a rich inner life????
for reals, i'm serious about the environmental risk thing. like, i'm almost 100% sure that in a year or two we'll be seeing a whole cottage industry of "ai detox" self-help books, and probably eventually some new form of therapy focused on regaining executive function & critical thinking skills
side note i'm calling it rn that the exact buzzword will be "ai detox"
@eclairwolf @ifixcoinops "reclaiming your autonomy!"
(that you gave away to a chatbox)
@eclairwolf I feel like this is just completely dispelling the illusion that "intelligence" is a thing that exists at all
It turns out, being good at one mental task (coding), even being EXTREMELY good at it, does not correlate at all with being good at others (recognizing whether a thing fucking sucks or not)
@eclairwolf This is something that genuinely and completely baffles me
If someone knows and understands how the technology works, and how it's a combination of weighted sums, algorithms, and training data... how can they begin to think it's a sentient being?
By that definition, are mathematical equations sentient...?
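to make the "weighted sums" point concrete: here's a toy sketch of next-token prediction. everything in it (the vocabulary, the embeddings, the weights) is made up for illustration - a real LLM learns billions of such numbers from training data - but the arithmetic is the same kind: weighted sums pushed through a softmax, nothing more.

```python
import math

# Toy "language model". All numbers are invented for illustration.
vocab = ["the", "cat", "sat", "mat"]

# A tiny made-up embedding vector per token.
embed = {
    "the": [1.0, 0.0],
    "cat": [0.0, 1.0],
    "sat": [0.5, 0.5],
    "mat": [0.2, 0.8],
}

# One made-up weight vector per candidate next token.
out_weights = {
    "the": [-1.0, 0.5],
    "cat": [2.0, -0.5],
    "sat": [-0.5, 2.0],
    "mat": [1.0, 0.0],
}

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_next(context_token):
    """Score each candidate as a weighted sum of the context embedding,
    then pick the most probable next token."""
    e = embed[context_token]
    scores = [sum(ei * wi for ei, wi in zip(e, out_weights[tok]))
              for tok in vocab]
    probs = softmax(scores)
    best = max(range(len(vocab)), key=lambda i: probs[i])
    return vocab[best], probs[best]

print(predict_next("the"))  # highest-weighted continuation: "cat"
print(predict_next("cat"))  # highest-weighted continuation: "sat"
```

that's the whole trick, just scaled up absurdly far. the question of whether a pile of multiply-adds is "sentient" is exactly the equations question above.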
@eclairwolf i keep thinking about this, because i'm surrounded by software people and professional computer-touchers, and we should know better. i think the problem is that we understand computers, not language.
language, and the process of actual minds using a language to communicate with each other, is more complex and more nuanced and more disproportionately impactful than people realize. when you combine "this simple thing is actually deeper than it's possible to understand without a post-graduate degree in the field" and "i can just think-hard my way to success in software so that'll probably work here", you end up with a bunch of Very Smart™ laypeople who think they're experts in "AI" because it runs on computers.
obviously linguists have been beating this drum for years, which kind of supports my point /cc @emilymbender
@ello I have never connected these dots but this makes perfect sense. I, a linguistics grad in software dev job, am more sceptical of LLMs than much more capable pure-CS developers that I work with.
(yet another example of how the lack of humanities in engineering curricula is a bad thing)