That study showing that developers think they save 20% of their time when using coding LLMs, while actually taking about 20% longer, is so funny.
Not only does the AI bullshit the developers—the developers also bullshit themselves.
@thomasfuchs I can relate to this sentiment, but the paper refutes the idea that "perceived ethicality" (or "perceived capability") explains the discrepancy.
I imagine just asking people how they think LLMs work. Those who answer 'it's a bullshit engine built off an empire's worth of stolen goods' would, by my standards, receive top marks.
@jedbrown I wonder if higher literacy rather correlates with “if I want something done properly I’ll do it myself, and I know how to do it”.
@jedbrown @thomasfuchs Yeah, I think it's really just: the more you know about AI, the more you know its present limitations. And you are less susceptible to the hype of future AI and more realistic about what to expect. Which isn't bad at all, even quite promising. But far from threatening the experience of skilled workers.
Maybe exceeding the management skills of certain CEOs.
@winterayars @jedbrown @thomasfuchs On the other hand, it was also accepted that robots weld cars instead of humans doing it. That didn't stop my union from becoming more powerful than ever since. Nor did it result in a smaller workforce.
Technology can also lead to more meaningful work and better working conditions. Maybe we should not let CEOs and libertarians decide what AI can be good for?
@urwumpe @jedbrown @thomasfuchs Machines saving on human labor is amazing.
Machines being used to cut everyone out of profit, condensing power and control of society into fewer and fewer hands, is not so great.
@jedbrown @winterayars @thomasfuchs Sorry, but I am not that religious. I don't believe in miracles, not for me, and not for the bad guys.
Can't you find a more grounded account of how AI can separate and discriminate against people if put into the wrong hands?
@urwumpe @jedbrown @winterayars You are committing a logical fallacy in presupposing that “AI” actually exists. It doesn’t.
LLMs transform tokens into other tokens, without knowledge, intelligence, insight, creativity, or even awareness of what they are doing.
They categorically cannot have broad applications.
@thomasfuchs @jedbrown @winterayars What do you think intelligence actually is, at the lowest level? Your brain is made of the same building blocks as the brain of a shark. But you would never say a shark is intelligent compared to a human, would you?
Maybe AI has just reached the intelligence of a parrot. Not impressive to a human.
But AI is not an intelligence bound to a body, not a product of nature. It can be unexpected, exotic, alien... but it's a kind of intelligence.
@urwumpe @thomasfuchs @winterayars No, that is a religious position you are taking.
And note that "stochastic parrots" was not referring to the animal, but the verb "to parrot" (as the authors have clarified personally).
[Attached: 2 images]
This appears in the Journal of Marketing, and so some of the writing is 🙃 dystopian, suggesting that an effective strategy is to intentionally promote misconceptions and keep the low-AI-literacy audience ignorant. I prefer to read that as a warning rather than an instruction manual. https://doi.org/10.1177/00222429251314491
@thomasfuchs I commend the restraint you showed when composing your post.
However, I suggest an edit as you seem to have missed out a couple of "fuckings".
I think that your last sentence could be amended in this way:
"...while feeding their fucking money into the greedy fucking throats of fucking billionaires."
@the_wub It's the last couple of sentences of their abstract in particular that incited my outburst of swearing:
"These findings suggest that companies may benefit from shifting their marketing efforts and product development toward consumers with lower AI literacy. In addition, efforts to demystify AI may inadvertently reduce its appeal."
In a surprising pattern, we noticed that people who understand a pattern are less likely to be surprised by it.
I believe that currently, most AI use is fueled by one of two things:
a) the feeling, among people who don't understand the complexity of the field they are trying to do something in, that they now can, and
b) the fear of missing out
The naivety of academic research is continually surprising.
@hugoestr @Tom_ofB @thomasfuchs
Not for me: I admire the magician's skills even more when I discover the trick.
AI is a giant, scummy, planet-damaging, money-sucking scam.
Of course the people who are not making money from it and who understand how it works are skeptical of, or horrified by, its intrusive growth and integration.
Present AI is definitely not Intelligence, more like Bullshit.