Ever since playing with ChatGPT, I've become sensitized to the way false rationality sounds ... there's a particular vibe to what is basically coherent nonsense. And now I've started to notice when people do it too. I get this crawly ChatGPT feeling when somebody is obviously making up an "authoritative" answer to a question they know nothing about. #AI #chatgpt #psychology
@annaleen They don't call ChatGPT "mansplaining as a service" for nothing.
@johl haha yes!

@annaleen @johl

I guess ChatGPT is, almost by definition, the thing that consistently Dunning-Krugers itself.

@johl

Ha! For quite a while now I've been calling AI models "Cognitive Bias as a Service", but for ChatGPT in particular "Mansplaining as a Service" is particularly apt.

@johl @annaleen
"word salad as a service"
@MrLee @johl @annaleen *merkle tree word salad as a service*, but too long for maas
@annaleen Isn't that about the same thing as mansplaining? Can't we just call these sentence-finisher AIs "Mansplainbots"?
@annaleen Oh yes. Perhaps these LLMs are more human after all than some give them credit for... *but not in that way*.
@annaleen This reminds me of when I worked electronics retail. I would generally try to be as honest as I could about specifications and capabilities, even if it meant using words like "maybe" and "probably". And I was astonished at how many people called me out on those terms and wanted me to speak in absolutes. Other sales associates would talk in absolutes, and your toot reminds me of the times I would overhear them talking to customers.
@fskornia @annaleen People seem so weirdly unwilling to accept uncertainty or limits to their knowledge.
@lispi314 @fskornia @annaleen
Well, isn't it for much the same reason OP is talking about, though? People will call you out if you communicate uncertainty to any degree, and will dismiss you even if you have real understanding. People who are literal experts on a subject often get dismissed because they don't sound certain enough, ironically as a consequence of their expertise.

@runefar @fskornia @annaleen Right, that is part of it yes, I didn't quite express my thoughts properly.

The unwillingness to accept such limits in others (on top of oneself) is weird.

I'll much sooner trust someone who admits they don't know the answer to a question (or aren't certain about it), than one who confidently bullshits and barely apologizes when called out.

I'll note that some cultures care enough about information certainty that their languages have specific grammar for it.

@fskornia @lispi314 @annaleen @runefar

It’s definitely a cultural and, dare I say, gender issue — and also about establishing power dynamics.

@fskornia @annaleen I get something like this sometimes - people practically demanding that I opine on something I don't really know well.

"I read the same news article you did. There's no reason to suppose I know more detail than you do. Why do you act like I'm hoarding information?"

@annaleen This is how I perceive many tech and business descriptions of their innovations — a string of buzzwords that sound impressive but say little.
@annaleen Honestly, if a generation of people learns to spot irrational arguments at 20 meters, that's not the worst thing to happen, especially in an age of rising fascism.
@dymaxion I was wondering about that ... at the very least, it might allow us to name that particular brand of bullshit/propaganda. "Oh it's a ChatGPT argument."
@annaleen I really think media literacy is kind of our only hope here. Copyright law may make a lot of these systems at least less accessible, but at the very least state-level propaganda operations will have access to them, so we're still stuck with needing to educate folks to defang what they spew.
@dymaxion Totally agree.

@dymaxion @annaleen

I’ve thought similar - but afraid it’s gonna be a painful path first.

My techno utopian wish is that in some future - it frees us up to actually focus on the emotional human side of connection - but I’m skeptical. Shiny object syndrome can dupe us for decades.

@dymaxion @annaleen

It would be nice if one generation or another would get it right. IMHO, there are three things people need to master:

1) People need to be willing to put the truth first. That means being willing to say of one's favorite public figures, "Wait a second, that was a load of crap". That means being willing to say of one's pet philosophies, "oh damn, that pretty much exposes the whole thing as a lie or at least severely compromised". People who are willing to do that are, unfortunately, rare; it requires putting truth first, and very few people do that. (If you're of any particular political stripe, you're probably thinking, "yeah those guys on the other side are terrible at that". They may well be; but a lot of people on your side are terrible at it too. Don't be one of those people or you are already damned.)

2) There are any number of guides to propaganda techniques online. Study them so you can recognize them in practice.

3) Logically rigorous arguments matter: statements need to be supported by facts that are applied in a non-fallacious manner. Fortunately, Mr. Spock is here to help, like here when he's explaining Bulverism to a couple of crewmembers:

https://www.youtube.com/watch?v=rGtIGA0c01s

STAR TREK Logical Thinking #37 - Bulverism (Identity Fallacy) (YouTube)

A history of FLICC: the 5 techniques of science denial (Skeptical Science)

In 2007, Mark Hoofnagle suggested on his Science Blog Denialism that denialists across a range of topics such as climate change, evolution, and HIV/AIDS all employed the same rhetorical tactics to sow confusion. The five general tactics were conspiracy, selectivity (cherry-picking), fake experts, impossible expectations (also known as moving goalposts), and general fallacies of logic.
@annaleen
Does this mean chatGPT will replace all the mansplainers?
@citykidPVD yep. all the mansplainers will be out of work tomorrow
@annaleen @citykidPVD look, you need to understand that <blah blah blah blah >
🤭
@annaleen it perfectly fits what Frankfurt (On Bullshit, 1986) described: "although it is produced without concern with the truth, it need not be false".
@tpoisot and on the flip side, Paul Linebarger argued back in the 1940s (Psychological Warfare) that propaganda always contains truth.
@annaleen @kissane AI is a mirror for humanity, and we’re not liking what we’re seeing
@Techronic9876 @annaleen @kissane
To be honest, this is a side that needs to be talked about more when people discuss different biases. An issue I have seen is activists rightfully calling out AI biases, but ironically their solution often doesn't fix a bias and instead inflicts a societal bias onto the AI — yet they see it as removing the bias. In some cases it is, but in others we are just recategorizing the data to fit a societal context, ironically giving the model that bias.

@annaleen Well, I am an expert certified by the American Association of Mastodon Repliers, and I think your supposed "crawly ChatGPT feeling" is actually a symptom of asphonia sunsoria, which is a dangerous medical condition as recognized by the Food and Water Administration.

Are you getting that feeling now?

@annaleen Excellent, you're learning! ^_^

This may seem tangential, but キャロル&チューズデイ「kyaroru & chūzudei」Carole & Tuesday, a sidequel to カウボーイビバップ「kaubōi bibappu」Cowboy Bebop, delves into AI generated pop music. Like all great SciFi, it touches upon deeper subjects!

It is maybe (certainly) a lot more challenging for most (if not all?) to discern: so few are musically literate, let alone cognizant of applying AI to a field which is not intrinsically linguistic. (*weeps* in Turing lore)

@annaleen
Anti-vaxxers, climate change deniers, flat-earthers, and Fox News have been doing that for at least … forever.
@annaleen this is why I think the best term for it (in any context) is just "bullshitting". Plausible seeming nonsense pronounced with confidence is just bullshit. It's bullshit when it's a guy who just read two Wikipedia articles and decided he's an expert on epidemiology or geopolitical strategy and it's bullshit when it's a LLM stringing together tokens probabilistically without any ability to conceptualize anything at all including the topic at hand or the idea of "truth"
@annaleen plus everyone understands bullshitting intuitively
@mrcompletely @annaleen The problem is, we seem to have some kind of evolutionary predisposition to believe our own bullshit.
@annaleen oh man, in college we used to call this “talking a line” — and I feel like now that you’ve said it, I will never be able to think about chatbots doing anything else
@alexismadrigal we called it "answer syndrome"!
@annaleen @alexismadrigal Had a conversation the other night about jobs that push false confidence as a virtue, and it got me wondering if chatbots will raise the social value of whatever is the opposite of this. People being very clear with themselves and the world about what they know and what they don't. Seemed almost too wishful to let myself think it, but I guess it can't hurt to dream.

@misc @annaleen @alexismadrigal "jobs that push false confidence as a virtue" are exactly the "real work" under threat from these LLMs

But the decision-makers of "whom to lay off/fire/fund" are holders of at least two classes of these jobs -- venture capital and many C-level executives

@alexismadrigal @annaleen yes. See also "blowing smoke," which is the polite form of "blowing smoke up your ass," which has a whole fascinating origin story all its own.
@annaleen That's what Instagram comments seem like; a vast wasteland of "authoritative" sounding statements.

@annaleen I follow a lot of teachers on TikTok, and some fret about "how will we ever teach kids to write if they can use ChatGPT?", but the better ones (IMO) are saying, "We've got to figure out how to use these chat apps to teach kids critical thinking."

Wouldn't it be great if this technology finally was the forcing function to focus on critical thinking in high school?

@khenniss I really like this utopian vision

@khenniss @annaleen

Me too! I think the youngins have good bullshit detectors - it’s those out of school I worry about [in the short term]

@debs @annaleen

In my experience, today's high schoolers def have plenty of healthy skepticism, but I'm not sure that naturally maps to critical thinking, which I think still benefits from guidance and exercise in the beginning.

FWIW, I've heard that critical media education is a standard part of the curriculum in Germany. That would be a good place to start, I think. Critical media literacy.

And yes, the folks who never got critical thinking in school, and are now out ruining everything in righteous ignorance are definitely a major concern.

@annaleen @khenniss

Super good point that skepticism does NOT equal critical thinking

@khenniss @annaleen I read about a teacher who generated essays using ChatGPT and had the students fact-check and criticize them.

@edyoung @annaleen Yeah, I saw another teacher who was considering an assignment where kids could (1) do a critical analysis of a ChatGPT essay on Hamlet, or (2) improve the ChatGPT essay on Hamlet.

Makes all manner of sense to me.

@khenniss @annaleen The fear of course is that the deficit of critical thinking education means our society is a naive host in the face of a novel memetic pathogen.
@annaleen who knew that a chatbot might help finely tune the human bullshit sensory system.
@annaleen
This comment about ChatGPT sounds a little like playing Balderdash!
@annaleen Rather a useful skill, if nothing else. 🙄
@annaleen I know these IRL AIs you speak of. ;)