Stop saying “artificial intelligence”. (And “neural networks” too.)

Be more specific. Say “reinforcement learning”. Say “generative modelling”. Say “Bayesian filtering”. Say “statistical prediction”.

These are incredibly useful tools that have nothing to do with “intelligence”.

And say “model trained on plagiarised data”.

Say “bullshit generator”.

Say “internet regurgitator”.

These also have nothing to do with intelligence, but they have the added bonus of being useless, too.

Some banging euphemisms for LLMs in the comments.

I am very partial to the original, "stochastic parrots", by Gebru, Mitchell et al.

This is fun, but I’m tired and I do not want to wake up to a billion notifications.

Muting this thread. Enjoy, peeps.

@samir incidentally, there was one thing I did not get until I saw a talk by, I think, Prof. Bender, who mentioned that the "parrots" refer not to the text generators but to the generated pieces of text themselves.

So, "stochastic [pieces of text]", not "stochastic [text generators]".

And yes, I am a lot of fun at parties! 🦜

@rysiek I think I missed this nuance. If you find the talk, would you mind sending it to me?

@samir
I'm gonna vote for one of these... or both

```
“bullshit generator”.
“internet regurgitator”.
```

@samir I'm a fan of "Confabulation Engine"
@samir how about Fabricated Learning? Military terms like HUMINT all use hybrid terminology, but the point here is that it needs to be recognized as machine-made.

@samir

“bullshit generator”

this is the truest statement I have ever seen in my entire life

@samir @himay
It's frustrating, because labelling all this stuff as "AI" just lumps these incredibly wasteful grifts like ChatGPT in with useful machine learning algorithms that can be efficient and quite good at their specific tasks.
@TheGreatLlama @samir @himay
I see "#AI" as existing in 2 categories: General Public & Specialised.
The 1st has no guarantees of quality or security & is fine for e.g. translating your Thai mother-in-law's Happy Anniversary message. The tool is in effect the master.
The 2nd is a specialised tool used by an expert in a particular field who fully understands its (quality/technical/security) limitations as well as its capacities and potential. E.g. reading X-rays. The expert here is the master.

@Quantillion @samir @himay
Well, living in the US, I find your hypothetical example in the second category a bit frightening because I can easily see our healthcare system dispensing with the expert and treating the tool as infallible. But yes, when treated properly by people who understand its limitations, that's what I consider the useful stuff.

The problem is that AI is nothing but a marketing buzzword, that's why the definitions are uselessly vague. On any given day, it means whatever the marketers choose to hang upon it.

@samir like this, also stop saying “it hallucinated” which implies it’s having a bad day and is perfectly capable and just say “it’s wrong” or “it can’t do that”
@samir I'd also accept "spicy autocomplete"
@samir
I really like "wrong answer machine."

@samir

"synthetic text extruders"

"mansplaining as a service"

@samir I think this is a good idea. Take "Bayesian filtering", for example: it's really useful to people who work in the ML field (or whatever the more correct term is) because it tells them how it works, but it means nothing to non-technical people, so it won't give them the wrong idea the way "artificial intelligence" does.
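"Bayesian filtering" really is that concrete: a spam filter is just word counts and Bayes' rule. A minimal naive Bayes sketch (the tiny training set and word-level model below are purely illustrative assumptions):

```python
# Toy naive Bayes spam filter: counts words per class, then scores a
# message by log prior + log likelihood with Laplace smoothing.
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) with label 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        score = math.log(totals[label] / sum(totals.values()))  # log prior
        n = sum(counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

data = [
    ("win free money now", "spam"),
    ("free prize click now", "spam"),
    ("lunch meeting tomorrow", "ham"),
    ("project notes attached", "ham"),
]
counts, totals = train(data)
print(classify("free money prize", counts, totals))  # → spam
```

No "intelligence" anywhere: the name tells you exactly what the tool does and what its limits are.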

@samir Despite what appears (at least from where I'm standing!) to be a broad expert consensus that LLMs are horribly unreliable, I'm still seeing people point to their selective usefulness to those who understand their limitations.

Problem of course is that the vast majority of users *don't* understand, and are seduced by the illusion of competence that these generative models present. So the dangers *vastly* outweigh the use cases.

Good to see the EU heavily regulating "AI" across Europe.

@samir We need these terms to catch on so the people investing in companies that slap "AI" on everything get confused and bored and move on to the next shiny new thing.
@samir "algorithmic pseudo-intelligence" is what we have been using in texts
@samir I don't see plagiarism. It's never a 1:1 copy; if it were, any book you read would be plagiarism. In science you should make a proper citation, so I think that's the problem: a lack of transparency about what went into the model. I wouldn't call crawling the internet plagiarism.

@samir

Attacking the source is better than renaming the source.

Everybody knows what it IS.

Everybody knows what it's CALLED.

Attack what it's called. Then the negative and positive euphemisms get evaluated and/or rejected on their face—based on their common perception.

If you think a negative euphemism will affect anybody here, then you are living in your own specific bubble.

@_chris_real This is silly, IMO. If I attack “artificial intelligence”, I am also attacking spam filtering, protein folding breakthroughs, facial recognition, and pretty much any modern statistical research.

I would like to attack the bullshit machines, thanks.

@samir and please stop imputing agency to the tools. They don’t do anything. People use them to do things.
@samir Data "Mathsticators" (like chewing)
Let's do this, not least because there are some tools I *want* to work with and there are some tools I want to avoid like the plague, and they are currently ALL being called "AI". It's to the point where I've dropped the phrase off my CV, even though some of my programming skills technically count, just so I stop getting ads for bullshit-generator jobs.
@samir instead of artificial intelligence, say machine learning.
@balasubramanium Better, but I would prefer these things to be broken down into their use cases, as they have different merits. (And you don’t need to burn the earth for many of them.)
@samir I call it Mansplaining as a Service or the Plagiarizing Machine.
@samir You forgot linear algebra execution engine.
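The "linear algebra execution engine" quip holds up: a "neural network" layer is a matrix multiply plus a simple nonlinearity. A minimal sketch (the sizes and random weights here are arbitrary assumptions, not any real model):

```python
# Two-layer "neural network" forward pass: nothing but matrix
# multiplications and an elementwise max. No intelligence included.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))    # input vector
W1 = rng.normal(size=(4, 8))   # layer 1 weights
W2 = rng.normal(size=(8, 2))   # layer 2 weights

h = np.maximum(0, x @ W1)      # linear algebra + ReLU
y = h @ W2                     # more linear algebra
print(y.shape)  # → (1, 2)
```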
@samir So I have to say "Generative Modelling" then.

@samir So **TRUE**! I get so angry when someone just comes up to me and says they're using AI to do something.
Or asks, "Can we implement AI to achieve some basic tasks such as filtering and checking parameters?"

**ENOUGH!**

@samir
I like "bullshit generator"; we should popularize "BG" to replace "AI"
@samir How about Bias Automation Machine?
@samir what if someone made "bullshit generator" but only from cc0 data?
@hacknorris Then it would be a slightly more ethical bullshit generator, I guess.

@samir Yep. AI is, like fusion power, 5 years away. It might be 5 years away for the next 100 or 200 years or just 5 years. We don't know, it will take a breakthrough, and breakthroughs cannot be planned or predicted.

What's happening here is that a bunch of giant tech firms are desperate to find the next big thing and they are terrified of missing out on AI, if it happens to be the next big thing. I predict that AI will be like Web3, only there will be more money and jobs lost.

@samir

+1 for bullshit generator!

@samir We had that long before. Say "element", not "tag" :-)
@samir -- Internet Theft Machine?
@samir I'm partial to "glorified autocorrect".
@samir I just wanna share how I "convinced" ChatGPT about weed physics and that the derivative of 69 is 420 and vice versa.

@samir

Same with crypto. Say "highly volatile digital currency".

@samir Applied Functional Approximation. Instead of neural networks or deep learning.
@samir MoPoShiRe : Most Probable Shit Regurgitator
@pmartin @samir Like I said, AI is the Über-Statistician.

@samir I agree - "AI" has quickly become a gross, enshittified word.

I feel the same way about people blaming "the algorithm". It makes it sound like some inhuman force without thoughts or responsibilities. Say "facebook" sucks, say "mark zuckerberg" sucks, not "the algorithm" sucks.

@samir it is an algorithm, a statistics nerd at the highest level, the Über-Statistician. It is certainly not intelligent!
@samir I occasionally throw around the phrase “artificial stupidity” not necessarily because of the quality of the slop that is output by text generators, but because the way LLMs “summarise” a piece of text is remarkably similar to how human memory seems to work.
@samir Human brains aren’t exactly equipped to store memories like files on a computer. They are, however, excellent at pattern recognition and storing the most important information to you. When you recall a specific memory, your brain is reconstructing it based on that important information and filling in the rest of the details with familiar patterns—or, in AI terms, “hallucinations”. Humans actually have terrible memories—and now we’ve made machines replicate that, too.
@samir Your alternative names don't seem to cover the "#AI" that runs the nations that oppose me in #empire-building #games such as #Freeciv!
@samir I call it degenerative AI.

@samir

I like to call them 'confabulation engines' and 'lying machines'.

@samir let’s call it what it really is: mass data mining in real time. I think Salesforce called it their job. It isn’t smart. It doesn’t know all. The information is fed into the same statistical models that have existed since Moby was a minnow. As such, it doesn’t need $600B of our tax dollars to do what a few creative minds did in a garage in China for $6M. Oh, you poor tech bros! Have we exposed the biggest grift since Y2K?