Stop saying “artificial intelligence”. (And “neural networks” too.)

Be more specific. Say “reinforcement learning”. Say “generative modelling”. Say “Bayesian filtering”. Say “statistical prediction”.

These are incredibly useful tools that have nothing to do with “intelligence”.

And say “model trained on plagiarised data”.

Say “bullshit generator”.

Say “internet regurgitator”.

These also have nothing to do with intelligence, but they have the added bonus of being useless, too.

@samir @himay
It's frustrating, because labelling all this stuff as "AI" just lumps incredibly wasteful grifts like ChatGPT in with useful machine learning algorithms that can be efficient and quite good at their specific tasks.
@TheGreatLlama @samir @himay
I see "#AI" as existing in 2 categories: General Public & Specialised.
The 1st has no guarantees of quality or security & is fine for, e.g., translating your Thai mother-in-law's Happy Anniversary message. The tool is in effect the master.
The 2nd is a specialised tool used by an expert in a particular field who fully understands its (quality/technical/security) limitations as well as its capacities and potential. E.g. reading X-rays. The expert here is the master.

@Quantillion @samir @himay
Well, living in the US, I find your hypothetical example in the second category a bit frightening because I can easily see our healthcare system dispensing with the expert and treating the tool as infallible. But yes, when treated properly by people who understand its limitations, that's what I consider the useful stuff.

The problem is that AI is nothing but a marketing buzzword, which is why the definitions are uselessly vague. On any given day, it means whatever the marketers choose to hang upon it.