It’s hard not to say “AI” when everybody else does too, but technically calling it AI is buying into the marketing. There is no intelligence there, and it’s not going to become sentient. It’s just statistics, and the danger these systems pose comes primarily from the false sense of skill or fitness for purpose that people ascribe to them.
@Gargron agreed. It's just LLMs, the professional nonsense generators
UPDATED: Let’s forget the term AI. Let’s call them Systematic Approaches to Learning Algorithms and Machine Inferences (SALAMI). – Quinta’s weblog

@Gargron
Nobody says #ArtificialSmartness.

Just sayin’

@paninid @Gargron How about #ArtificialParrot ? Actual parrots are probably smarter. #ArtificialRegurgitator ?
@paninid @Gargron Research goal:
Develop a General Artificial Regurgitator (Global Linguistic Engine) #GARGLE .
The test for success would be pitting the model against Boris Johnson; an interrogator should not be able to distinguish the two. One benefit of this goal is that training would be heavily skewed towards the classics, so mostly out of copyright.
@Gargron funny, I had this exact same conversation in my head yesterday about something in a paper I am writing. I decided to define the method as ML throughout, and I will most def argue to keep it as such even if any reviewer/editor suggests otherwise 😌
@Elisa @Gargron it's possible to be more specific or "technically correct", with terms like LLM or neural net, depending on the context of course - I agree though! often I use "AI" in conversations for the sole purpose of putting on a heavily sarcastic tone, pausing, and padding the word with heavy air quotations. works wonders for my mental health 😏
@Elisa I suddenly thought it sounded like I was trying to "one up" you, so I want to say I like what you're doing, and just wanted to add my thought 😊

@Gargron

Maybe it should be consistently written as "AI" instead of as AI.

@sibrosan @Gargron Or maybe A"I"?
It is definitely artificial. Just not intelligent.
@sibrosan @Gargron yes, several people do this already, I often opt to myself if giving presentations
@sibrosan @Gargron I like "AS" for artificial simulator. I like "AP" for artificial parrot even better.
@Gargron well said. #AI is high throughput data integration, processing and repackaging.

@gpollara @Gargron

But, at some point, doesn't "high throughput data integration, processing, and repackaging" become indistinguishable from "conventional intelligence" though?

@shrikant @gpollara @Gargron Until such time as we figure out how the brain works, no. I don't think it is possible to do it using boolean logic.
@shrikant @gpollara @Gargron That statement is hard to prove or disprove, since we don’t have a solid model of how the human mind works. And since we don’t, we often end up in the non-sequitur that this thing we don’t understand (the mind) must be like the thing we do understand (statistics, AI).

@shrikant
No, conventional intelligence is most certainly not "high throughput".

@gpollara @Gargron

@Gargron

It's another round of Wall Street hype by anti-democracy billionaires, no different from NFTs or cryptocurrency - just another scam.

@Npars01 @Gargron I disagree. NFTs aren't useful, and cryptocurrencies aren't very useful (although they do have their use cases). ML is very useful. It is abused, extremely abused for that matter, but it also helps solve a lot of problems, which makes ML in and of itself not a scam.
Sure, the ways in which it is used can be scams, but the concept at its core is far from it.

@arh
I agree, but would even go a step further.

Is it "machine learning" or "machine assisted/aided learning"? Who is the learning entity?

Similar to "computer aided design". The real designer is the human in front of the computer, not the computer itself.

@Npars01 @Gargron

@arh @Npars01 @Gargron ML is not a scam, this is true.

General-purpose chat bots that fool users into thinking they can provide genuinely useful information and creative output in response to queries are a scam. Same for the image generators ripping off hard-working artists.

@Gargron and sadly nobody will become more intelligent by using it
@Squirlykat @Gargron Only ask the question: would I give the same answers? ...and you prove to be more intelligent.
@Gargron It's another huge tech scam, like crypto...
@Gargron Then we can agree that, at this stage, "AI safety" is just another marketing trick.
@dimillian @Gargron AI can be enormously problematic without it actually being intelligent
@finestructure @Gargron problematic? For now it's mostly helpful.
@dimillian You should maybe read up on how AI is being used for spam, misinformation, propaganda, impersonation, to name a few

@Gargron As usual, rms is telling the truth and nobody listens ;-)

"I can't foretell the future, but it is important to realize that ChatGPT is not artificial intelligence. It has no intelligence; it doesn't know anything and doesn't understand anything. It plays games with words to make plausible-sounding English text, but any statements made in it are liable to be false. It can't avoid that because it doesn't know what the words _mean_."

@ilgaz @Gargron Unless you are a master in etymology, you - as most humans - have no safe idea of what a word _means_, and I guess that any 'AI' with a good etymology database can beat you at that game :)

@ilgaz @Gargron wow, that sounds surprisingly similar to the average human. You only have to look at recent election results in just about every western democratic nation to see proof of that.

It’s so reductive to talk about AI as snake oil but at the same time attribute intelligence to human beings who are generally showing nothing of the sort 😂

@daan @ilgaz @Gargron Whenever someone tells me “this or that is not _real_ intelligence”, I ask what intelligence is.

So far, in 100% of those replies, the definition has excluded large portions of humankind.

@Gargron
The deceit doesn't start there, but in the A. None of this is any more artificial than most of what surrounds us, certainly all of our software. It's automation. It's also, in most cases, inference. So Automated Inference.

The questions are what is being automated, who stands to benefit, who is at risk, and what guardrails are around it.

@Gargron that, the climate impact and the underpaid annotators are the three points I always try to fit in when I talk to people about LLMs and their chatbot interfaces.

@Gargron
I have been calling it an algorithmic tool. I like @emilymbender's suggestion to reference it as automation.

#AI

@kegill @Gargron @emilymbender most automation is useful, designed to be productive, e.g. combine harvesters or kitchen mixers.
LLMs are not designed to be productive; they just produce plausible-sounding text. Calling them automation is a disservice to actual automation.

@sleepyfox

Automation has always displaced labor.

LLM tools that help ESL students identify grammatical hiccups in their papers are *useful*.

LLM tools that help me brainstorm a “how to” document for undergraduates are *useful* especially when they remind me of things I’d forgotten to include. (Too close to the material: remembering what you used to not know is hard.)

In a sweeping generalization, almost all tools have Dr Jekyll/Mr Hyde characteristics.

@Gargron @emilymbender

@Gargron this is the same problem we had when expert systems were called "AI" https://en.m.wikipedia.org/wiki/Expert_system

I guess the temptation to think a problem is solved is too high.

At least we're consistent in calling rubbish systems "intelligent" 😂


@Gargron
Isn't that exactly why it's called *artificial*?
@Gargron Reminds me how it is fashionable to say AI subtitling, AI voice etc. I am old enough to remember when these things were called speech recognition, text-to-speech etc.

@Gargron @darylgibson

A college professor of mine back in 1983 said "'AI' is what we call software we don't know how to write yet." I think this neatly captures the problem we have talking about current "AI". In 2000, nobody knew how to write software that would drive cars, write poetry, play grandmaster-level chess, or summarize text, so those were considered to be examples of what AI might accomplish. Now we know how to write systems that do those things, so they are no longer AI.

@isomeme @Gargron @darylgibson I agree with most of what you say, but in 2000 we knew how to write software that could play grandmaster-level chess and summarise text. And now we still don’t know how to write software that drives cars or write poetry.

@ahltorp @isomeme @Gargron @darylgibson well, not *good* poetry, anyway. 😉

I weep for humanity that so many people have been impressed with the level of “art” these LLMs and generative art (pixel plagiarism) machines spit out. This is what happens when we fail to properly teach the humanities in school.

@KydiaMusic @ahltorp @Gargron @darylgibson

AIs aren't producing great art (yet), but they're easily outperforming the average human. I've seen a few AI-generated works that were quite compelling. As one of my favorite proverbs puts it, the amazing thing about a dancing bear is not how *well* it dances, but that it dances at all.

@isomeme @ahltorp @Gargron @darylgibson true, but great art is partly defined by the fact that the average person *can’t* do it. Innovation and originality are often other factors that elevate art to greatness. And of course, meaning, motivation, and inspiration, which require sentience—and the ability to move others to feel something, which requires empathy for both the creator and the receiver.
There are lots of things the average person can’t do as well as a machine, like math calculations.

@KydiaMusic @ahltorp @Gargron @darylgibson

Absolutely. But the number of capabilities that are unique to humans will continue to decrease as AI technology advances. What happens when an AI can write a poem that reduces you to tears with its emotional punch? Pinning our claim to sentience on what computers can't do runs into the same problem as the "God of the Gaps" approach in theology.