I find it amazing how unwilling people are to accept that current "#AI" tech (including the "LLM" tech that I call #MOLE Training) is not intelligent, by any meaningful definition of the word. The nonsense arguments they use to wriggle out of this conclusion are nothing if not creative.

Most common is the consensus reality wriggle: most people talk as if they think Trained MOLEs are intelligent, therefore they are. So if most people think perpetual motion is possible, then it is? Nope.

(1/2)

Another wriggle I strike frequently is that although the current MOLE Training tech can't produce intelligence yet, future versions will. Which is equivalent to saying that although current fusion energy tech can't produce cold fusion, future versions obviously can. Nope.

The existence of one body of tech that didn't use to exist, with hard limitations, doesn't prove that tech without those limitations will automatically come into existence in the future.

So much magical thinking.

(2/2)

@strypey

What is intelligence?

Is my dog intelligent? A child? A baby? A tree?

I can hold a conversation with an LLM, I can correct it, and it corrects me

Close enough?

@worik
> I can hold a conversation with an LLM, I can correct it, and it corrects me

You are anthropomorphising. Like saying a windscreen wiper is clearing the windscreen *for you*.

The algorithm is generating statistical guesses based on your inputs, like a spell checker's autocorrect. There is no conversation. It's not "correcting" you. These terms imply decisions it doesn't make. See:

https://betterwithout.ai/mind-like-AI
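To make the "statistical guesses" point concrete, here is a toy sketch of statistical next-word prediction. This is a simple bigram Markov chain, not how a transformer actually works (transformers use learned vector representations and attention, not raw counts), but it illustrates the underlying idea: the next word is chosen by frequency statistics, with no understanding involved. The corpus and function names are invented for illustration.

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count, for each word, how often each next word follows it."""
    words = text.split()
    counts = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def most_likely_next(counts, word):
    """Return the statistically most frequent follower of `word`, or None."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Tiny toy corpus; real models are trained on billions of words.
corpus = "the cat sat on the mat the cat ate the cat food"
model = train_bigrams(corpus)
print(most_likely_next(model, "the"))  # "cat" — it follows "the" most often
```

The program "predicts" that "cat" follows "the" purely because that pairing is most frequent in the training text; scale the same principle up enormously and you get fluent-sounding output with no comprehension behind it.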

> Close enough?

Not even on the same planet, let alone in the same ballpark.

@strypey that is a bad article

We have very little idea of how human intelligence (and what about my dog's?) works, so the entire premise is false.

For seventy years there has been a standard, and a test, established by Turing (which is why his biographical film is called The Imitation Game).

Now that machines are closing in on passing that test, you are moving the goalposts.

Unfair

Whatever, LLMs (particularly transformers) are the greatest computing advance I have ever seen.

@worik You clearly really *want* machines that think to exist. I'm sorry the inherent limitations of the current language autocomplete algorithms disappoint you so much. But they are what they are.

I asked pi.ai, "are you intelligent?" Its output was:

"I'm not intelligent in the same way that humans are - I don't have consciousness or emotions, and I can't learn in the same way that humans do. I'm just a computer program designed to simulate intelligence!"

@strypey

Look up the Turing test

Imitation is the name of the game

It is not what I want, it is what I do, every day. My reality.

Yes. I interact with a machine that simulates intelligence

Look up what the definition of AI has been since 1950. That is what it is

@worik
> Look up the Turing test

I learned about the Turing Test in the 1980s. I note the irony of referencing a test from 1950 as your gold standard definition of AI, while claiming that a paper published in 2020 is "out of date". This is called cherry-picking. It's not a scientific approach.

> I interact with a machine that simulates intelligence

In other words, not intelligent, just simulating it.

@strypey the Turing Test has been the gold standard. It is not out of date.

Turing's insight was that we do not know what intelligence is, but we know it when we see it. He called it the Imitation Game. So when you say "just imitating it", that is the very definition of AI.

The goalposts were moved by jealous scientists who were gobsmacked that playing statistical games with words got us so far

I too was surprised, but I recognise the achievement.

(1/3)

@worik
> So when you say "just imitating it", that is the very definition of AI

You didn't even start reading the article you declared out of date, did you? The very first section lays out a brief history of the term, and the various kinds: strong AI, weak AI, AGI, ANI, etc. I suggest you read it:

https://doi.org/10.1057/s41599-020-0494-4

(2/3)

"... when it is argued that computers are able to duplicate a human activity, it often turns out that the claim presuppose an account of that activity that is seriously simplified and distorted. To put it simply: The overestimation of technology is closely connected with the underestimation of humans."

#RagnarFjelland, 2020

https://doi.org/10.1057/s41599-020-0494-4

#AI hype is like blockchain hype. It's not as useless as critics think, but way less transformative than boosters think.

(3/3)

"... the belief that AGI can be realized is harmful. If the power of technology is overestimated and human skills are underestimated, the result will in many cases be that we replace something that works well with something that is inferior…"

#RagnarFjelland, 2020

https://doi.org/10.1057/s41599-020-0494-4

This is what's happening, e.g. governments thinking that replacing human judges with Trained MOLEs lets them cut costs *and* get more "rational" judgments. It does neither.

#MOLE #AI #AGI