I find it amazing how unwilling people are to accept that current "#AI" tech (including the "LLM" tech that I call #MOLE Training) is not intelligent, by any meaningful definition of the word. The nonsense arguments they use to wriggle out of this conclusion are nothing if not creative.

Most common is the consensus-reality wriggle: most people talk as if they think Trained MOLEs are intelligent, therefore they are. So if most people think perpetual motion is possible, then it is? Nope.

(1/?)

Another wriggle I strike frequently is that although the current MOLE Training tech can't produce intelligence yet, future versions will. Which is equivalent to saying that although current fusion energy tech can't produce cold fusion, future versions obviously can. Nope.

The existence of one body of tech that didn't used to exist, with hard limitations, doesn't prove that tech without those limitations will automatically come into existence in the future.

So much magical thinking.

(2/2)

@strypey

What is intelligence?

Is my dog intelligent? A child? A baby? A tree?

I can hold a conversation with an LLM, I can correct it, and it corrects me

Close enough?

@worik
> I can hold a conversation with an LLM, I can correct it, and it corrects me

You are anthropomorphising. Like saying a windscreen wiper is clearing the windscreen *for you*.

The algorithm is generating statistical guesses based on your inputs. Like a spell checker's autocorrect. There is no conversation. It's not "correcting" you. These terms imply decisions it doesn't make. See:

https://betterwithout.ai/mind-like-AI
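To illustrate what I mean by "statistical guesses": here's a toy bigram model, a deliberately simplified sketch (real LLMs use transformers over vast corpora, not word-pair counts, but the guessing is the same in kind):

```python
from collections import defaultdict

# Toy bigram model: count which word follows which in a tiny corpus,
# then emit the statistically likeliest continuation. No understanding,
# no decisions -- just conditional word frequencies.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def guess_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    candidates = follows.get(word)
    if not candidates:
        return None
    return max(set(candidates), key=candidates.count)

print(guess_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

It will happily "answer" any prompt made of words it has seen, which looks responsive but is just frequency lookup.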

> Close enough?

Not even on the same planet, let alone in the same ballpark.


@strypey that is a bad article

We have very little idea of how human intelligence works (and what about my dog?), so the entire premise is false.

For seventy years there has been a standard, and a test, established by Turing (which is why his biographical film is The Imitation Game)

Now that machines are closing in on passing that test, you are moving the goalposts.

Unfair

Whatever, LLMs (particularly transformers) are the greatest computer advance I have ever seen

@worik You clearly really *want* machines that think to exist. I'm sorry the inherent limitations of the current language autocomplete algorithms disappoint you so much. But they are what they are.

I asked pi.ai, "are you intelligent?" Its output was:

"I'm not intelligent in the same way that humans are - I don't have consciousness or emotions, and I can't learn in the same way that humans do. I'm just a computer program designed to simulate intelligence!"

@strypey

Look up the Turing test

Imitation is the name of the game

It is not what I want, it is what I do, every day. My reality

Yes. I interact with a machine that simulates intelligence

Look up what the definition of AI has been since 1950. That is what it is

@worik
> Look up the Turing test

I learned about the Turing Test in the 1980s. I note the irony of referencing a test from 1950 as your gold standard definition of AI, while claiming that a paper published in 2020 is "out of date". This is called cherry-picking. It's not a scientific approach.

> I interact with a machine that simulates intelligence

In other words, not intelligent, just simulating it.

@strypey But the most important point, that I am trying to make you see, is independent of definitions of "AI".

The important point is the world has changed.

If widespread access to the Internet was a Gutenberg moment (agree?),
this is a "James Watt" moment.

It is no exaggeration to say that LLMs are as revolutionary as the steam engine.

In that analogy the Transformer architecture is like the reducing valve, the final piece.

Do you deny its significance?

(1/2)

@worik
> Do you deny its significance?

Do I deny the hypothetical significance of functional AI being widely available? No. Do I deny that the current parlour tricks are significant? Yes, I do.

If projects like DeepSeek or #GhostX result in an AI that can be compiled and run on consumer-grade hardware, with no proprietary dependencies, that will be significant indeed. I'm open to the possibility, but I'm not holding my breath.

(2/2)

For now, the appearance of practical AI is based on systems controlled by DataFarming corporations, and totally dependent on their excessive pyramid-building of hyperscale datacentres:

https://techwontsave.us/episode/241_data_vampires_going_hyperscale_episode_1

They're mechanical turks, with massive remote compute hidden in the machine instead of little people. They're toys, and we should not be building anything else in ways that depend on them.

@strypey Do you think that computers that can parse natural language and respond with natural language is not a huge leap in technology?

@worik
> Do you think that computers that can parse natural language and respond with natural language is not a huge leap in technology?

If I say yes, ok, will you acknowledge that this is a *much* weaker claim than the one you came into the thread with?

https://mastodon.social/@worik/114278739583008280

Which I read as saying that this ability to statistically analyze human language, and ...

"... cough up the highest probability answer-shaped object ..."

... is proof of intelligence.