I find it amazing how unwilling people are to accept that current "#AI" tech (including the "LLM" tech that I call #MOLE Training) is not intelligent, by any meaningful definition of the word. The nonsense arguments they use to wriggle out of this conclusion are nothing if not creative.

Most common is the consensus reality wriggle: most people talk as if they think Trained MOLEs are intelligent, therefore they are. So if most people think perpetual motion machines are possible, then they are? Nope.

(1/?)

Another wriggle I strike frequently is that although the current MOLE Training tech can't produce intelligence yet, future versions will. Which is equivalent to saying that although current fusion energy tech can't produce cold fusion, future versions obviously can. Nope.

The existence of one body of tech that didn't used to exist, with hard limitations, doesn't prove that tech without those limitations will automatically come into existence in the future.

So much magical thinking.

(2/2)

@strypey

What is intelligence?

Is my dog intelligent? A child? A baby? A tree?

I can hold a conversation with an LLM, I can correct it, and it corrects me

Close enough?

@worik
> I can hold a conversation with an LLM, I can correct it, and it corrects me

You are anthropomorphising. Like saying a windscreen wiper is clearing the windscreen *for you*.

The algorithm is generating statistical guesses based on your inputs. Like a spell checker autocorrect. There is no conversation. It's not "correcting" you. These terms imply decisions it doesn't make. See;

https://betterwithout.ai/mind-like-AI

> Close enough?

Not even on the same planet, let alone in the same ballpark.

Mind-like AI | Better without AI

Better without AI

@strypey that is a bad article

We have very little idea of how human intelligence works (and what about my dog?), so the entire premise is false.

For seventy years there has been a standard, and a test, established by Turing (which is why his biographical film is called The Imitation Game).

Now that machines are closing in on passing that test, you are moving the goalposts.

Unfair

Whatever. LLMs (particularly transformers) are the greatest computing advance I have ever seen

@worik You clearly really *want* machines that think to exist. I'm sorry the inherent limitations of the current language autocomplete algorithms disappoint you so much. But they are what they are.

I asked pi.ai, "are you intelligent?" Its output was;

"I'm not intelligent in the same way that humans are - I don't have consciousness or emotions, and I can't learn in the same way that humans do. I'm just a computer program designed to simulate intelligence!"

@worik
See also this 2020 paper by Ragnar Fjelland, published on Nature.com, entitled 'Why general artificial intelligence will not be realized';

https://doi.org/10.1057/s41599-020-0494-4

No significant changes have happened in any area of AI tech since then. Just more compute added to existing MOLE Training methods.

@strypey

That was before transformers became known

It is out of date

My point is, and you refuse to listen, that semantic hair-splitting aside, I now have a machine I can ask questions of, in natural English; it gets my meaning, or I can correct it, and I get the information I want

This is a huge deal. It will change history. It happened in 2022

The scammers are hard at work using AI to bolster scams, but that does not detract from the fact that we are witnessing an industrial revolution

@worik
> That was before transformers became known

Which ones?

> aside I now have a machine I can ask questions of, in natural English, it gets my meaning or I can correct it, and get the information I want

This is not new. Search engines were doing this long before the ChatGPT hype train got started. What I'm disputing is whether elaborate autocomplete is a form of *intelligence*. You can heavily dilute the meaning of "intelligence" to include it, but I don't see any value in doing that.

@strypey

>> That was before transformers became known

> Which ones?

Transformers are the scientific breakthrough that made these LLMs (and other useful neural networks) possible.

I have not studied them. The main paper, which I have been too busy to read, is called "Attention Is All You Need"

@worik
> Attention Is All You Need

This one?

https://doi.org/10.48550/arXiv.1706.03762

I've read the abstract. It describes a better way to do MOLE Training, and has little or nothing to do with the discussion we're having. Which is about whether MOLE Training creates "intelligence", and how much we have to shave off an everyday understanding of what "intelligence" means to answer "yes" to that question.

A Trained MOLE is about as intelligent as a parrot that can say "Polly want a cracker".

Attention Is All You Need

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.

arXiv.org
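For what it's worth, the core mechanism that abstract describes, scaled dot-product attention, is compact enough to sketch in a few lines. This is a minimal illustration using NumPy, not the full Transformer (no multi-head projections, masking, or feed-forward layers):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal sketch of the mechanism from 'Attention Is All You Need':
    softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise similarity of queries and keys
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted mix of value rows
```

Each output row is a convex combination of the value rows, weighted by how well the corresponding query matches each key; the paper's contribution was building an entire architecture out of this operation alone.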

@worik
This video by Ryan George, posted today, pretty much sums up the unimpressive performance of current "AI" when it tries to report knowable facts;

https://www.youtube.com/watch?v=Iz-t4XFYjR4

#MOLE #AI #video #RyanGeorge

Ryan George Debunks AI

YouTube

@strypey To summarise:

* You do not accept the Turing Test as a benchmark for AI. Ok, I disagree but it is semantics

* LLMs are very, unbelievably (to the me of four years ago) useful.

I think you do not agree.

I point to my lived reality, and to experts in the field of computing, to say that they are amazing technology. A "steam engine" moment.

Where we are in total harmony is that the scammers, grifters, and greed merchants are out in force

AI cannot do your job.
Your boss bought AI.
You're fired!

@worik
That's a pretty good summary. Thanks for the discussion : )
LLMs can't stop making up software dependencies and sabotaging everything: Hallucinated package names fuel 'slopsquatting'

The Register

"... AI code assistants invent package names. In a recent study, researchers found that about 5.2 percent of package suggestions from commercial models didn't exist, compared to 21.7 percent from open source or openly available models.

Running that code should result in an error when importing a non-existent package. But miscreants have realized that they can hijack the hallucination for their own benefit."

#ThomasClaburn, 2025

https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/

(1/2)

#AI #MOLE #AICoding

@worik

"All that's required is to create a malicious software package under a hallucinated package name and then upload the bad package to a package registry or index like PyPI or npm for distribution. Thereafter, when an AI code assistant re-hallucinates the co-opted name, the code will run the malware."

#ThomasClaburn, 2025

https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/

(2/2)
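The attack chain the article describes can be blunted at install time with a simple guard: refuse any AI-suggested dependency that isn't on a list your team has actually reviewed. A minimal sketch; the allowlist contents and function name here are purely illustrative, not a real tool:

```python
# Illustrative allowlist guard against hallucinated ("slopsquatted") package
# names: only install AI-suggested dependencies that a human has vetted.
VETTED = {"requests", "numpy", "flask", "pytest"}  # example: packages your team reviewed

def filter_suggestions(suggested: list[str]) -> tuple[list[str], list[str]]:
    """Split suggested package names into (vetted, suspicious) lists."""
    vetted = [name for name in suggested if name.lower() in VETTED]
    suspicious = [name for name in suggested if name.lower() not in VETTED]
    return vetted, suspicious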

"What a world we live in: AI hallucinated packages are validated and rubber-stamped by another AI that is too eager to be helpful."

#FerossAboukhadijeh, CEO, Socket, 2025

https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/

(1/2)

#AI #MOLE

"[_Iain] automated the creation of thousands of typo-squatted packages (many targeting crypto libraries) and even used ChatGPT to generate realistic-sounding variants of real package names at scale. He shared video tutorials walking others through the process, from publishing the packages to executing payloads on infected machines via a GUI."

#FerossAboukhadijeh, CEO, Socket, 2025

https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/

(2/2)

"Users of PyPI and package managers in general should be checking that the package they are installing is an existing well-known package, that there are no typos in the name, and that the content of the package has been reviewed before installation."

#MikeFiedler, Safety & Security Engineer, PyPI, 2025

https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/

Or, people could take responsibility for what they host on their code and package repositories, and stop hosting and shipping malware. How about that?

#security #PyPI

@strypey

Look up the Turing test

Imitation is the name of the game

It is not what I want, it is what I do, every day. My reality

Yes. I interact with a machine that simulates intelligence

Look up what the definition of AI has been since 1950. That is what it is

@worik
> Look up the Turing test

I learned about the Turing Test in the 1980s. I note the irony of referencing a test from 1950 as your gold standard definition of AI, while claiming that a paper published in 2020 is "out of date". This is called cherry-picking. It's not a scientific approach.

> I interact with a machine that simulates intelligence

In other words, not intelligent, just simulating it.

@strypey the Turing Test has been the gold standard. It is not out of date.

Turing's insight was that we do not know what intelligence is, but we know it when we see it. He called it the Imitation Game. So when you say "just imitating it", that is the very definition of AI

The goalposts were moved by jealous scientists who were gobsmacked that playing statistical games with words got us so far

I too was surprised, but I recognise the achievement.

(1/3)

@worik
> So when you say "just imitating it", that is the very definition of AI

You didn't even start reading the article you declared out of date, did you? The very first section lays out a brief history of the term, and various kinds; strong AI, weak AI, AGI, ANI, etc. I suggest you read it;

https://doi.org/10.1057/s41599-020-0494-4

(2/3)

"... when it is argued that computers are able to duplicate a human activity, it often turns out that the claim presupposes an account of that activity that is seriously simplified and distorted. To put it simply: The overestimation of technology is closely connected with the underestimation of humans."

#RagnarFjelland, 2020

https://doi.org/10.1057/s41599-020-0494-4

#AI hype is like blockchain hype. It's not as useless as critics think, but way less transformative than boosters think.

(3/3)

"... the belief that AGI can be realized is harmful. If the power of technology is overestimated and human skills are underestimated, the result will in many cases be that we replace something that works well with something that is inferior."

#RagnarFjelland, 2020

https://doi.org/10.1057/s41599-020-0494-4

This is what's happening, e.g. governments thinking that replacing human judges with Trained MOLEs allows them to cut costs *and* get more "rational" judgments. It does neither.

#MOLE #AI #AGI

@strypey But the most important point, that I am trying to make you see, is independent of definitions of "AI".

The important point is the world has changed.

If widespread access to the Internet was a Gutenberg moment (agree?),
this is a "James Watt" moment

It is no exaggeration to say that LLMs are as revolutionary as the steam engine

In that analogy the Transformer architecture is like the reducing valve, the final piece.

Do you deny its significance?

(1/2)

@worik
> Do you deny its significance?

Do I deny the hypothetical significance of functional AI being widely available? No. Do I deny that the current parlour tricks are significant? Yes, I do.

If projects like DeepSeek or #GhostX result in an AI that can be compiled and run on consumer-grade hardware, with no proprietary dependencies, that will be significant indeed. I'm open to the possibility, but I'm not holding my breath.

(2/2)

For now, the appearance of practical AI is based on systems controlled by DataFarming corporations, and totally dependent on their excessive pyramid-building of hyperscale datacentres;

https://techwontsave.us/episode/241_data_vampires_going_hyperscale_episode_1

They're mechanical turks, with massive remote compute hidden in the machine instead of little people. They're toys, and we should not be building anything else in ways that depend on them.

Data Vampires: Going Hyperscale (Episode 1) - Tech Won’t Save Us

Tech Won't Save Us
@strypey Do you think that computers that can parse natural language and respond with natural language are not a huge leap in technology?

@worik
> Do you think that computers that can parse natural language and respond with natural language are not a huge leap in technology?

If I say yes, ok, will you acknowledge that this is a *much* weaker claim than the one you came into the thread with?

https://mastodon.social/@worik/114278739583008280

Which I read as saying that this ability to statistically analyze human language, and ...

"... cough up the highest probability answer-shaped object ..."

... is proof of intelligence.