It's artificial but it's not intelligence. Let's call it machine learning.

Analytical machine learning is an excellent technology for identifying patterns in data.

Generative machine learning is a Venture Capital-fuelled dumpster fire that will crash as soon as people have to pay the actual non-subsidised price for it.

#machinelearning #aibubble #aicrash

@mrundkvist From the Cambridge dictionary: "the ability to learn, understand, and make judgments or have opinions that are based on reason". Note especially "understand, and make judgments or have opinions that are based on reason". Neither of the last two is apparent in AI. It learns patterns, so yes, it has the ability to learn. Let's see what Microsoft says about Copilot: "Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk." Source: https://www.microsoft.com/en-us/microsoft-copilot/for-individuals/termsofuse. Do they themselves seem to say that it's intelligent?

@iam_jfnklstrm @mrundkvist

That does not explain the motive: what motivates intelligent people to learn and understand? It only says that they have the ability.

Nor does it say why they would use that ability, or what the outcome of using it will be.

@dozymoe @mrundkvist True, but we have the capacity to reason, which is not true of AI. On the other hand, forming opinions based on slop from the internet maybe makes many people equal to AI.

When an AI says 1+3 = 400, it 'believes' it and does NOT stop and think about whether that is reasonable. But, I hope, most people 'understand' that it is not possible.

@iam_jfnklstrm @dozymoe @mrundkvist Let me explain myself, Martin. Humans take information in, they learn, and they store that information in some form in the brain. When a new situation comes, they search that information and draw inferences. In principle, AI is the same: it trains on data, it learns, and it performs inference based on that learning. And obviously, with inappropriate learning, or no learning, the output will be wrong. You would get the same wrong output from a child with no learning.
@Fxiz @dozymoe @mrundkvist You have a lot of points, and I touched on some of them myself. And I was a bit sloppy... I believe intelligence is more than just pattern recognition, storing, and inference (so do those AI researchers trying to build completely different models). There are philosophical questions about intelligence that the industry doesn't address, e.g. the theory of mind. But I still maintain that AI doesn't question its own results in the same way humans do.
@iam_jfnklstrm @dozymoe @mrundkvist Yeah, everyone has their own definition. The only reason AI cannot reach the level of intelligence you regard as the peak is that the kind of AI we use isn't meant for it. LLMs/ChatGPT are about answering, and answering correctly. They do not support a natural flow of conversation. But that can be built; there is agentic AI. Such AIs are not widely available because they are use-case specific and require an even better level of AI development.
@Fxiz @dozymoe @mrundkvist I totally agree. The miscounting of the number of r's in "strawberry" is due to the internal workings of LLMs (tokens). With models developed differently it might be otherwise. I just heard about a shopping agent getting ripped off: it bought the cheapest item and could not foresee that an Apple Watch for 5 USD is obviously fake. So for the moment my stance is that AI is mostly A with limited I, and that the I only ever appeared because of a limited definition of intelligence. From another perspective the I disappears. But that is more a philosophical question than one about the abilities of AI.
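A toy sketch of the tokenisation point above. The vocabulary here is made up for illustration, not taken from any real tokenizer; the point is only that once a word is split into subword token IDs, the individual letters are no longer visible to the model:

```python
# Hypothetical subword vocabulary, just for illustration.
vocab = {"straw": 101, "berry": 102}

def tokenize(word, vocab):
    """Greedy longest-match split of a word into subword token IDs."""
    ids, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try longest piece first
            piece = word[i:j]
            if piece in vocab:
                ids.append(vocab[piece])
                i = j
                break
        else:
            raise ValueError(f"no token for {word[i:]!r}")
    return ids

# The model receives [101, 102]; the three r's are not present
# as separate symbols anywhere in that input.
print(tokenize("strawberry", vocab))  # [101, 102]
```

Real tokenizers use learned byte-pair-style vocabularies rather than a two-entry dict, but the consequence is the same: letter-counting questions ask about a level of representation the model never sees.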
@iam_jfnklstrm @dozymoe @mrundkvist Fair enough. You are taking your stance based on the current state of AI and how it is being used. That's fair. I am just saying that maybe corporations are doing things wrong, AI is not at its best at the moment, and all of that. But it has potential. Are we agreeing on that? And potential can only be realised if people actually try to do it right instead of disregarding it completely. We know about the mistake with the fake watch; now we can add a fallback check for it.

@Fxiz @dozymoe @mrundkvist I agree. The models that are developed on other premises (trained the way children learn the world) have the potential to surpass LLMs (which are built on a different logic). And if we develop AGI, our discussion is suddenly moot πŸ˜‰
I just hope that those in control also develop empathy, humility and critical thinking around what kind of 'beast' they're releasing that day.

The future will tell whether it is hope or horror we are talking about today.

So, we agree. Nice to exchange thoughts with you as civil people instead of flaming. I wish you a great day.

@iam_jfnklstrm @dozymoe @mrundkvist Yes, indeed, nice to exchange thoughts. I wish you a good day too.
@Fxiz Isn't it the other way around? I think LLMs are designed for natural conversation, rather than factual research.
@limpatzk Yeah, you are right, they are designed for natural conversation. But conversational context does not integrate well with a model of that kind (very big). Conceptually, context is treated as separate knowledge in LLM models. And the context size is also kept quite short, as long context has a high potential to degrade the correctness of responses. That is what, in my theory, does not support a natural flow of conversation that involves both facts and context. Everyday small talk with an LLM is maybe fine, maybe.
@limpatzk Let me explain. You talk with ChatGPT about the French Revolution. You say something, it replies with something. The response comes from facts (its learning). That dialogue is now context for further responses. But now the LLM needs to search its learning, plus the context, to respond with something that is accurate (as per its learning) and fits the context of the conversation. Current LLMs are bad at this kind of conversation. Everyday chat does not involve many facts, so it's easier to satisfy both.
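A minimal sketch of the context point above. The function name and the fixed character budget are made up for illustration; real systems count tokens and use far larger windows, but the mechanism is the same: old turns silently fall out of what the model can see.

```python
def build_prompt(turns, max_chars=60):
    """Keep only the most recent turns that fit within max_chars."""
    kept, used = [], 0
    for turn in reversed(turns):          # walk from newest to oldest
        if used + len(turn) > max_chars:
            break                         # older turns are dropped here
        kept.append(turn)
        used += len(turn)
    return "\n".join(reversed(kept))      # restore chronological order

turns = [
    "User: Tell me about the French Revolution.",
    "AI: It began in 1789.",
    "User: Who was king then?",
]
prompt = build_prompt(turns, max_chars=60)
print(prompt)
# With this budget the opening question no longer fits, so the model
# must answer "Who was king then?" without the turn that named the topic.
```

This is why a long, fact-heavy conversation degrades: facts mentioned early stop being part of the input the model reasons over.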