That's 5D-educational chess.

#FuckGenAI #ChatGPT #GenAIsucksCamelDong

@Eatsbluecrayon Another demonstration would be to bring a chessboard to class, along with extra pieces, and have the class play chess with an AI of their choosing. They’ll get to watch with their own eyes as the AI fabricates positions and pieces.

This is especially useful because computers have been beating human opponents at chess for a long time now, so people know that chess is something computers can do. That an "AI" can't even keep the board straight suggests it is worse than its predecessors.

@WhiteCatTamer @Eatsbluecrayon that's because it's not an AI, but a really fancy autocorrect. Deep Blue and the like were purpose-built machines using complex search algorithms, while ChatGPT just strings words together that make some kind of grammatical sense, where grammar != logic. The mistake people are falling for is the thought that language is mathematical. It is not. Language is culture, which cannot be summed up through math and numbers. Culture is the synthesis of human emotion and connection. No machine can replicate that. Ever.

@jadedtwin @WhiteCatTamer @Eatsbluecrayon

If you don’t believe everything can be expressed in numbers then consequently you must believe there is some “magic” in which numbers are meaningless.

That simply isn’t the case. From the count of neurons firing, to their relations and positions: every “emotion” can be described with numbers.

Magic or numbers. That’s the choice.

Thus: language is math (numbers encoded).

@altruios You've confused the map for the territory.
@jonahgibberish
Signals/waves are still describable by numbers, even as the note paper representing them can also be described with numbers.

@altruios @jonahgibberish

But there are no artificial numbers. They are just numbers. We can't create them.

And there is no artificial space. It is just space. We can't create it.

Some people say, there is artificial intelligence. But it is just intelligence. We can't create it.

Evolution sprouted beings, who show intelligent behavior. And those beings _procreate_ and die.
A computer program, stuffed with random human communication, doesn't even have an environment it has to survive in.

@altruios @jadedtwin @WhiteCatTamer @Eatsbluecrayon

Sounds like determinism.

> From the count of neurons firing, to their relations and positions: every “emotion” can be described with numbers.

As I remember, we can't do that, because:

1) we still don't know how the human brain and consciousness work

2) On MRI scans we can see certain zones of the brain, common to every human being, firing up when some common emotion occurs; that's all.

3) https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer


@evgandr @jadedtwin @WhiteCatTamer @Eatsbluecrayon

Regarding the brain not being a computer…
I get the article's point, but there are some (willful?) blind spots in those arguments. Short-form comments aren't the way to have that discussion; I'll get my thoughts together on that.

Of course I mean to say I don’t think humans can literally see or process that much info to that accuracy, if that wasn’t implicit. Laws of physics still apply.

@altruios
Reply guy here!
Neurogenomics!

There's absolutely no evidence that cognition can be ascribed to numbers, or any measurable chemical process.

Down to brass tacks: if you pull that magical thinking in a neurobiology paper, reviewers 1, 2, and 3 will kick you to the curb.

There just isn't any measurement to support any of that.
It might be useful as a model for thinking about cognition in machine terms. It's just not neurobiology.


@evgandr @altruios @jadedtwin @WhiteCatTamer @Eatsbluecrayon
True, *we* cannot describe every emotion by numbers.
That doesn't mean it is impossible.

@holdenweb @joosteto @evgandr @jadedtwin @WhiteCatTamer @Eatsbluecrayon

Let’s keep it dumbed down: here’s your proof.

1) Name an emotion.
2) I assign it a number.
3) Done. Infinitely many numbers means every emotion is represented by a number.

It can be done…

The real issue is ordering the data sensibly.
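
A minimal sketch of that labeling scheme, in Python purely for illustration: any stream of named emotions can be paired off with the natural numbers, which is all the claim amounts to.

```python
# Toy version of "name an emotion, I assign a number": enumeration.
from itertools import count

emotion_numbers: dict[str, int] = {}
counter = count()

def assign(emotion: str) -> int:
    """Give each newly named emotion the next unused natural number."""
    if emotion not in emotion_numbers:
        emotion_numbers[emotion] = next(counter)
    return emotion_numbers[emotion]

print(assign("joy"), assign("grief"), assign("joy"))  # -> 0 1 0
```

Which number goes with which emotion is arbitrary (we choose the map); the sketch only shows that an assignment exists.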

@altruios @joosteto @evgandr @jadedtwin @WhiteCatTamer @Eatsbluecrayon yet another copout. You can’t say which emotion is associated with which number. You can’t even tell me whether the number of emotions is a countable or an uncountable infinity.

@holdenweb @joosteto @evgandr @jadedtwin @WhiteCatTamer @Eatsbluecrayon

It’s arbitrary: we choose the map.

We have only so many chemicals in our brains. The emotional state could be represented by the balances of the various neurotransmitters, combined with the electrochemical state of the brain.

Humans may never be able to represent that level of detail: I agree. But, in theory, it could be done.

@altruios @joosteto @evgandr @jadedtwin @WhiteCatTamer @Eatsbluecrayon so quantum phenomena play no role in your universe? Nothing is probabilistic? (Sorry if this seems like an inquisition, I find this genuinely interesting).

@holdenweb @joosteto @evgandr @jadedtwin @WhiteCatTamer @Eatsbluecrayon you are inserting… something into this conversation… and I don’t know what (bias) that is.

Probabilistic outcomes are described by numbers… what makes you think I would think otherwise?

It’s numbers all the way down.

@holdenweb @joosteto @evgandr @jadedtwin @WhiteCatTamer @Eatsbluecrayon also: I don't think you are dumb or an asshole. Basic respect for a fellow human, regardless of whether we see eye to eye on "numbers vs magic" :)
@holdenweb @joosteto @altruios @jadedtwin @WhiteCatTamer @Eatsbluecrayon
Ugh, man, have you confused this social network with Twitter?
@evgandr @joosteto @altruios @jadedtwin @WhiteCatTamer @Eatsbluecrayon not for a moment, but education is important. Or should bullshit go unchallenged here?

@holdenweb @joosteto @altruios @jadedtwin @WhiteCatTamer @Eatsbluecrayon

We don't share knowledge or start discussions on #fedi in such an aggressive way.

@evgandr @altruios @jadedtwin @WhiteCatTamer @Eatsbluecrayon our brains DO process information - but it’s NOTHING like a computer, and don’t let anyone tell you it is.

@Eatsbluecrayon @jadedtwin @WhiteCatTamer @altruios

One would have to define what ”expressed in numbers” and ”described with numbers” mean. Is Pi a number? And if it is, does the symbol express it, or is it an expression of our inability to write that value down as a number?

But the main thing is this: Language has no fixed meaning. A symbol (like a word) is not one thing, but many. And it shifts.

Math doesn’t.

@jadedtwin @WhiteCatTamer @Eatsbluecrayon @altruios

I’m no mathematician, but language would be something like the n-body problem. In a sense it can be expressed as numbers, but it can’t be solved in closed form.

@thelovebing @jadedtwin @WhiteCatTamer @Eatsbluecrayon
It can be expressed as numbers. As you say: we agree? The meaning of a word is a vector (a list of numbers). Here’s a video on the subject:

https://youtu.be/iErmK_sJtag?si=HaIk2FhXQ9BHNKe4

Word Embeddings: Word2Vec


@jadedtwin @altruios @WhiteCatTamer @Eatsbluecrayon

I don’t do Youtube. If you have a point I’m sure you can make it yourself.

@thelovebing @jadedtwin @WhiteCatTamer @Eatsbluecrayon the video explains how you use math on language to encode meaning. Examples like “king − man + woman ≈ queen”: relational vector math, where each word is a vector (a list of numbers). Word2vec is the keyword to research more.

Those numbers are variable, depending on the relational web of the vocabulary of the language, of course…
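
A quick sketch of that analogy arithmetic, assuming the gensim library and its downloadable pretrained GloVe vectors (any word2vec-style vectors would behave similarly):

```python
# Word-vector analogy arithmetic: king - man + woman ≈ queen.
# Assumes: pip install gensim (the vectors download on first use, ~66 MB).
import gensim.downloader as api

kv = api.load("glove-wiki-gigaword-50")  # pretrained word vectors

# most_similar does the vector sum/difference and returns nearest words.
print(kv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
# "queen" typically tops the list with these vectors.
```

The exact neighbors and scores depend entirely on the training corpus, which is the "relational web of the vocabulary" point above.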

@altruios @thelovebing @jadedtwin @WhiteCatTamer @Eatsbluecrayon Which has, as it should, a "limitations" section at the end... which only briefly covers the many limitations of such things.

"Context" is a massive one which changes from person to person, use to use let alone from culture to culture and then from language to language (while translation apps do a reasonable job, especially at the basics, they'll always fall down with e.g. more complex prose where the translation becomes opinion).

@jadedtwin @altruios @level98 @WhiteCatTamer @Eatsbluecrayon

Precisely. ”Queen”, for example, is a word that has (had) many meanings. ”Queen Kristina” of Sweden –again, an example– wasn’t a queen. She never married, and the word used to signify the spouse of a king. She was crowned a king, though. Which of course changed the meaning of those words, at least in Sweden, but not the same way then as it later did.

@level98 @altruios @WhiteCatTamer @jadedtwin @Eatsbluecrayon

It is enormously complex, chaotic even. And while every meaning of a word might be described in numbers (or a dictionary), those descriptions are not very exact (and that’s why translation is really hard, because it isn’t just about denotation).

@level98 @altruios @WhiteCatTamer @jadedtwin @Eatsbluecrayon Magritte explained this better than anyone, the proof thereof being that tech bros mostly don’t get the explanation.

@altruios @jadedtwin @WhiteCatTamer @Eatsbluecrayon
I see this the other way around.

Humans use numbers to understand things. Every new thing is assigned measurements and studied. This helps a lot in understanding how everything works.

When studying something new, we start by assigning numbers and seeing how they behave. We need different rules for different processes.

The numbers are not the starting point, they are the way we try to make sense of things that are very complex. If we start with only numbers, we will not be able to figure out anything. We need to give the numbers meaning, and discover the rules behind them.

I'm reminded of Socrates. If you keep asking why, there will be a moment where you don't know. Why do things fall? Well, because of gravity. Why does gravity exist? Well, the earth's mass attracts other mass. This can be explained by diving into molecules and atoms and ions and neutrons and whatnot, but why are there atoms? And why was there a Big Bang? And why does anything exist at all?

For us to assign numbers, we need to understand so much more. And a computer is bound by the logic we give it. Yes, it can make some things up by itself, but those things can be just as wrong as our age-old belief in the flatness of the earth. And the computer will lack knowledge we didn't know it needed. Or things we don't know ourselves.

The world is a magical place, and we are by no means done discovering and making sense of it. ❤️

@altruios @jadedtwin @WhiteCatTamer @Eatsbluecrayon Perhaps you've not read, or understood, Gödel's theorems etc.

Maths *is* amazing, for example, in being "unreasonably good" at describing the workings of the universe to incredible precision.

HOWEVER, as well as Gödel, we also have the hand-wavy math of QFT, e.g. the "Yang–Mills existence and mass gap" Millennium Prize Problem.

Let alone struggles to describe "complex", "chaotic" etc. systems.

Maths' limitations are as incredible as what it can do.

@altruios @jadedtwin @WhiteCatTamer @Eatsbluecrayon numbers can encode language. This does not mean that language IS numbers. Correlation does not imply causality …
@altruios @jadedtwin @WhiteCatTamer @Eatsbluecrayon also remember that when you describe something numerically you can only do so within the accuracy limits of the numerical representation. There’s always going to be quantisation noise. The thing I like about analogue systems is to get from one value to another they have to go through all (an infinite number of) points in between.
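
The quantisation point is easy to see in any language with binary floating point; a minimal illustration in Python:

```python
# Binary floats cannot represent 0.1 exactly, so even tiny arithmetic
# carries quantisation error.
print(0.1 + 0.2 == 0.3)        # False
print(f"{0.1 + 0.2:.20f}")     # 0.30000000000000004441

import sys
print(sys.float_info.epsilon)  # ~2.22e-16: the gap between 1.0 and the
                               # next representable double
```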

@holdenweb
"Representation" is doing the heavy lifting there…
I agree we cannot represent all the math exactly. Normal numbers make up the bulk of the reals, and they can't be represented on a computer… that doesn't mean normal numbers don't exist.

Analog systems are still describable by numbers. Digitization is a different issue.

Our limits of representation are not the limits of numbers in the abstract.

You bring up good points, though. Quantization is an issue with representing numbers.

@altruios
Well yes, but my real point is that no matter how accurate the numbers you still can’t describe consciousness that way. But I doubt that will stop you trying🙂

@holdenweb you assert that it can’t be done.

Okay: assertion noted. Any actual argument or reason? (Other than that it “feels/seems” like it can’t be done.)

@altruios Sorry, it’s you who need to provide the argument and reason.

@holdenweb
The reasoning is: there is no magic. Magic being a thing which has no attribute that can be measured, yet still has a measurable effect… a contradiction in principle. Therefore everything that exists can be measured… including processes we don’t yet understand (consciousness is a process that happens, not something that just is).

We may not be able to measure everything accurately… but that’s a limit of human abilities.

@altruios well, exactly. Don’t know about you, but I’m intrigued that we can prove perfect accuracy is impossible. And if everything’s an approximation, what can we believe?

@holdenweb prove? Not quite. Humans will (probably) never make a machine that can represent normal numbers - but we may yet invent something awesome beyond our current comprehension that can.

I don’t think that’s likely… but the probability of that happening is only approximately zero.

All knowledge is an approximation of reality, and our experience is subjective through that approximation (like how color is a representation of wavelength, and exists only in your brain).

@altruios sorry. It’s actually zero. Infinite accuracy is impossible with finite resources.

@holdenweb infinity, when represented physically, is a process that never ends. Define a measuring function that runs forever and increases its accuracy over time, and it converges to the described infinity.

Close to zero: practically zero… not zero! We don’t know what we don’t know yet.

Besides that: there is the concept of useful accuracy, where further precision offers no more predictive power…
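
A toy version of such a "measuring function" that gains accuracy the longer it runs, here the Leibniz series converging toward pi (any convergent series would make the same point):

```python
# Run longer, get closer: each term of the Leibniz series nudges the
# estimate of pi toward the true value, without ever finishing.
import itertools, math

total = 0.0
for k in itertools.count():
    total += (-1) ** k / (2 * k + 1)
    estimate = 4 * total
    if k in (10, 1_000, 100_000):
        print(f"k={k:>7}  pi~{estimate:.10f}  error={abs(math.pi - estimate):.2e}")
    if k >= 100_000:
        break
```

It also shows the "useful accuracy" point: past some k, the extra digits stop mattering for any practical prediction.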

@altruios Practically zero? There is only zero and non-zero!

And yes, of course good enough is good enough for practical purposes … but then that’s no longer theoretical is it?

@holdenweb practically zero is non-zero…? Did that really need to be specified?

Practical experiments test/verify/refine theory…

@jadedtwin @Eatsbluecrayon This demonstration is for others, not for people who already know what the things colloquially known as AI are actually doing.
@WhiteCatTamer @Eatsbluecrayon sorry if my post came off as harping at you. I meant that more as an additional explanation to what you're saying (sometimes my autism makes my words not so good)
@jadedtwin @Eatsbluecrayon It’s fine, I get where you’re coming from; being specific about language is important. I usually use “LLM” when referring to these things, but if I’m talking to, say, a group of 4th–6th graders (9-11 years old), who would likely have played chess before, have access to chess programs and the Internet, and are just getting to a point where homework might require research, illustrating the flaws of LLMs inoculates them against AI BS.
@WhiteCatTamer @Eatsbluecrayon totally fair. I was a teacher/tutor at my local college, so I'm more used to addressing adults on these kinda things. With young ones ya gotta meet them at their level.

@jadedtwin @WhiteCatTamer @Eatsbluecrayon

> that's cause it's not an AI, but a really fancy autocorrect.

That is a little annoying

Go to DeepSeek's web page, https://chat.deepseek.com/, turn on the "deep think" toggle, enter a query, and watch its internal process laid out explicitly.

The LLMs and transformers at the core are implementing a "stochastic parrot" (but what am I? What are you?), but the end result is (machine) intelligence. That is the best term for it.

@worik @jadedtwin @WhiteCatTamer @Eatsbluecrayon

LLMs are not "fancy autocorrect" but they ARE fancy autocomplete. Most of their training is just any kind of text that makes sense, and only on the final stages of training you feed it conversations between "user" and "assistant", to specialize it in following instructions. The illusion of a personality disappears as soon as you're allowed to request the LLM to keep completing the text: after autocompleting the answer of the "assistant" it starts autocompleting the message of the "user". LLMs are impersonation machines, it's just that in most cases they're made to impersonate the "assistant" side. When I have access to the raw text autocompletion of a LLM I have a bit of fun seeing what it autocompletes as the user, or how it autocompletes a conversation that is not a user-assistant conversation. For me it demystifies this magic thinking that people have about LLMs.

The "thinking" that some LLMs do is just an extension of a technique called "chain of thought", to make it have more information in the context and to be able to resolve contradictions. It doesn't need to be a "thought" in the same language or even in natural language at all, it works just as well if it appears as random symbols to us. It's just that deepseek trained it to be readable. It's not real thinking. It works better than without it, certainly, but real thinking would involve much more than just generating a bunch of stuff to improve the quality of next generations. Actual thinking involves abstract non-verbal thought as well as being able to learn from experience (even from just one single experience).

The only way LLMs learn nowadays is by "training" on a lot of data until they eventually recognize patterns; you can't just train one on a single conversation to make it learn, it doesn't work like that.

@starsider @jadedtwin @WhiteCatTamer @Eatsbluecrayon

That is nonsense.

Define thought. I challenge you!

The insight from (I think) seventy-five years ago by Turing is that we do not know what intelligence is, but we know it when we see it.

These machines are exhibiting intelligence. If so, then so.

@worik @jadedtwin @WhiteCatTamer @Eatsbluecrayon

Thought refers to the mental processes involved in cognition, reasoning, imagination, memory, and planning. It includes both the conscious and subconscious mental activities that allow individuals to interpret, evaluate, and respond to their environment and experiences.

LLMs have a very narrow and limited version of this:

They don't have imagination; instead they "think" of something and deduce that something else is above or behind or inside it, etc. Some multimodal models have something resembling imagination.

They don't have subconscious activity or inner abstract thought, although recurrent-depth models (latent reasoning) kind of resemble abstract thought.

Memory is a hack: LLMs have no recollection of previous conversations. Instead, what some systems do to give them "memory" is store chunked conversations in a vector database and inject those chunks into the context when some vector seems relevant (when two embeddings are a short distance apart).
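
That "memory" trick condenses to a few lines; a sketch where embed() is a toy stand-in for a real sentence-embedding model (e.g. a sentence-transformers encoder):

```python
# Vector-database "memory": store chunks of past conversation as
# embeddings, recall the nearest ones, and paste them into the prompt.
import re
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy stand-in for an embedding model: hashed bag of words mapped to
    deterministic pseudo-random vectors. A real system would call an
    actual encoder here."""
    v = np.zeros(64)
    for word in re.findall(r"[a-z]+", text.lower()):
        word_rng = np.random.default_rng(abs(hash(word)) % (2**32))
        v += word_rng.standard_normal(64)
    norm = np.linalg.norm(v)
    return v / norm if norm else v

memory: dict[str, np.ndarray] = {}  # chunk text -> embedding

def remember(chunk: str) -> None:
    memory[chunk] = embed(chunk)

def recall(query: str, k: int = 3) -> list[str]:
    q = embed(query)
    # For unit vectors, cosine similarity is the dot product; "seems
    # relevant" means a short embedding distance / high similarity.
    return sorted(memory, key=lambda c: float(q @ memory[c]), reverse=True)[:k]

remember("User said their cat is named Miso.")
remember("User prefers metric units.")
print(recall("what is the name of my cat?", k=1))
# The recalled chunks get injected into the LLM's context window: that is
# the entirety of its "recollection" of past conversations.
```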

LLMs don't have an environment or experience. They're fixed at a point in time given by their training and fine-tuning, though only after receiving a staggering amount of "environments" and "experiences" in text form.

LLMs are one piece of the puzzle to allow machines to think like a human, but they can't really think to learn, and currently they're extremely limited.

By conversing with an LLM you cannot teach it to, for instance, elaborate a mathematical proof. You can instead feed it a lot of mathematical proofs and it becomes better at making them or checking them, but it still fails much more than a human (it's been tried with a fine-tune of R1). Because it doesn't come from its experiences. It doesn't come from realizing its mistakes in one conversation and learning from them in another conversation.

If they can't learn from experience, in my opinion that's not true thought. It's only part of it.

If it was not obvious by now, I'm really interested in how LLMs work and how to make thinking machines that can become individuals. But the current crop of LLMs ain't it. Also OpenAI and other corporations waste way too much energy and spam our servers, for goals that do not align with mine at all. I very much prefer to play with small LMs that run in my computer, without sending my private data to them.

That's another issue: they have staggering amounts of private data, and even if their terms of service promise that it won't be used for training (and assuming they keep the promise), your data is still very useful (for example, for evaluation and validation of training batches), so they will keep it, and it could still be leaked or sold in the future.

@starsider @jadedtwin @WhiteCatTamer @Eatsbluecrayon

> make thinking machines that can become individuals.

That is unnecessary for intelligence. What is necessary is the ability to exhibit intelligence, which these models do.

The ethics of how they are trained are woeful (Meta using pirated books FFS), misunderstandings about how they can be used are widespread, and outright scams are everywhere, but they do exhibit intelligence.

@starsider @jadedtwin @WhiteCatTamer @Eatsbluecrayon

> Thought refers to the mental processes involved in cognition, reasoning, imagination, memory, and planning.

That is unhelpful. But by that definition these things definitely have thoughts.

Please do have a look at where DeepSeek puts the process on display; it is right there.

When it comes to intelligence, faking it is making it.

@worik @jadedtwin @WhiteCatTamer @Eatsbluecrayon Dude, recently I tried to have DeepSeek solve a problem and it couldn't; it went in circles for a long time and then the answer was wrong. If it were truly intelligent and capable of actual thought, I could tell it where it went wrong and it would be able to learn from that.

But do you know what happens when I send another message?

The LLM sees the previous messages, but it doesn't see the previous thinking blocks, to save context tokens.

Try to talk to it about things it mentions in its thinking blocks. You can't, because it literally doesn't have recollection of ever having any of those thoughts.

You can change the frontend to include all the thinking blocks, but in most cases it performs worse because the context is filled with fluff (and because all of the training data for fine tuning only ever has a single thinking block).

Before DeepSeek R1, and even before ChatGPT o1 (the first version with "thinking"), I made a local LLM "think" by having a multi-user conversation (basically several NPCs) and then turning one of the characters into the actual "assistant" and another one into the "assistant's thoughts". It's just lengthening the conversation with multiple argumentation lines so it can figure out the best one.

Is a chatroom "thoughts"? Is the person who talks the most what constitutes "thinking"?
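
For what it's worth, the NPC trick above is just shaping the transcript; a sketch of the prompt layout (model-agnostic, feed it to any raw text-completion endpoint):

```python
# Pre-"thinking" DIY reasoning: cast one character as the assistant's
# inner voice, then only show the user the "Assistant:" lines.
prompt = (
    "A conversation between User, Assistant, and Assistant's thoughts.\n"
    "User: Should I take the highway or the back roads at 5pm?\n"
    "Assistant's thoughts: At 5pm the highway is likely congested. The\n"
    "back roads are slower but steadier. Weigh predictability vs speed.\n"
    "Assistant:"
)
# The completion after "Assistant:" is conditioned on the "thoughts"
# lines, the same mechanism later productized as thinking blocks.
print(prompt)
```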