I always wanted to teach a robot to say “I think therefore I am”.
Ok this is crazy, I just saw this word earlier today in the book I was reading—I know it’s primed in my brain now, but really, what are the odds of seeing this again?
Frequency illusion - Wikipedia

And, interestingly, I think the feeling of seeing a new vocab word more often is also apophenia.

Depends: why do you believe you are seeing a particular word more often?

The reason determines whether it is apophenia or not. If you delusionally believe that an alien entity is trying to communicate secret information to you in particular by exposing you to a word more frequently, that’s apophenia. If you know it is the frequency illusion and just find it kinda funny how it feels, then it isn’t. Anyways, the term is more often associated with perceiving patterns of causality in things that are random or banal. I’m of the opinion that this comic in particular is not a good representation of apophenia, other than the fact that the protagonist is certainly disconnected from reality.

I think apophenia describes the feeling of a pattern too, even if you intellectually understand that there isn’t a pattern.
It is a clinical term; it doesn’t describe a feeling. If you are not disconnected from reality, you do not have apophenia. It can be subclinical or non-pathological, but it is not a vague feeling. It is a concrete belief. I’m sorry if I’m being harsh with this. I just hate pop appropriation of psychological terms. They always end up distorted into TikTok garbage.

I think it has gained new meaning beyond being a symptom of schizophrenia, such as the tendency for gamblers to believe they’re on a lucky streak, or other illusions that trick the brain into seeing patterns that aren’t there.

Or the wikipedia article is wrong.

Exactly, they do believe it. It’s not a vague feeling that’s kind of funny while they still know, logically, that it isn’t true. For the person with apophenia, it is true. The gambler does believe in the pattern of the numbers and that their luck is due. It is not a vague feeling; it is a belief that has overridden their contact with reality. It can be non-pathological or subclinical, as in, it doesn’t affect their day-to-day life and causes no suffering to themselves or others. But they absolutely believe it and behave according to said belief.

Okay, pareidolia is also a form of apophenia. You can “see” a face in a pile of rocks and be creeped out by it while still understanding that the pile of rocks is not actually a face. Belief doesn’t have to override contact with reality, it merely needs to be present.

A gambler feeling lucky might still understand that luck isn’t real, but the feeling persists.

First of all, I want to start by saying that as a psychologist I love when people correct me about things I’ve studied extensively. No better feeling.

That said: yes, pareidolia and apophenia are related phenomena. However, the term apophenia is almost exclusively used in a psychiatric context (less so by economists). So, yes, Wikipedia can be and often is wrong. In this particular instance I notice that the claim “Pareidolia is a type of apophenia involving the perception of images or sounds in random stimuli,” or “Pareidolia is a specific but common type of apophenia,” as it appears today in the English article for apophenia, lacks any sort of source. They are related, and we suspect they might arise from the same underlying neural mechanisms, but they are distinctly different phenomena. To call one a type of the other is an epistemological error without any proper academic source to back it up.

I am, however, sure that in the context of internet discussions, my expertise is about as good as the perception of anyone who just learned about the word a few days ago.

Coincidentally, to believe adamantly, against any evidence or factual authority that pareidolia is apophenia might actually be classified as apophenia…

EDIT: Just noticed that one of the sources used by the Wikipedia article quotes the Wikipedia article to claim that apophenia is audio pareidolia. Ultimate circularity achieved. If the source is “Wikipedia said so”, you’ve lost the plot.

Yes, exactly. It is very meta.
Yeah? Well, maybe yours is an illusion, but how do you explain all the Dodge Rams on the road after I bought mine?
I’m not sure I’d admit to buying a Dodge Ram on the internet…
(Here’s the secret part, I didn’t.)
That’s exactly what a Dodge Ram buyer would say…
Just because I want to have sex with a sexy car with nuts on the back doesn’t mean I’m weird.
It’s a memory optimisation invented by GTA
Give this guy $100 billion!

Seriously, the sheer number of people who equate coherent speech with sentience is mind-boggling.

All jokes aside, I have heard some decently educated technical people say “yeah, it’s really creepy that it put a random laugh in what it said” or “it broke the 4th wall when talking”… it’s fucking programmed to do that and you just walked right into it.

And people are programmed to talk like that too. It’s just a matter of scale.

Oh my goddd…

Honestly, I think we need to take all these solipsistic tech-weirdos and trap them in a Starbucks until they can learn how to order a coffee from the counter without hyperventilating.

The difference is knowledge. You know what an apple is. An LLM does not. It has training data in which the word apple is associated with the words red, green, pie, and doctor.

The model then uses a random number generator to mix those words up a bit and sees if the result looks a bit like the training data; if it does, the model spits out a sequence of words that may or may not be a sentence, depending on the size and quality of the training data.

At no point is any actual meaning associated with any of the words. The model is just trying to fit different shaped blocks through different shaped holes, and sometimes everything goes through the square hole, and you get hallucinations.
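
To put that “random number generator” step in concrete terms: at each step the model turns scores for candidate next words into probabilities and draws one at random. A toy sketch with made-up scores (a real model computes them from billions of learned weights):

```python
import math
import random

# Made-up scores for the word that follows "the apple is".
# A real model would compute these from billions of learned weights.
logits = {"red": 2.1, "green": 1.3, "a": 0.4, "doctor": -1.0}

def sample_next(logits, temperature=0.8):
    # Softmax with temperature: higher temperature = more random mixing.
    scaled = {word: score / temperature for word, score in logits.items()}
    max_s = max(scaled.values())
    weights = [math.exp(s - max_s) for s in scaled.values()]
    # The "random number generator" step: draw one word by its weight.
    return random.choices(list(scaled.keys()), weights=weights)[0]

print(sample_next(logits))  # usually "red", occasionally something else
```

That’s the whole trick, repeated once per word.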

Our brains just get signals coming in from our nerves that we learn to associate with the concept of an apple. We have years of such training data, we use more than words to tokenize thoughts, and we have much more sophisticated state / memory; but it’s essentially the same thing, just much much more complex. Our brains produce output that is consistent with their internal models and constantly use feedback to improve those models.

You think you are saying things which prove you are knowledgeable on this topic, but you are not.

The human brain is not a computer. And any comparisons between the two are wildly simplistic and likely to introduce more error than meaning into the discourse.

What is this whole “human beings are special and have a soul” thing? You happen to experience things you “feel”, that’s it. Everything else is just like a specialized computer, shaped by nature to act in a certain way.
If you think you can hand-wave consciousness, self-awareness, sentience, and qualia away in a tossed-off social media post, good luck with that. 🤣
I don’t possess conscious experience and even I think this whole thread is crap.
The human brain is exactly like an organic, highly parallel computer system using convolution, just like AI models. It’s just way more complex. We know how synapses work. We know the form of grey matter. It’s too complex for us to model it all artificially at this point, but there’s nothing indicating it requires a magical function to make it work.

but it’s essentially the same thing, just much much more complex

If you say that all your statements and beliefs are a slurry of weighted averages depending on how often you’ve seen something without any thought or analysis involved, I will believe you 🤷‍♂️

There’s no reason to think that the thought and analysis you perceive isn’t based on such complex historical weighted averages in your brain. In fact, since we do know the basic fundamentals of how brains work, it would seem that’s exactly what’s happening.
What’s funny is people thinking their brain is anything magically different from an organic computer.

In fact, since we do know the basic fundamentals of how brains work, it would seem that’s exactly what’s happening.

I encourage you to try to find and cite any reputable neuroscientist that believes we can even quantify what thought is, much less believes both A) we ‘know the basic fundamentals of how brains work’ and B) it’s just like an LLM.

Your argument isn’t a line of reasoning invented by neuroscientists, it’s one invented by people who need to sell more AI processors. I know which group I think has a better handle on the brain.

I never said it’s directly like an LLM. That’s a very specific form. The brain has many different structures, and the neural interconnections we can map have been shown to perform a form of convolution, much like what many AI systems use (not by coincidence). Scientists generally avoid metaphysical subjects like consciousness because they’re inherently unprovable. We can look at the results of processing/thought and quantify the complexity and accuracy. We do this for children at various ages and can see how they learn to think with increasing complexity. We can do this for AI systems too. The leaps we’ve seen over the last few years, as the computational power of computers has reached some threshold, show emergent abilities that only a decade ago were thought to be impossible. Since we can never know anyone else’s experience, we can only go on input/output. And so if it looks like intelligence, then it is intelligence. The concept of ‘thought’ in this context is only semantics. There is, so far, nothing to suggest that magic is needed for our brains to think; it’s just a physical process, so as we add more complexity and different structures to AI systems, there’s no reason to think we can’t make them do the same as our brains, or more.
What’s your point? Do you believe that LLMs actually understand their own output?
That’s a difficult question. The semantics of ‘understand’ and the metaphysics of how that might apply are rather unclear to me. LLMs have a certain consistent modelling that agrees with their output, so in that respect it’s the same as human thought, which I think we’d agree involves ‘understanding’; but feeding 1+1 into a calculator will also consistently get the same result. Is that understanding? In some respects it is; the math is fully represented by the inner workings of the calculator. It doesn’t feel to us like real understanding because it’s trivial and very causal. I think that’s just because the problem is so simple. So what we end up with is that, assuming an AI is reasonably correct, whether it is really understanding is more a matter of the complexity it handles. And the complexity of human thought is much higher than that of current AI systems, partly because we always hold all sorts of other thoughts and memories that can be independent of a particular task but are combined at some level.
So, in a way, the LLM construct understands its limited mapping of a problem. But even though it’s using the same input/output language as humans do, current LLMs don’t understand things at anywhere near the level that humans do.

It’s not a difficult question.

LLMs do not understand things.

If you’re going to define it that way, then obviously that’s how it is. But do you really understand what understanding is?

show emergent abilities

Immediate mark of someone who is deceiving or has been deceived.

And so if it looks like intelligence, then it is intelligence

Wow, you mean I can understand Chinese?

Chinese room - Wikipedia

If you don’t see the new things that computers can do with AI, then you are being purposely ignorant. There’s tons of slop, along with useful capabilities; but even that slop generation is clearly a new ability computers didn’t have before.

And yes, if you can process written Chinese fully and respond to it, you do understand it.

And yes, if you can process written Chinese fully and respond to it, you do understand it.

Understanding is when you follow instructions without any comprehension, got it 👍

You have to understand instructions on some level to be able to follow them. 👍🏻

You can tell a person to think about apples, and the person will think about apples.

You can tell an LLM ‘think about apples’ and the LLM will say ‘Okay’ but it won’t think about apples; it is only saying ‘okay’ because its training data suggests that is the most common response to someone asking someone else to think about apples. LLMs do not have an internal experience. They are statistical models.

Well, the LLM does briefly ‘think’ about apples, in that it activates its ‘thought’ areas relating to apples (the token representing apples in its system). Right now, an LLM’s internal experience is based on its previous training and the current prompt while it’s running. Our brains are always on and circulating thoughts, so of course that’s a very different concept of experience. But you can bet there are people working on building an AI system (with LLM components) that works that way too. The line will get increasingly blurred. Our brain processing is just an organic statistical model with complex state management and chemical-based timing control.

You misunderstand. The outcome of asking an LLM to think about an apple is the token ‘Okay’. That is the sum total of its objective. It does not perform a facsimile of human thought; it performs an analysis of what the most likely next token would be, given what text existed before it. It imitates human output without any of the behavior or thought processes that lead up to that output in humans. There is no model of how the world works. There is no theory of mind. There is only how words are related to each other with no ‘understanding’. It’s very good at outputting reasonable text, and even drawing inferences based on word relations, but anthropomorphizing LLMs is a path that leads to exactly the sort of conclusion that the original comic is mocking.

Asking an LLM if it is alive does not cause the LLM to ponder the possibility of whether or not it is alive. It causes the LLM to output the response most similar to its training data, and nothing more. It is incapable of pondering its own existence, because that isn’t how it works.

Yes, our brains are actually an immensely complex neural network, but beyond that the structure is so ridiculously different that it’s closer to comparing apples to the concept of justice than comparing apples to oranges.
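
If it helps to see that concretely, here’s a toy sketch of the loop I mean. The “model” is just a hand-written table of continuation counts, nothing like a real trained network, but the shape of the loop is the point: score possible next tokens given the text so far, pick one, append, repeat. Nowhere in it is there a step that corresponds to pondering.

```python
# Toy autoregressive loop. A real LLM replaces this hand-written table with a
# trained neural network, but the loop itself has the same shape.
toy_model = {
    ("think", "about"): {"apples": 5, "it": 3},
    ("about", "apples"): {".": 6, "okay": 4},
    ("apples", "."): {"okay": 7},
    (".", "okay"): {".": 9},
}

def next_token(tokens):
    # Look at the last two tokens and return the highest-scoring continuation.
    options = toy_model.get(tuple(tokens[-2:]), {})
    return max(options, key=options.get) if options else None

tokens = ["think", "about", "apples", "."]
for _ in range(5):
    tok = next_token(tokens)
    if tok is None:
        break
    tokens.append(tok)

print(" ".join(tokens))  # "think about apples . okay ."
```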

I’m well aware of how LLMs work. And I’m pretty sure the apple part in the prompt would trigger significant activity in the areas related to apples. It’s obviously not a thought about apples the way a human would have one. The complexity and the structure of a human brain are very different. But the LLM does have a model of how the world works, from its token-relationship perspective. That’s what it’s doing: following a model. It’s nothing like human thought, but it’s really just a matter of degrees. Sure, apples to justice is a good description. And it doesn’t ‘ponder’, because a typical LLM setup doesn’t feed back continuously, although I suspect that’s coming. But what we’re doing with LLMs is a basis of thought. I see no fundamental difference except scale between current LLMs and human brains.
Of course it’s creepy. Why wouldn’t it be? Someone programmed it to do that, or programmed it in such a way that it weighted those additions. That’s weird.

The technical term is the ELIZA effect.

In 1966, Professor Weizenbaum made a chatbot called ELIZA that essentially repeats what you say back in different terms.

He then noticed, by accident, that people kept convincing themselves it’s fucking conscious.

“I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

- Prof. Weizenbaum on ELIZA.

ELIZA effect - Wikipedia
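
For anyone curious how little machinery that took: the DOCTOR script behind ELIZA was essentially pattern matching plus pronoun reflection. A stripped-down toy sketch of the idea (not Weizenbaum’s actual script):

```python
import re

# Pronoun "reflection" so the reply mirrors the user's statement back at them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A couple of DOCTOR-style rules: (pattern, response template).
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*)", "Please tell me more about that."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def eliza_reply(text):
    for pattern, template in RULES:
        match = re.match(pattern, text.lower().strip())
        if match:
            return template.format(*[reflect(g) for g in match.groups()])

print(eliza_reply("I feel like my computer understands me"))
# -> Why do you feel like your computer understands you?
```

No model of you, no model of anything; just string substitution. And people still poured their hearts out to it.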

Just got shown that at a lecture yesterday… Hmm.
Mind blowing 🤯
You are alive.
It managed to capitalize the sentence!

Love the meme but also hate the drivel that fills the comment sections on these types of things. People immediately start talking past each other. Half state unquantifiable assertions as fact (“…a computer doesn’t, like, know what an apple is maaan…”) and half pretend that making a sufficiently complex model of the human mind lets them ignore the Hard Problem of Consciousness (“…but, like, what if we just gave it a bigger context window…”).

It’s actually pretty fun to theorize if you ditch the tribalism. Stuff like the physical constraints of the human brain, what an “artificial mind” could be and what making one could mean practically/philosophically. There’s a lot of interesting research and analysis out there and it can help any of us grapple with the human condition.

But alas, we can’t have that. An LLM can be a semi-interesting toy to spark a discussion but everyone has some kind of Pavlovian reaction to the topic from the real world shit storm we live in.

Hard problem of consciousness - Wikipedia

You’re committing a different sin, and it’s failing to consider that I already played with these toys six years ago and I’m now bored with them.

Also, you’re on the fuckAI board, which is a place dedicated to a political position.

Agreed. He’s committed the sin of not realizing he’s in an echo chamber. How dare he try to have a rational conversation when people like petrol sniff king and I just want to cling to our tribalism! We’re right and there’s nothing you can do to convince us otherwise.