Depends. Why do you believe you are seeing a particular word more often?
The reason determines whether it is apophenia or not. If you delusionally believe an alien entity is trying to communicate secret information to you in particular by exposing you to a word more frequently, that’s apophenia. If you know it is the frequency illusion and just find it kinda funny how it feels, then it isn’t. Anyway, the term is more often associated with perceiving patterns of causality in things that are random or banal. I’m of the opinion that this comic in particular is not a good representation of apophenia, beyond the fact that the protagonist is certainly disconnected from reality.
I think it has gained new meaning beyond being a symptom of schizophrenia, such as the tendency for gamblers to believe they’re on a lucky streak or other illusions that trick the brain into seeing patterns that aren’t there.
Or the wikipedia article is wrong.
Okay, pareidolia is also a form of apophenia. You can “see” a face in a pile of rocks and be creeped out by it while still understanding that the pile of rocks is not actually a face. Belief doesn’t have to override contact with reality; it merely needs to be present.
A gambler feeling lucky might still understand that luck isn’t real, but the feeling persists.
First of all, I want to start by saying that as a psychologist I love when people correct me about things I’ve studied extensively. No better feeling.
That said: yes, pareidolia and apophenia are related phenomena. However, the term apophenia is almost exclusively used in a psychiatric context (less so by economists). So yes, Wikipedia can be and often is wrong. In this particular instance, I notice that the claims “Pareidolia is a type of apophenia involving the perception of images or sounds in random stimuli” and “Pareidolia is a specific but common type of apophenia”, as they appear today in the English article on apophenia, lack any sort of source. The two are related, and we suspect they might arise from the same underlying neural mechanisms, but they are distinctly different phenomena. To call one a type of the other is an epistemological error without any proper academic source to back it up.
I am, however, sure that in the context of internet discussions, my expertise counts for about as much as the impressions of anyone who just learned the word a few days ago.
Coincidentally, to believe adamantly, against any evidence or factual authority, that pareidolia is apophenia might actually be classified as apophenia…
EDIT: Just noticed that one of the sources used by the Wikipedia article cites the Wikipedia article itself to claim that apophenia is audio pareidolia. Ultimate circularity achieved. If the source is “Wikipedia said so”, you’ve lost the plot.
Seriously, the sheer number of people who equate coherent speech with sentience is mind-boggling.
All jokes aside, I have heard some decently educated technical people say “yeah, it’s really creepy that it put a random laugh in what it said” or “it broke the 4th wall when talking”… it’s fucking programmed to do that and you just walked right into it.
Oh my goddd…
Honestly, I think we need to take all these solipsistic tech-weirdos and trap them in a Starbucks until they can learn how to order a coffee from the counter without hyperventilating.
The difference is knowledge. You know what an apple is. An LLM does not. It has training data in which the word apple is associated with the words red, green, pie, and doctor.
The model then uses a random number generator to mix those words up a bit and checks whether the result looks like its training data; if it does, the model spits out a sequence of words that may or may not be a sentence, depending on the size and quality of the training data.
At no point is any actual meaning associated with any of the words. The model is just trying to fit different shaped blocks through different shaped holes, and sometimes everything goes through the square hole, and you get hallucinations.
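Roughly, the loop looks like this toy sketch (made-up vocabulary and probabilities, nothing like a real model’s tokenizer or scale, but the same basic shape):

```python
import random

# Toy "model": a made-up table of which words tend to follow which,
# standing in for the statistics a real LLM learns from its training data.
next_word_probs = {
    "apple": {"pie": 0.4, "red": 0.3, "doctor": 0.2, "green": 0.1},
    "pie": {"recipe": 0.5, "crust": 0.3, "today": 0.2},
    "red": {"apple": 0.6, "car": 0.4},
}

def generate(start, length=5):
    words = [start]
    for _ in range(length):
        probs = next_word_probs.get(words[-1])
        if not probs:
            break  # no statistics for this word, so stop
        choices, weights = zip(*probs.items())
        # the random number generator picks the next word, weighted by frequency
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("apple"))  # e.g. "apple pie crust" -- plausible-looking, no meaning attached
```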
You think you are saying things which prove you are knowledgeable on this topic, but you are not.
The human brain is not a computer. And any comparisons between the two are wildly simplistic and likely to introduce more error than meaning into the discourse.
but it’s essentially the same thing, just much, much more complex
If you say that all your statements and beliefs are a slurry of weighted averages depending on how often you’ve seen something without any thought or analysis involved, I will believe you 🤷‍♂️
In fact, since we do know the basic fundamentals of how brains work, it would seem that’s exactly what’s happening.
I encourage you to try to find and cite any reputable neuroscientist that believes we can even quantify what thought is, much less believes both A) we ‘know the basic fundamentals of how brains work’ and B) it’s just like an LLM.
Your argument isn’t a line of reasoning invented by neuroscientists, it’s one invented by people who need to sell more AI processors. I know which group I think has a better handle on the brain.
It’s not a difficult question.
LLMs do not understand things.
show emergent abilities
Immediate mark of someone who is deceiving or has been deceived.
And so if it looks like intelligence, then it is intelligence
Wow, you mean I can understand Chinese?
If you don’t see the new things that computers can do with AI, then you are being purposely ignorant. There’s tons of slop, along with useful capabilities; but even that slop generation is clearly a new ability computers didn’t have before.
And yes, if you can process written Chinese fully and respond to it, you do understand it.
Understanding is when you follow instructions without any comprehension, got it 👍
You can tell a person to think about apples, and the person will think about apples.
You can tell an LLM ‘think about apples’ and the LLM will say ‘Okay’ but it won’t think about apples; it is only saying ‘okay’ because its training data suggests that is the most common response to someone asking someone else to think about apples. LLMs do not have an internal experience. They are statistical models.
You misunderstand. The outcome of asking an LLM to think about an apple is the token ‘Okay’. That is the sum total of its objective. It does not perform a facsimile of human thought; it performs an analysis of what the most likely next token would be, given what text existed before it. It imitates human output without any of the behavior or thought processes that lead up to that output in humans. There is no model of how the world works. There is no theory of mind. There is only how words are related to each other with no ‘understanding’. It’s very good at outputting reasonable text, and even drawing inferences based on word relations, but anthropomorphizing LLMs is a path that leads to exactly the sort of conclusion that the original comic is mocking.
Asking an LLM if it is alive does not cause the LLM to ponder the possibility of whether or not it is alive. It causes the LLM to output the response most similar to its training data, and nothing more. It is incapable of pondering its own existence, because that isn’t how it works.
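The same point in toy form (a made-up lookup table, obviously nothing like a real model’s scale, but the shape of the objective is the point):

```python
# Made-up frequency table standing in for "what usually follows this prompt
# in the training data" -- purely illustrative, not any real model.
most_common_continuations = {
    "think about apples": {"Okay": 0.7, "Sure": 0.2, "Done": 0.1},
    "are you alive?": {"As an AI, I am not alive": 0.6, "Yes": 0.3, "No": 0.1},
}

def respond(prompt):
    # The "answer" is just the most likely continuation of the prompt text.
    # Nothing is pondered, and nothing is remembered between calls.
    dist = most_common_continuations[prompt.lower()]
    return max(dist, key=dist.get)

print(respond("Think about apples"))  # "Okay" -- the token, not the thought
print(respond("Are you alive?"))      # the statistically common reply, no introspection
```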
Yes, our brains are immensely complex neural networks, but beyond that the structure is so ridiculously different that it’s closer to comparing apples to the concept of justice than comparing apples to oranges.
The technical term is the ELIZA effect.
In 1966, Professor Weizenbaum made a chatbot called ELIZA that essentially repeats what you say back in different terms.
He then noticed, by accident, that people kept convincing themselves it’s fucking conscious.
“I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
- Prof. Weizenbaum on ELIZA.
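For the curious, the whole trick is pattern matching and word swapping; something like this toy sketch (not Weizenbaum’s actual script, which was more elaborate but still just patterns):

```python
import re

# A tiny ELIZA-style reflector: match simple patterns and mirror the
# user's own words back as a question.
reflections = {"i": "you", "my": "your", "am": "are", "me": "you"}

rules = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),
]

def reflect(phrase):
    return " ".join(reflections.get(w, w) for w in phrase.split())

def eliza(user_input):
    text = user_input.lower().strip(".!?")
    for pattern, template in rules:
        m = re.match(pattern, text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(eliza("I am worried my computer is conscious"))
# -> "How long have you been worried your computer is conscious?"
```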
Love the meme but also hate the drivel that fills the comment sections on these types of things. People immediately start talking past each other. Half state unquantifiable assertions as fact (“…a computer doesn’t, like, know what an apple is maaan…”) and half pretend that making a sufficiently complex model of the human mind lets them ignore the Hard Problem of Consciousness (“…but, like, what if we just gave it a bigger context window…”).
It’s actually pretty fun to theorize if you ditch the tribalism. Stuff like the physical constraints of the human brain, what an “artificial mind” could be and what making one could mean practically/philosophically. There’s a lot of interesting research and analysis out there and it can help any of us grapple with the human condition.
But alas, we can’t have that. An LLM can be a semi-interesting toy to spark a discussion but everyone has some kind of Pavlovian reaction to the topic from the real world shit storm we live in.
You’re committing a different sin: failing to consider that I already played with these toys 6 years ago and I’m now bored with them.
Also, you’re on the fuckAI board, which is a place dedicated to a political position.