Seriously the sheer amount of people that equate coherent speech with sentience is mind boggling.

All jokes aside, I have heard some decently educated technical people say “yeah, it’s really creepy that it put a random laugh in what it said” or “it broke the 4th wall when talking”… it’s fucking programmed to do that and you just walked right into it.

And people are programmed to talk like that too. It’s just a matter of scale.

The difference is knowledge. You know what an apple is. An LLM does not. Its training data associates the word apple with the words red, green, pie, and doctor.

The model then uses a random number generator to mix those words up a bit and checks whether the result looks like the training data; if it does, the model spits out a sequence of words that may or may not be a sentence, depending on the size and quality of the training data.

At no point is any actual meaning associated with any of the words. The model is just trying to fit different shaped blocks through different shaped holes, and sometimes everything goes through the square hole, and you get hallucinations.

Our brains just get signals coming in from our nerves that we learn to associate with a concept of the apple. We have years of such training data, we use more than words to tokenize thoughts, and we have much more sophisticated state / memory; but it’s essentially the same thing, just much much more complex. Our brains produce output that is consistent with their internal models and constantly use feedback to improve those models.

but it’s essentially the same thing, just much much more complex

If you say that all your statements and beliefs are a slurry of weighted averages depending on how often you’ve seen something without any thought or analysis involved, I will believe you 🤷‍♂️

There’s no reason to think that the thought and analysis you perceive isn’t based on such complex historical weighted averages in your brain. In fact, since we do know the basic fundamentals of how brains work, it would seem that’s exactly what’s happening.
What’s funny is people thinking their brain is anything magically different than an organic computer.

In fact, since we do know the basic fundamentals of how brains work, it would seem that’s exactly what’s happening.

I encourage you to try to find and cite any reputable neuroscientist that believes we can even quantify what thought is, much less believes both A) we ‘know the basic fundamentals of how brains work’ and B) it’s just like an LLM.

Your argument isn’t a line of reasoning invented by neuroscientists, it’s one invented by people who need to sell more AI processors. I know which group I think has a better handle on the brain.

I never said it’s directly like an LLM. That’s a very specific form. The brain has many different structures, and the neural interconnections we can map have been shown to be a form of convolution in much the same way that many AI systems use (not by coincidence).

Scientists generally avoid metaphysical subjects like consciousness because they’re inherently unprovable. We can look at the results of processing/thought and quantify the complexity and accuracy. We do this for children at various ages and can see how they learn to think with increasing complexity. We can do this for AI systems too. The leaps we’ve seen over the last few years, as the computational power of computers has reached some threshold, show emergent abilities that only a decade ago were thought to be impossible.

Since we can never know anyone else’s experience, we can only go on input/output. And so if it looks like intelligence, then it is intelligence. The concept of ‘thought’ in this context is only semantics. There is, so far, nothing to suggest that magic is needed for our brains to think; it’s just a physical process. So as we add more complexity and different structures to AI systems, there’s no reason to think we can’t make them do the same as our brains, or more.
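For what the comment means by convolution: it is just a sliding weighted sum, the same basic operation whether it shows up in a mapped neural circuit or a convolutional network layer. A minimal 1-D version (names and sample values are mine, for illustration only):

```python
# Minimal 1-D convolution (technically cross-correlation, as in most
# ML libraries): slide the kernel along the signal and take weighted sums.
def conv1d(signal, kernel):
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# Kernel [1, -1] computes differences between neighbors,
# a crude "edge detector" over the signal.
print(conv1d([1, 2, 3, 4], [1, -1]))  # -> [-1, -1, -1]
```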

show emergent abilities

Immediate mark of someone who is deceiving or has been deceived.

And so if it looks like intelligence, then it is intelligence

Wow, you mean I can understand Chinese?

Chinese room - Wikipedia

If you don’t see the new things that computers can do with AI, then you are being purposely ignorant. There’s tons of slop, along with useful capabilities; but even that slop generation is clearly a new ability computers didn’t have before.

And yes, if you can process written Chinese fully and respond to it, you do understand it.

And yes, if you can process written Chinese fully and respond to it, you do understand it.

Understanding is when you follow instructions without any comprehension, got it 👍

You have to understand instructions on some level to be able to follow them. 👍🏻