"The brain represents information probabilistically, by coding and computing with
probability density functions, or approximations to probability density functions"

~ Knill and Pouget, "The Bayesian Brain", Trends in Neurosciences, 2004

For all those saying #ChatGPT is just a mere parlor trick, at least some neuroscientists seem to think that we ourselves are performing a very similar parlor trick, except on a massively complicated scale.

#AI #artificialintelligence #philosophy #neuroscience

@rachelwilliams

As laymen often do, they confuse scientists' models of how the natural world works with how the natural world actually works; it doesn't help that scientists often make the same mistake.

Models are NOT reality, just close approximations that are wildly inaccurate where it matters.

#philosophy #cognition #neuroscience

@rachelwilliams

Forgive me if this is a basic question, but do Bayesian brain approaches allow for any elements of determinism within the model?

If not, should we consider the pretty systems of differential equations used to model action potentials unfruitful?

@rachelwilliams The main problem I have with these kinds of models is that they scale terribly. My understanding is that they scale at power 11, meaning that doubling performance would require about 2,000 times the compute. If that is remotely true, it would take about two decades for every doubling of performance. Considering that what the models are capable of is improving only at a moderate pace, even with ever-greater resources, it doesn't seem unreasonable that it might be true.
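A back-of-the-envelope check of the arithmetic behind that claim. Both the power-11 exponent and the compute-growth rate are the poster's assumptions, not established figures; this only shows the numbers are internally consistent:

```python
import math

# Assumed power law: required compute grows as performance ** 11
# (the exponent is the poster's claim, not an established figure).
exponent = 11

# Doubling performance then multiplies the required compute by 2 ** 11.
compute_factor = 2 ** exponent
print(compute_factor)  # 2048, i.e. roughly "2000 times the compute"

# If available compute itself doubles every 1.5 to 2 years (a hypothetical
# Moore's-law-style growth rate), covering those 11 compute doublings takes:
doublings_needed = math.log2(compute_factor)  # 11.0
years_low = doublings_needed * 1.5   # 16.5 years
years_high = doublings_needed * 2.0  # 22.0 years
print(years_low, years_high)  # consistent with "about two decades"
```

Under those assumptions, one performance doubling needs 11 compute doublings, which at 1.5 to 2 years each lands in the 16 to 22 year range the post describes.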
@ekg do you think that, theoretically, quantum computers could solve the compute scaling problem?

@rachelwilliams My current understanding is that quantum computers remove at best one power; that would be power ten, or equivalent to a doubling every decade or so. Quantum computers have a huge effect at power two, where they can make the scaling linear.

Regardless, it's a huge engineering effort.

@rachelwilliams
That's what I found fascinating when I saw #chatGPT attempting a mathematical proof: https://tech.lgbt/@brocolie/109485627951984509

It failed, but claimed that it had succeeded. It went step by step, and each step was correct, but it then misinterpreted the (useless) result of its deductions.

As a former math student who had to hand in proofs regularly, it reminded me of times when I knew what I had to prove, but didn't know how. 1/2


@rachelwilliams

The point is, as you have said, that you may trick yourself, and others who don't have substantial knowledge, using these techniques. And so #chatGPT is like a mirror held up to what we as humans find convincing, at least at first glance, about various forms of expression: essays, poems, even proofs. The same goes for AI that produces pictures.

This could be a source for introspection: what biases do we bring to reading a text that make us accept a claim?

@rachelwilliams I agree with your statement.

But I also agree with the people who try to explain that it is "only" a statistical model that predicts the next couple of words given a sequence of previous words.

Some people seem to treat it as a knowledge database, which I don't find ideal.

I like to use #chatgpt, but I would not trust its answers.

@lokimidgard Touché. But I guess the line of reasoning I am pushing is to get people to open up to the possibility that we humans are also "only" implementing a statistical model, albeit of a degree of complexity so mind-boggling as to appear as the "magic" stuff of meaning, intentionality, etc. Also, I daresay humans might be slightly anthropocentric when it comes to defining the nature of intelligence. Regardless of the answer, none of this is science. It's philosophy.

@rachelwilliams And I totally agree; someone could say we are 'just' a biological machine.

So if something like a soul exists, which I personally think is true, whatever it may be, I see no reason why a machine shouldn't have one. But I think GPT isn't there (yet).

But I wouldn't say we are anthropocentric only when it comes to intelligence. We are anthropocentric about anything. Anything that's positive, anyway; apparently there are still people who think humans can't change the climate…