Before I present my counter thought experiment, I want to be upfront and declare that I don’t know whether a digital machine can ever have a mind or consciousness, but it seems to me that it can have understanding.
https://www.youtube.com/watch?v=tBE06SdgzwM
Chinese Room, Version 2.0
In this version of the experiment:
1. The man in the room starts with an empty rule book.
2. When information I (a stream of binary data) comes in from person A on the left side of the box, the man in the room (R) generates some random stream of binary data O and passes it to person B on the right side of the box.
3. Persons A and B discuss I and O and decide whether O was the correct response to I.
4. If the answer was yes, they let R know his response was correct, and R adds an entry to his rule book recording that his conversion of I to O was correct.
5. If the answer was no, they let R know his response was incorrect, and R adds an entry to his rule book recording that his conversion of I to O was incorrect.
6. Persons A and B can change for each run of this process.
7. GOTO 2.

This process can continue for as long as necessary, say 2^10^100 times.
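The loop above can be sketched as a toy simulation. Everything here is my own illustration: the names (`rule_book`, `judge`, `respond`) and especially the trivial "echo" judging rule, which merely stands in for persons A and B agreeing on correctness.

```python
import random

def judge(i, o):
    # Stand-in for persons A and B conferring; here "correct"
    # arbitrarily means O echoes I. Any fixed criterion would do.
    return o == i

def respond(n_bits):
    # R knows nothing at first, so he emits a random bit string.
    return "".join(random.choice("01") for _ in range(n_bits))

rule_book = {}  # (I, O) -> count of "correct" verdicts

for step in range(10_000):
    i = "".join(random.choice("01") for _ in range(4))  # input from person A
    o = respond(4)                                      # R's random guess
    if judge(i, o):                                     # A and B confer
        rule_book[(i, o)] = rule_book.get((i, o), 0) + 1

# After many runs, R can look up a known-correct O for any seen I.
known_inputs = {i for (i, o) in rule_book}
```

With enough iterations, every possible I accumulates at least one confirmed (I, O) pair, which is the sense in which R's rule book eventually covers the whole language.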
Somewhere during this process, the rule book of R becomes large enough that R, who didn’t understand Chinese at first, now understands Chinese and can always return the correct O for every I.
In fact, R can associate a probability with each entry in his rule book, and either randomly choose the most probable answer as O or respond to each I with multiple correct (i.e. high-probability) answers/interpretations as O.
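That probability-weighted selection could be sketched like this; the tallies below are invented numbers standing in for accumulated correct/incorrect feedback, not anything from the original experiment.

```python
import random
from collections import defaultdict

# tallies[I][O] = how many times O was judged a correct response to I.
# These counts are made up purely for illustration.
tallies = defaultdict(dict)
tallies["01"] = {"10": 9, "11": 1}

def respond(i):
    # Convert tallies for input i into probabilities and sample,
    # so frequently confirmed answers are returned most often.
    options = tallies[i]
    total = sum(options.values())
    outputs = list(options)
    weights = [options[o] / total for o in outputs]
    return random.choices(outputs, weights=weights)[0]

answer = respond("01")  # usually "10", occasionally "11"
```

Sampling instead of always taking the single most probable entry is what lets R give several valid interpretations of the same I, as described above.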
In short, this is analogous to reinforcement learning (RL).
Hence, given enough time and memory capacity, digital machines can eventually understand things: that is, the semantic meaning of those things, of those symbols.
-BSZ
https://masterboy.vivaldi.net/2026/02/02/a-counter-tought-experiment-to-john-searles-chinese-room-thought-experiment/
#AI #Consciousness #Mind #Understanding