@codinghorror The question is not what the 200 lines of code do; they are merely like the handful of neurotransmitters in our biology. The question is what's going on in the billions of model parameters, which seem to encode not only word patterns but also some algorithms for behavior.
The issue of LLMs needing far more training examples than humans has been addressed by pointing to the work evolution already did in shaping humans, work we have to brute-force for LLMs. 2/
@codinghorror And yes, there *are* issues with the lack of grounding in the physical world.
I don’t think LLMs are synthetic humans or have emotions, but once we run these things with persistent state, in long continuous loops, I expect the results to start resembling humans a lot more.
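To make the "persistent state plus continuous loop" idea concrete, here is a minimal sketch in Python. It is purely illustrative: llm_generate is a hypothetical placeholder for any model call, not a real API, and run_agent is an invented name for the loop itself.

def llm_generate(prompt: str) -> str:
    """Placeholder for a real model call (local or hosted LLM)."""
    return f"(model output for: {prompt[-40:]!r})"

def run_agent(goal: str, steps: int = 3) -> list[str]:
    memory: list[str] = []      # persistent state carried across turns
    observation = goal
    for _ in range(steps):      # the long continuous loop
        prompt = "\n".join(memory + [observation])
        action = llm_generate(prompt)
        memory.append(f"obs: {observation}")
        memory.append(f"act: {action}")
        observation = action    # feed the output back in as new input
    return memory

if __name__ == "__main__":
    for line in run_agent("summarize today's notes"):
        print(line)

The point of the sketch is only the shape: outputs are appended to memory and fed back in, so behavior can accumulate across turns instead of resetting with each prompt.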