Bcachefs creator claims his custom LLM is 'fully conscious'
https://piefed.social/c/linux/p/1815630/bcachefs-creator-claims-his-custom-llm-is-fully-conscious
Don't LLMs generally already fail at the learning stage of intelligence?

Once trained, they never learn again. It just sometimes seems like they are learning, as long as the learned thing is still within their "context window", i.e. still within their prompt.

On another note, how would we even evaluate actual intelligence in LLMs? Especially remembering that all of the slop-companies would immediately try to game the test.
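A toy sketch of the "it only seems to learn" point, just to be concrete (the `frozen_model` here is a made-up stand-in, not any real API): the only thing that persists between turns is the growing prompt; the trained network itself never changes.

```python
# Toy stand-in, not a real LLM API: the trained network is frozen,
# the only thing that changes between turns is the prompt it gets re-fed.
def frozen_model(prompt: str) -> str:
    # pretend this maps the current prompt to a reply; no weights change here
    return f"(reply based on {len(prompt)} chars of prompt)"

history = ""  # the "context window" -- the only place any "learning" lives
for user_msg in ["my name is Ada", "what is my name?"]:
    history += f"\nUser: {user_msg}"
    reply = frozen_model(history)        # same fixed model on every call
    history += f"\nAssistant: {reply}"   # "remembering" = appending to the prompt
    print(reply)
```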
But these are still… prompt extensions (not sure if there is a technical term for it), right?

That's a neat workaround for context windows, but at the core, imho, any intelligence must be able to learn, and for a neural net to learn it must change the network, i.e. its weights or connections.
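To make the weights point concrete, a minimal toy sketch (PyTorch, with a tiny linear layer standing in for "the network", not an actual LLM): forward passes alone leave the parameters untouched, while an actual learning step changes them.

```python
# Toy PyTorch sketch: "learning" in the neural-net sense means the parameters change.
import torch

model = torch.nn.Linear(4, 1)              # stand-in for "the network"
before = model.weight.detach().clone()

# Prompt-extension style use: forward passes only, weights stay frozen.
with torch.no_grad():
    _ = model(torch.randn(1, 4))
print(torch.equal(before, model.weight))   # True -- nothing was learned

# Actual learning: a gradient step that changes the network.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss = (model(torch.randn(1, 4)) - 1.0).pow(2).mean()
loss.backward()
opt.step()
print(torch.equal(before, model.weight))   # False -- the weights moved
```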