“Humans actively maintain imperfect but reliable world models. LLMs don’t, and that has consequences,” Marcus said. “They can’t be updated incrementally by giving them new facts. They typically need to be retrained to incorporate new knowledge.” https://buff.ly/3FYidbo
