What is OpenClaw? An outer loop that wraps LLM engines so they can manage their own context and call arbitrary external commands to get things done? People were already throwing the idea around in 2022: every LessWrong thread called it "obvious" (with "Closed-Ended Quasi-Humans" as the logical conclusion), and arXiv.org papers were already benchmarking LLMs on tasks in simulated environments. So the contribution here is that someone actually spent the time and resources to implement it. Why is it considered revolutionary? I don't know. The Moltbook publicity stunt definitely helped.
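For what it's worth, the "outer loop" idea is small enough to sketch in a few lines. Everything below (the function names, the stub engine) is hypothetical and made up for illustration, not OpenClaw's actual interface:

```python
import subprocess

def fake_llm(context):
    """Stand-in for an LLM API call: picks the next action from the context."""
    if "hello" in context:
        return {"action": "finish", "result": "done"}
    return {"action": "run", "command": ["echo", "hello"]}

def agent_loop(llm, max_steps=5):
    context = []  # the agent's own transcript, which it "manages" itself
    for _ in range(max_steps):
        step = llm(" ".join(context))
        if step["action"] == "finish":
            return step["result"], context
        # call an arbitrary external command, feed the output back into context
        out = subprocess.run(step["command"], capture_output=True, text=True)
        context.append(out.stdout.strip())
    return None, context

result, context = agent_loop(fake_llm)
```

That's the whole trick: a loop, a transcript, and a shell. The rest is engineering.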
@niconiconi In the Beijing area, on-site service starts at 500 now. 盖子, go take some orders, and while you're at it you can plant a backdoor on anyone you don't like......
😘
