Looking for opinions about OpenMind OM1 as a source for reusable #Robot_Intelligence. I run three robots: two #GoPiGo3 robots and a #TurtleBot4 (WaLI - Wallfollower Looking for Intelligence).

Robots need a way to share and “inherit” knowledge and abilities. OM1 is an open-source, robot-domain-transferable “brain” built on a trained #LLM. What I don’t know is how to evaluate the usefulness of the model’s knowledge, or how much NLU-to-TurtleBot4 interface code I will have to write to use #OpenMind_OM1.
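For the NLU-to-robot glue, the shape of the problem is roughly: take whatever structured intent the language model emits and translate it into a TurtleBot4 motion command. A minimal sketch of that translation layer (pure Python, no ROS; the JSON intent schema and all names here are my own assumptions for illustration, not OM1's actual API):

```python
import json

# Hypothetical intent schema an LLM brain might emit -- NOT OM1's real format.
# Example: {"action": "move", "direction": "forward", "speed": 0.2}

# Map symbolic directions to signed (linear, angular) velocity factors,
# the same quantities a ROS 2 geometry_msgs/Twist message would carry.
DIRECTION_TO_TWIST = {
    "forward":  (1.0,  0.0),
    "backward": (-1.0, 0.0),
    "left":     (0.0,  1.0),
    "right":    (0.0, -1.0),
    "stop":     (0.0,  0.0),
}

def intent_to_cmd_vel(raw: str, max_speed: float = 0.3):
    """Translate an LLM intent (JSON string) into (linear m/s, angular rad/s).

    Unknown or malformed intents fall back to a stop command -- on a real
    robot the safe default matters more than the happy path.
    """
    try:
        intent = json.loads(raw)
        direction = intent.get("direction", "stop")
        speed = min(float(intent.get("speed", 0.0)), max_speed)
    except (json.JSONDecodeError, TypeError, ValueError):
        return (0.0, 0.0)
    lin_sign, ang_sign = DIRECTION_TO_TWIST.get(direction, (0.0, 0.0))
    return (lin_sign * speed, ang_sign * speed)
```

On the robot side that tuple would be published as a Twist on /cmd_vel; the open question for me is how much of this mapping OM1 already provides versus how much has to be hand-written per robot.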

So, can large language models play text games well? 🤔 Apparently, it takes a village (aka the Simons Foundation and a bunch of contributors) to figure out something a teenager already knows by instinct. 🎮 Spoiler alert: the answer is buried somewhere between a lot of numbers and acronyms that only a robot could love. 🤖
https://arxiv.org/abs/2304.02868 #large_language_models #text_games #AI_research #Simons_Foundation #gaming_insights #robot_intelligence #HackerNews #ngated
Can Large Language Models Play Text Games Well? Current State-of-the-Art and Open Questions

Large language models (LLMs) such as ChatGPT and GPT-4 have recently demonstrated a remarkable ability to communicate with human users. In this technical report, we take the initiative to investigate their capacity for playing text games, in which a player has to understand the environment and respond to situations by conversing with the game world. Our experiments show that ChatGPT performs competitively against all existing systems but still exhibits a low level of intelligence. Specifically, ChatGPT cannot construct a world model by playing the game or even by reading the game manual; it may fail to leverage the world knowledge it already has; and it cannot infer the goal of each step as the game progresses. Our results open up new research questions at the intersection of artificial intelligence, machine learning, and natural language processing.
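The setup the paper evaluates, an agent conversing turn-by-turn with a text game, boils down to a simple observation-to-action loop. A toy sketch with a keyword heuristic standing in for the LLM (the game and policy here are my own stand-ins, not the paper's benchmark):

```python
# Toy text-game loop: observation -> policy -> action, repeated until done.
# A keyword heuristic plays the role of the LLM; the paper's finding is that
# swapping in ChatGPT still leaves world modeling and goal inference weak.

def toy_game(action: str, state: dict):
    """A two-room toy game. Returns (observation, done)."""
    if action == "go east" and state["room"] == "hall":
        state["room"] = "kitchen"
        return "You are in the kitchen. A key glints on the table.", False
    if action == "take key" and state["room"] == "kitchen":
        return "Taken. You win!", True
    return "Nothing happens.", False

def stub_policy(observation: str) -> str:
    """Keyword heuristic standing in for the LLM's next-action choice."""
    if "key" in observation:
        return "take key"
    return "go east"

state = {"room": "hall"}
obs, done = "You are in the hall. An exit leads east.", False
transcript = []
while not done and len(transcript) < 10:
    action = stub_policy(obs)
    obs, done = toy_game(action, state)
    transcript.append((action, obs))
```

The hard part the paper measures is exactly what this stub hides: choosing the next action from raw text without a hand-coded world model.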
