Ironically, through playing with #ChatGPT, I recently learned of a branch of philosophy called "The Philosophy of Action". I didn't realize that this is what I've been noodling on ever since reading Dennett's "Elbow Room".

Here are two books I've picked up to learn more: "Philosophy of Action: A Contemporary Introduction" by Sarah K. Paul, and "Philosophy of Action: An Anthology" edited by Dancy and Sandis.

We've been hit hard by #LLMs and their strange implications - exciting, scary, benign, overhyped. Maybe what we are grappling with is genuinely new. "Intelligence Without Agency", perhaps: a brain in a jar that we've never gotten to play with before outside our imaginations.
#babyagi #autogpt and the like try to mix in event loops and ways to interact with the world, edging toward more of the ingredients of Agency. Mixing in rule databases, logic engines, or things like this tree-of-thought paper (https://arxiv.org/abs/2305.10601) takes further steps - see the sketch after the abstract below.
Tree of Thoughts: Deliberate Problem Solving with Large Language Models

Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of Thought approach to prompting language models, and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. Our experiments show that ToT significantly enhances language models' problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%. Code repo with all prompts: https://github.com/princeton-nlp/tree-of-thought-llm.

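To make that concrete, here's roughly the shape of the ToT loop as I read the abstract - my own toy Python, not the paper's code (their actual prompts live in the linked repo). `call_llm`, `propose_thoughts`, and `score_thought` are hypothetical stand-ins:

```python
import heapq

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call."""
    raise NotImplementedError("wire up your favorite LLM client here")

def propose_thoughts(state: str, k: int = 3) -> list[str]:
    # Ask the model for k candidate next steps given the partial solution.
    out = call_llm(f"Partial solution:\n{state}\n\nPropose {k} next steps, one per line.")
    return out.splitlines()[:k]

def score_thought(state: str) -> float:
    # Have the model self-evaluate how promising a partial solution is (0..1).
    out = call_llm(f"On a scale of 0 to 1, how promising is this partial solution?\n{state}")
    return float(out.strip())

def tree_of_thoughts(problem: str, depth: int = 3, beam: int = 5) -> str:
    # Beam search over "thoughts" instead of one left-to-right decode.
    frontier = [problem]
    for _ in range(depth):
        scored = []
        for state in frontier:
            for thought in propose_thoughts(state):
                candidate = state + "\n" + thought
                scored.append((score_thought(candidate), candidate))
        # Keep only the highest-scoring partial solutions.
        frontier = [s for _, s in heapq.nlargest(beam, scored)]
    return frontier[0]
```

The point is the shape: propose several candidate thoughts, score them, keep a beam, repeat - deliberate search rather than a single token-by-token chain.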

@awwaiid Recall that an LLM is just a probability table.

There is no intelligence.
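(That "probability table" framing in miniature - a toy lookup for illustration, nothing like a real transformer, which computes the distribution with a neural net rather than storing a literal table:)

```python
import random

# Toy "probability table": context -> distribution over next tokens.
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
}

def sample_next(context):
    probs = next_token_probs[context]
    # Sample a token in proportion to its probability.
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(sample_next(("the", "cat")))  # e.g. "sat"
```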

Adding an expert-system filter (or several) to the mix might help. I think that's what we called AI before LLMs came along.
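Something like this, maybe - a toy rule-based veto over model output, not any particular engine:

```python
# Toy expert-system filter: hard rules that flag model output before it ships.
RULES = [
    (lambda text: "2 + 2 = 5" in text, "arithmetic error"),
    (lambda text: not text.strip(), "empty answer"),
]

def filter_output(text: str) -> list[str]:
    # Return the reasons any rule fired; an empty list means the text passes.
    return [reason for check, reason in RULES if check(text)]

print(filter_output("Therefore 2 + 2 = 5."))  # ['arithmetic error']
```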

I think intelligence implies learning.

I'm not sure how much learning LLMs can do.

AlphaGo had some very specific "learning", but it just "taught" itself.

@awwaiid Aye, my reply to the first post is basically this :-)

Fact-checking an LLM is basically what is needed.

I don't think an LLM can even guess what domain of knowledge to access.

I'm not sure an expert system can. Maybe a learning model can.