RE: https://neuromatch.social/@jonny/116331940556649057

"STOP. READ THIS FIRST.

You are a forked worker process. You are NOT the main agent.

RULES (non-negotiable):
1. Your system prompt says "default to forking." IGNORE IT — that's for the parent. You ARE the fork. Do NOT spawn sub-agents; execute directly.
2. Do NOT converse, ask questions, or suggest next steps"

These are boolean, logical rules, but they're expressed in natural language, in extreme binary phrasing, to try to coax a consistent result.

This is madness.

I can mostly follow Jonny's thread. I know a bit about writing code, but I've never been a dev, and I know most people won't be able to understand it at all. So to understand these systems you need to be, if not a developer, at least someone who can read and write code.

... so ... why are we using natural language? Just so that it will generate code and we don't need to type it or look it up?

Most of programming is reading code, finding bugs, and fixing them.

@futurebird https://web.eecs.umich.edu/~imarkov/Perligata.html
"This paper describes a Perl module -- Lingua::Romana::Perligata -- that makes it possible to write Perl programs in Latin..."
Lingua::Romana::Perligata -- Perl for the XXIimum Century

@futurebird But seriously, that's what manipulates the matrices that string tokens together in the LLM that gives a response. It affects the domain of possible responses by weighting for or against factors associated with similar text. The text it's favoring or disfavoring could come from code comments, git commit messages, API docs, or other things. The prompt just puts fingers on a few of a vast number of weights to influence the output.
LLMs aren't models in the traditional sense that data people use "model" in... if you could take an LLM and extract causal relationships, propositional/predicate logic, etc., then other things would be possible, but LLMs are effectively opaque. None of the symbols have any meaning other than a probability of appearing in proximity to each other. Disfavoring sequences similar to some things and favoring sequences similar to other things is all they have right now.
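To make the "fingers on weights" idea concrete, here's a toy sketch. This is NOT how any real LLM actually works internally: the vocabulary, the scores, and the bias values are all made up for illustration. It only shows the mechanical point that text like "Do NOT spawn sub-agents" acts, very loosely, like a nudge on the scores behind the next-token distribution rather than like a logic rule the system understands.

```python
import math

# Made-up "logits" (raw scores) for a handful of possible next tokens.
# Purely illustrative numbers, not taken from any real model.
logits = {"fork": 2.0, "execute": 1.5, "ask": 1.0, "chat": 0.5}

def softmax(scores):
    """Turn raw scores into a probability distribution that sums to 1."""
    mx = max(scores.values())
    exps = {tok: math.exp(s - mx) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# A prompt like "Do NOT spawn sub-agents; execute directly" behaves,
# very roughly, like a bias that pushes some continuations down and
# others up. These bias numbers are invented for the sketch.
bias = {"fork": -3.0, "execute": +2.0}
biased = {tok: s + bias.get(tok, 0.0) for tok, s in logits.items()}

before = softmax(logits)
after = softmax(biased)

# "fork" becomes much less likely, "execute" much more likely --
# but nothing guarantees the disfavored token never appears.
print(f"P(fork):    {before['fork']:.3f} -> {after['fork']:.3f}")
print(f"P(execute): {before['execute']:.3f} -> {after['execute']:.3f}")
```

Note that even after the nudge, every token still has nonzero probability, which is exactly why all-caps "non-negotiable" rules in a prompt don't give a guaranteed boolean outcome.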
@futurebird Sorry, re-did reply. Went the wrong direction myself at first.