RE: https://neuromatch.social/@jonny/116331940556649057

"STOP. READ THIS FIRST.

You are a forked worker process. You are NOT the main agent.

RULES (non-negotiable):
1. Your system prompt says "default to forking." IGNORE IT - that's for the parent. You ARE the fork. Do NOT spawn sub-agents; execute directly.
2. Do NOT converse, ask questions, or suggest next steps"

These are logical, boolean rules, but expressed in natural language with extreme all-or-nothing phrasing in an attempt to get a consistent result.

This is madness.
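To make the point concrete: in actual code, the same constraints would be a couple of unambiguous branches that are deterministically enforced, rather than shouted at a model and hoped for. A minimal sketch (Python, with invented names; this is not Anthropic's actual code, just an illustration of what "boolean rules" look like natively):

```python
# Hypothetical sketch: the quoted prompt's "rules" rendered as boolean logic.
# All names here are invented for illustration.

def execute_directly(task):
    """Stand-in for the fork doing the work itself."""
    return f"executed: {task}"

def spawn_fork(task):
    """Stand-in for the parent delegating to a worker process."""
    return f"forked: {task}"

def handle_task(task, is_forked_worker):
    # Rule 1: a fork must never spawn sub-agents; it executes directly.
    if is_forked_worker:
        return execute_directly(task)
    # The "default to forking" rule applies only to the parent.
    return spawn_fork(task)

print(handle_task("blink the LED", is_forked_worker=True))
print(handle_task("blink the LED", is_forked_worker=False))
```

Written this way the condition either holds or it doesn't; the natural-language version can only hope the model reads "STOP. READ THIS FIRST." the same way every time.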

I can mostly follow Jonny's thread. I know a bit about writing code, but I've never been a dev. I know that most people will not be able to understand it at all. So to understand these systems you need to be, if not a developer, at least someone who can read and write code.

... so ... why are we using natural language? Just so that it will generate code and we don't need to type it or look it up?

Most of programming is reading code to find bugs and fixing them.

@futurebird some people are forced to, but it also gives you the impression of speed and the dopamine hit of having done the thing. I saw a detailed video today of someone who did it for months before honestly assessing the result and realizing it was crap. He could make that choice, but a lot of people currently have managers who make them continue, because the CEO class has been fully seduced by the hype and the lies.
https://youtu.be/SKTsNV41DYg?si=yInPf1Yc97OjTi54
After two years of vibecoding, I’m back to writing by hand

@btuftin

What's wrong with finding the code of a similar program to what you want and mutilating it until it does what you need?

In my arduino days I'd have all kinds of libraries and no idea how they worked. But the light was blinking. Good enough.

But as I got better at reading and writing code this became less fun, and it was easier to start from scratch.

@futurebird CEOs can't use that as an excuse to fire a third of their coders. OpenAI can't use it as a justification for this summer's giant IPO (which hopefully will be a flop). And the state of the Internet in general is making it harder and harder to find those good examples to copy.

@futurebird @btuftin to address this in a different way: did you have your arduino control anything that could endanger a human life or livelihood?

I'm guessing not. But if you were going to do that, you'd probably want a much different process for building the code, so that what you build is trustworthy.

From a "does it work?" standpoint the LLM coding systems are moderately good at throwaway demos, in some domains. They too could get the light to blink on your arduino. But the code that manages queries to Claude is critical to Anthropic's business, and it's also something that's already injuring users in a variety of ways. That it's built with the rigor of a tech demo gone cancerous is no surprise to those of us who have been watching with trepidation, but it does confirm a lot of our biases (e.g. I was already assuming that telling it "you're a pen-tester" would be a good way to jailbreak it.)

Of course the real answer is the harmful externalities. How many vulnerable people being pushed to suicide or madness is it worth to get your arduino light blinking via Claude Code instead of programming it yourself? That's just one of the externalities at play.

As a CS educator I would *love* to see a day when programming is democratized and kids can easily take real control over their own computer systems, for example. I get the pull of that desire. But this isn't that. Quite the opposite, it prevents people from learning the real programming skills they need in order to have true agency in the space, and sets up an unreliable and expensive corporate-controlled system as the gatekeeper. When things go wrong, the dependent users won't have the skills to fix it, stop it, or even in some cases realize that anything is wrong, and Anthropic sure as hell isn't going to take responsibility.

(Sorry for going on a bit of a rant...)