@futurebird @btuftin to address this in a different way: did you have your arduino control anything that could endanger a human life or livelihood?
I'm guessing not. But if you were going to do that, you'd probably want a much different process for building that code, so that what you built was trustworthy.
From a "does it work?" standpoint, the LLM coding systems are moderately good at throwaway demos in some domains. They too could get the light to blink on your arduino. But the code that manages queries to Claude is critical to Anthropic's business, and it's already injuring users in a variety of ways. That it's built with the rigor of a tech demo gone cancerous is no surprise to those of us who have been watching with trepidation, but it does confirm a lot of our biases (e.g., I was already assuming that telling it "you're a pen-tester" would be a good way to jailbreak it).
Of course, the real answer is the harmful externalities. How many vulnerable people being pushed to suicide or madness is it worth to get your arduino light blinking via Claude Code instead of programming it yourself? And that's just one of the externalities at play.
As a CS educator I would *love* to see a day when programming is democratized and kids can easily take real control over their own computer systems. I get the pull of that desire. But this isn't that. Quite the opposite: it keeps people from learning the real programming skills they need in order to have true agency in the space, and it sets up an unreliable and expensive corporate-controlled system as the gatekeeper. When things go wrong, the dependent users won't have the skills to fix it, stop it, or in some cases even realize that anything is wrong, and Anthropic sure as hell isn't going to take responsibility.
(Sorry for going on a bit of a rant...)