The key weakness in AI agents is that they're a lie. They don't work. They just don't fuckin' work. You can't set a hallucination engine to work doing tasks. It's pants on head stupid. The hype pretends this isn't the case and hypothesises a fabulous future where they work *at all*. This is a lie.

A useful model for "AI agents" is that they're the current excuse meme for AI. They're not a thing that works at all, now or in the fabulous future. But they're *such* good material for hypecrafting. No sausage at all, but *my god* that sizzle.

@davidgerard I don’t know what you base this on. It most definitely works for some things, at least some of the time.

For example, someone I know needed to do some stuff with an Arduino to make it show a pretty wave pattern using unevenly distributed LED lights. This person was a crafter, not a coder. However, by uploading a hand-drawn picture of where the LEDs were placed, the LLM generated a web-based simulator with sliders to tweak parameters. Then, when he was happy with the result after tuning the sliders, it generated code for the Arduino that compiled and ran perfectly. First try.
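The actual generated sketch wasn't shared, but the underlying idea is simple enough: if you know each LED's position, a travelling sine wave is just a mapping from distance to brightness. A minimal Python sketch of that idea, where the positions, wavelength, and speed are invented placeholders standing in for the simulator's sliders, not the real project's values:

```python
import math

# Hypothetical, unevenly spaced LED positions (e.g. centimetres along the piece).
LED_POSITIONS = [0.0, 1.3, 2.1, 4.8, 5.0, 7.7, 9.2]

def wave_brightness(positions, t, wavelength=4.0, speed=2.0):
    """Brightness 0..255 per LED: a sine wave travelling across the layout.

    wavelength and speed play the role of the web simulator's sliders.
    """
    k = 2 * math.pi / wavelength  # spatial frequency of the wave
    return [
        int(127.5 * (1 + math.sin(k * x - speed * t)))  # map [-1, 1] -> [0, 255]
        for x in positions
    ]

# On a real Arduino the same formula would run in loop() and feed analogWrite();
# here we just print one frame of brightness values.
print(wave_brightness(LED_POSITIONS, t=0.0))
```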

It might have been an incredible amount of luck, but this non-technical person got his art project to work without needing to learn anything about code.

@gigantos @davidgerard "works for some things, at least some of the time" is NOT the way these LLM tools are being pitched. I think there would be much less of a backlash if OpenAI and co were like "hey, here's an occasionally useful tool for generating text, and here are the use cases it's actually good at," rather than "fire all your employees and replace them with AI, who cares if it's fit for purpose!"

@Avner @gigantos @davidgerard If they were honest, there would be no money to make and no bubble to hype.