The key weakness in AI agents is that they're a lie. They don't work. They just don't fuckin' work. You can't set a hallucination engine to work doing tasks. It's pants-on-head stupid. The hype pretends this isn't the case and hypothesises a fabulous future where they work *at all*. This is a lie.

A useful model for "AI agents" is that they're the current excuse meme for AI. They're not a thing that works at all, now or in the fabulous future. But they're *such* good material for hypecrafting. No sausage at all, but *my god* that sizzle.

@davidgerard Maybe I'm misunderstanding something, but from what I understood, it's basically the same LLM stuff but in the background?
Basically, if you roll the dice enough times, you might get something that passes all the unit tests?
(And burn a whole bunch of tokens in the process...)
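That dice-rolling loop is roughly this shape — a hypothetical sketch, where `generate_code`, `passes_tests`, and the token counting are all stand-ins, not any real LLM API:

```python
import random

def generate_code(prompt: str) -> str:
    # Stand-in for an LLM call: sometimes right, sometimes hallucinated.
    # (Hypothetical; a real call would cost actual tokens and money.)
    return random.choice([
        "def add(a, b): return a - b",   # plausible-looking but wrong
        "def add(a, b): return a + b",   # the one that passes
    ])

def passes_tests(src: str) -> bool:
    # Run the proposed code and apply the "unit test".
    ns = {}
    exec(src, ns)
    return ns["add"](2, 3) == 5

def roll_until_green(prompt: str, max_tries: int = 50):
    # Keep rolling until the tests pass, tallying the tokens burned.
    tokens_burned = 0
    for attempt in range(1, max_tries + 1):
        src = generate_code(prompt)
        tokens_burned += len(src.split())  # crude proxy for token spend
        if passes_tests(src):
            return attempt, tokens_burned
    return None, tokens_burned
```

Note the cost structure: every failed roll still burns tokens, so the spend scales with how often the model is wrong, not with how hard the problem is.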

@art_codesmith @davidgerard

It's when you give the LLM access to APIs.

So instead of asking 'propose some code to do x using API y', it gets to run the proposed code directly.
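The loop described above looks roughly like this — a minimal sketch, where `fake_model` and the `API` table are hypothetical stand-ins for the LLM and whatever endpoints it's been handed, not any real agent framework:

```python
def fake_model(history):
    # Stand-in for the LLM: picks the next action from the transcript so far.
    # Returns ("call", api_name, kwargs) or ("done", final_answer, None).
    if not history:
        return ("call", "get_balance", {})
    return ("done", history[-1], None)

# The API surface the agent is allowed to hit directly.
API = {"get_balance": lambda: 42}

def run_agent(max_steps: int = 10):
    history = []
    for _ in range(max_steps):
        kind, name, args = fake_model(history)
        if kind == "done":
            return name
        result = API[name](**args)  # executed directly, no human review
        history.append(result)
    raise RuntimeError("agent never finished")
```

The whole "agent" pitch lives in that one commented line: the model's proposed call is executed against the real API, and the result is fed straight back in for the next round.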

@Zamfr @davidgerard I feel like having a hallucination machine do anything with an API without human oversight is significantly less than ideal.
@art_codesmith @davidgerard Yes indeed, but if you call it 'agent' it will be much better, of course.