The key weakness in AI agents is that they're a lie. They don't work. They just don't fuckin' work. You can't set a hallucination engine to work doing tasks. It's pants on head stupid. The hype pretends this isn't the case and hypothesises a fabulous future where they work *at all*. This is a lie.

A useful model for "AI agents" is that they're the current excuse meme for AI. They're not a thing that works at all, now or in the fabulous future. But they're *such* good material for hypecrafting. No sausage at all, but *my god* that sizzle.

@davidgerard "You can't set a hallucination engine to work doing tasks."

You can if your goal is to produce a lot of material that is not correct, or it doesn't matter if the material is correct.

I think that is what people tend to miss about the drive to get AI into the world. The people pushing it don't care if it's accurate; it might even be better for them if it's not. They just want a lot of material that looks passable to some people. They want filler and propaganda and misinformation.

@distrowatch @davidgerard That's the high end of the spectrum of grifters here. On the low end, they're high on their own supply, and actually believe the bullshit.

@jmax @davidgerard The people the hype worked on probably do believe, unfortunately.

The AI industry seems to work in parallel with the social media and entertainment industries. This conversation brings to mind a quote from an article about Spotify: "Its goal isn't to help you discover new music, its goal is simply to keep you listening for as long as possible. It serves up the safest songs possible to keep you from pressing stop."

@distrowatch @jmax AI boosters tend overwhelmingly to be people who were one-shotted by a really impressive demo, and no mere numbers on how shitty this stuff is at scale will ever convince them.

also. over and over. I find that AI boosters are literally unable to tell good from bad. they are literally unaware that their slop is actually shitty. they think you're *lying* when you say you can tell good from bad. they think you're having a go at them.

@davidgerard @distrowatch @jmax This is like what we were warned about when I worked at Bloomberg -- you never ever *ever* want to hit it big with your first investment, because if you do you'll inevitably be convinced you were Right or Had Luck, or whatever, rather than accepting that random chance chanced in your direction and will move on as it always does.

Better to be burned the first time when investing, and I suspect with AI too, lest the brainworms of Being Special lodge in your head.

@wordshaper @davidgerard @distrowatch @jmax

I have a friend who works for a law firm and said “we had this task that took 8 hours, now we run it through the slop machine and it only takes someone 2.5 hours to check and fix it”

Reminds me a bit of a place 20 years ago that bought ball bearings that were below spec from another company; they could fix them and bring them up to spec for less than the cost of making good ball bearings from scratch.

Hmm

@glasspusher @davidgerard @distrowatch @jmax Except the problem with this is that people are surprisingly good at not fucking up in the first place but *abysmal* at reliably catching fuckups. So if you do it yourself you may make two errors and catch them with 80% accuracy, but if the slop machine does it then you'll be checking 20 errors... still with 80% accuracy.
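A rough back-of-the-envelope for the point above. The 80% catch rate and the 2-vs-20 error counts are the post's illustrative numbers, and the assumption that each error is caught independently at a fixed rate is mine, not data:

```python
# Sketch: expected errors that slip through a review pass, assuming
# each error is independently caught with a fixed probability.
# All numbers are illustrative, taken from the post above.

def expected_missed(errors: int, catch_rate: float = 0.8) -> float:
    """Expected number of errors that survive review."""
    return errors * (1 - catch_rate)

human = expected_missed(2)     # doing it yourself: 2 errors to catch
machine = expected_missed(20)  # reviewing slop-machine output: 20 errors

print(round(human, 2), round(machine, 2))  # 0.4 4.0
```

Same reviewer, same diligence, roughly ten times the uncaught errors, simply because the machine hands you ten times as many to find.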

@wordshaper @davidgerard @distrowatch @jmax

Quite. I don’t want something that I’ll have to check thoroughly to see if it screwed up, either.

I’m not in the mood to become a slop machine’s editor/fact checker

@glasspusher @wordshaper @davidgerard @distrowatch @jmax

Alertness to the dangers of AI slop has forced us to do checks we should have been doing in the first place when repeating stuff we've found on the internet, read in news media, or heard from friends.

@glasspusher @wordshaper @davidgerard @distrowatch @jmax

In fact, doing so is only free labor to further train the slop machine.