It remains wild that all of these tools - MCP, OpenClaw, not just the entire AI stack but the entire ecosystem - don't think that security matters at all. Validation, sanitization, authentication, none of it. Call it 'autonomous' and yolo it out there for VC clout.

There's relearning the lessons of 90's computing, and then there's the lessons of 1850s London sanitation, of Roman-era waste management. Everyone just lets their cattle shit in the town well, it'll be fine.

https://social.coop/@cwebber/116154573042963148

@mhoye It really makes me feel like a boomer for giving half a fuck about security. But along with that feeling there's a faint sound of coins jingling in the distance.

@mhoye yeeeep.

I mean these things have learned from github and other places where all kinds of code - including all sorts of stuff never intended for production and only for messing around - can be found.

It's terrible at accessibility, security, privacy...

@mhoye
Also you can say that the idea is to produce shit on the industrial scale.

But the more serious problem is that due to the design of a LLM you can't secure them, as they can't discern between instructions and data.

@mhoye @cwebber yeah, I don’t understand it. The security can’t rely on the LLM, otherwise it can be manipulated by conflating data with instructions!