There *has* been a lot of talk about the problems with so-called "AI", but one that I don't feel gets enough attention is that "AI" products are surveillance products. "AI" is inevitably run as a cloud service, and in order for the AI to know what to generate, some amount of the context within your application (usually it's not clear to the user what context, or how much) has to get sent to the cloud. The more of my local app state that gets transmitted over the Internet, the less comfortable I am.

@mcc for the record I do have ollama installed locally on my m3, which I would characterize as one step above “consumer grade”, and it works fine, and it’s possible to use visual studio code plugins or whatever that talk to it directly locally. so it’s not fully inevitable imo, just clearly in capitalism’s interest to take the surveillance route
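(for the curious, "talk to it directly locally" just means hitting ollama's HTTP API on localhost. a minimal sketch, assuming python's stdlib only; the model name "llama3" is a placeholder, substitute whatever `ollama list` shows on your machine:)

```python
import json
import urllib.request

# ollama's documented /api/generate endpoint -- nothing here ever
# leaves localhost, which is the whole point.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> dict:
    """Build the JSON payload ollama expects. "llama3" is an assumed
    model name for illustration; use whichever model you've pulled."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt: str, model: str = "llama3"):
    """Send the prompt to the local ollama server; return its reply,
    or None if no server is running."""
    payload = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            return json.loads(resp.read())["response"]
    except OSError:
        # server not running -- but either way, no third-party cloud
        # ever sees your context.
        return None
```

the editor plugin does essentially this on every completion request, so your code never leaves the machine.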

my husband is a lawyer and his firm only uses software that can be installed on a private server on-prem, often paying quite a lot for such a license, including for an llm-based translator. so there is a market developing for private local AI nerve centers

@0xabad1dea I don't expect the software that people choose to install is going to bear much relationship to the software that microsoft forces me to use

@mcc they are absolutely 100% going to addle the free home edition but the professional edition will absolutely, positively have an “absolutely the fuck not” switch because there are so, so many compliance issues when you consider every industry in every legal district in the world

if it’s any consolation, I expect this to last another year or two before they realize how fucking expensive this is compared to what they’re actually getting back out of it

@0xabad1dea @mcc speaking as somebody who is currently evaluating copilot in a commercial capacity, they've said repeatedly that it has tenant isolation, and that *seems* to be true, but it does not seem to have functioning internal isolation controls. unfortunately I doubt I'm going to get to find out for real, since the minimum spend is well into the six figures (for a mere 300 licenses minimum, a tiny, tiny fraction of our userbase)

my initial experiments when doing MS-sanctioned labs were pretty dismal, though. as a joke, I manually instructed a test interface to respond to me in a rude, condescending way, then deployed a separate application in a separate resource group (supposed to be an internal boundary in this case, both for billing and for ML learning, they told us). somehow, the rude behavior persisted when running the MS-provided code, not just for me but also for one of my coworkers who was running the same sample code in their own environment.

also, their database analysis demo would just rm -rf itself if you asked, despite them spending 30 minutes lecturing about guardrails. you could also just ask it to insert data into the queries freely, despite it supposedly being limited to the defined schema. half-baked at best