@mcc I'm sure it will already preprocess your data before you press the button.
That said, local LLMs are also a thing, but they don't reach the level of GPT-4, and they take a lot of resources.
@slyecho If this is the case, it just becomes a matter of reversing the "preprocessing".
In the case of Microsoft, I think we can assume that the "AI" will never be local and as much as possible will occur on the server side, because Microsoft's entire motivation in pushing "AI" this way is to generate business for Azure.
@slyecho @mcc The really scary part of all of this to me is that if Microsoft is announcing this now, it means all of the contracts have been signed, funds have been committed and software has been deployed. It's too late for us puny users to say "Hell, NO!" 😡
Maybe the EU can push back like they did when Microsoft embedded Internet Exploiter in Win95, but in the end, even that would surprise me.
@mcc It's losing them money right now due to massive resource costs, because if users use it without an additional subscription, MS pays for the compute themselves.
But also look at all the AI accelerators AMD and Intel are already putting in their CPUs. Windows 12 is gonna have like a 32 GB RAM minimum or something lol. It's also not impossible that a subscription fee gets added on.
And conveniently enough, all your files are already in OneDrive and will have been indexed by the AI by launch, just in time to answer questions about all your juicy personal life!
@aud @mcc "Bolstering their Azure business" just doesn't make sense when they themselves are the Azure customer. What does make sense is that the cost is lower for them than for third parties.
What they are doing now is trying to race ahead of the competition and hope that the customer base will get hooked on the tech. Monetization comes later.
Still, the fact that CPU manufacturers are putting NPUs into their new products means there is demand for them coming from somewhere. The models that run locally may not be the big ones that run in Azure, but they may be part of some system that takes advantage of them. I think speech recognition, translation, and speech synthesis running locally for lower latency could be among those uses.