
More than 10,000 Starlink satellites currently orbit the Earth. We see them crawling across dark skies, no matter how remote our location, and streaking through images from research telescopes. SpaceX recently announced that it wants to launch one million more of these satellites as orbital data centres for ... [continued]
Apple has enabled age verification in the UK following the 26.4 update
https://www.reddit.com/r/privacy/comments/1s32glp/psa_apple_has_enabled_age_verification_in_uk/
#Apple #iPhone #AgeVerification #IDVerification #privacy #surveillance #dystopia #technology #UK
So it hit me today that we've got a rudimentary Universal Translator with AI now, and that got me thinking.
I live in a multicultural city where there's a high chance of hearing several different languages when you walk down the street. I've noticed it acts as a kind of privacy layer in a lot of cases: I can sit on a bus, someone will answer their phone, and if they speak a language I don't understand it's easier to let it sink into the background noise than to have it draw my focus. And often people will talk amongst themselves in their own language, confident that hardly anyone, if anyone, will understand what they're saying, free to occupy their own little bubble.
So imagine a near-future dystopian science fiction setting where people invent their own language, train a portable LLM and recording device on the data, and then allow tiered access permissions as a kind of encryption layer. The recording device can be set to only pick up your own voice, or be directly embedded to detect thought patterns that constantly update the model in real time. Other people need access via a key you decide to share. There's a total-block setting so nobody can access it. Strangers might start from scratch, training each other's models from no data with basic permissions. Acquaintances or casual friends might have partial, lossy access. Full, high-resolution access is reserved only for the ones you trust, a kind of instantaneous deep sync. Misunderstanding is the default; understanding is opt-in.
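Purely as an illustration of the tier scheme described above (all names here are invented for the sketch, not anything from an existing system), the permission levels could be modelled as something like:

```python
from enum import Enum

class AccessTier(Enum):
    """Hypothetical permission tiers for a personal-language model."""
    BLOCKED = 0       # total block: nobody can decode you
    STRANGER = 1      # must start training from scratch, no shared data
    ACQUAINTANCE = 2  # partial, lossy access
    TRUSTED = 3       # full high-resolution access ("deep sync")

def decode(utterance: str, tier: AccessTier) -> str:
    """Toy stand-in for the translator: fidelity depends on the tier granted."""
    if tier is AccessTier.BLOCKED:
        return ""  # nothing gets through
    if tier is AccessTier.STRANGER:
        return "<unintelligible>"  # no model yet, so no meaning recovered
    if tier is AccessTier.ACQUAINTANCE:
        # lossy access: only every other word survives decoding
        words = utterance.split()
        return " ".join(w if i % 2 == 0 else "[...]" for i, w in enumerate(words))
    return utterance  # TRUSTED: perfect fidelity
```

The interesting design property is that, unlike ordinary encryption, fidelity degrades gradually across tiers rather than flipping between all-or-nothing.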
There might be a family language that children grow up learning before diverging, modifying it into their own language or creating one that's entirely their own.
Institutions might have their own language, separate from the personal language, one that employees are required to have on at full resolution while on site together (or networking remotely), with company restrictions and filters in place. You'd be perfectly understood, but within an approved framework. There might be legal requirements to give employees enough time off work that they don't become 'absorbed' into the company model. But some shady employers might try to game the system by encouraging habits and phrases that persist even when the company AI is technically off. Becoming 'a company man' or 'married to the job' gets taken to another level.
Underworld organisations would go hard on the assimilation aspect. A little like the 'ghost hacking' in Ghost in the Shell, but less a one-time invasion than an opt-in to something more insidious.
Therapy in this setting would be more like de-programming, on the level of meaning structures rather than just beliefs. In extreme cases, brain surgery might be required... which some might refuse because it would be like losing who they are. The risk would be losing memories and personality traits that grew around that system.
Another thing that's already happening, now that LLMs are part of the world, is that people are introducing typos and spelling errors to signal humanity. So in the near-future setting, this might be called 'grains' or 'granularity', a way to make oneself harder to model. Going 'low-grain' would mean someone has been assimilated into an imposed model. There would be a constant low-key awareness that one could be recorded at any time.