The goal is to make corporate data less profitable.
Even stuff as simple as setting your birthdate to 1970-01-01 everywhere, or entering [TEST] or [DELETED] as your name or in account notes anywhere they don't actually need your real name.
Using plugins like AdNauseam to poison ad trackers (and cost them marketing dollars).
Using VPNs set to different locations.
Signing into data broker sites to "correct" outdated info (they'll often let you do that with little-to-no proof of identity, but will require your passport or state ID in order to delete your info). Bonus points if you "correct" it to someone else's info on their site that's similar to yours.
When you sign up for anything, fill in only the required fields, and provide correct info only where it actually matters for using the service; otherwise give plausible, but incorrect, data.
If you use LLMs anywhere, use the free tier and always vote thumbs up for bad answers and down for good ones. It wastes their resources and drives up their costs while making their training data worse.
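The "plausible but wrong" signup data from the list above can even be scripted. A minimal Python sketch; every name, street, and field choice here is invented for illustration, not taken from any real form:

```python
import random
from datetime import date

# Invented pools of plausible-looking values (assumptions, not real data).
FIRST_NAMES = ["Sam", "Alex", "Jordan", "Casey", "Riley"]
STREETS = ["Oak St", "Maple Ave", "Elm Dr", "Cedar Ln"]

def fake_profile(seed=None):
    """Generate a signup profile that looks real but is deliberately wrong."""
    rng = random.Random(seed)
    return {
        # Flagged name, per the [TEST] tip above.
        "name": "[TEST] " + rng.choice(FIRST_NAMES),
        # The classic epoch birthday.
        "birthdate": date(1970, 1, 1).isoformat(),
        # Plausible but fabricated address fields.
        "street": f"{rng.randint(10, 999)} {rng.choice(STREETS)}",
        "zip": f"{rng.randint(10000, 99999)}",
    }

profile = fake_profile(seed=42)
print(profile["birthdate"])  # 1970-01-01
```

A fixed seed gives you a repeatable fake identity per service, so your answers stay consistent if a site ever cross-checks them.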
@alice scribbling notes furiously
For the less savvy among us, tysvm for this helpful advice 🙏
@Irenetherogue sure! There are low tech ways to do it—just lie...to every corporation, app, and marketer you can. Make it plausible, but wrong.
Bonus: include something wildly implausible once in a while. It makes folx more likely to overlook the subtle ones.
@alice @Irenetherogue I got off when taken to court for nonpayment of Poll Tax (Thatcher thing, yes, I'm that old) because I poisoned their data by missing out a crucial box on the form.
Don't refuse to comply, but *always* sabotage their data. It simply costs them more.
Haha, you'd like my mother, the guerilla witch. She signs up for customer cards in every shop and then swaps them with other people; bonus points if the two of them have strongly different consumer profiles.
When she's bored, she gives malicious answers to questionnaires from evil corporations.
She studied psychology and statistics and says "it is anyway horribly difficult to get useful answers out of these marketing datasets, why not make it a bit harder for them?" 😈.
@earthworm TIL I have a second kid.
My education is in psychology and statistics, and I do shit like that whenever I can.
@alice @theorangetheme I once built a fuzz testing tool that "randomly" shuffled input around and tested it against things. "does my input validation survive utterly batshit inputs?"
Feeding the inputs through something like that would make sure they can't cache answers.
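That shuffling idea is easy to sketch. A toy Python version that makes every submission of the same text byte-different so it can't be matched against a cache; using zero-width spaces is my own assumption here, not what the original fuzz tool did:

```python
import random

ZWSP = "\u200b"  # zero-width space: invisible to readers, changes the bytes

def perturb(text, seed=None):
    """Insert zero-width spaces at random word boundaries so two
    submissions of the 'same' input never compare equal."""
    rng = random.Random(seed)
    out = []
    for word in text.split(" "):
        out.append(word + (ZWSP if rng.random() < 0.5 else ""))
    return " ".join(out)

a = perturb("what is the capital of France", seed=1)
b = perturb("what is the capital of France", seed=2)
# Different byte strings, identical visible text:
assert a != b
assert a.replace(ZWSP, "") == b.replace(ZWSP, "")
```

A real version would also swap synonyms or reorder clauses; this only defeats exact-match caching, not fuzzy deduplication.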
Distant memories of hooking together two ELIZA instances...
'and then?'
@nickynah them's just good manners! I find it's also helpful to upload my favorite random cat photos until I hit the attachment limit. I mean, who doesn't like cat photos?
Philosophy!
@alice If you're selfhosting, have a look a iocaine: https://iocaine.madhouse-project.org/
If you upload pictures, maybe nightshade would be the right tool: https://nightshade.cs.uchicago.edu/userguide.html
@Numerfolt @alice yeah, we need to switch to offensive mode.
That makes me want to create a Nightshade FUSE filesystem.
So when you want to upload the image from your picture folder, it nightshades it on the fly.
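A stand-in sketch of that "poison on read" idea. To be clear about the assumptions: this is NOT Nightshade (which doesn't expose an API like this), and it's not a real FUSE mount (a real one would implement the FUSE operations, e.g. via the fusepy library); `poison_bytes` is a hypothetical placeholder perturbation, not any actual poisoning algorithm:

```python
import io

def poison_bytes(data: bytes) -> bytes:
    """Placeholder perturbation: flip the low bit of every 97th byte.
    A real tool would apply an actual adversarial transform instead."""
    out = bytearray(data)
    for i in range(0, len(out), 97):
        out[i] ^= 1
    return bytes(out)

class PoisoningFile(io.BytesIO):
    """File-like object that serves a perturbed copy of the original,
    so anything reading through it (say, an upload form) only ever
    sees the poisoned bytes, while the file on disk stays untouched."""
    def __init__(self, path):
        with open(path, "rb") as f:
            super().__init__(poison_bytes(f.read()))
```

In a real FUSE version, the same transform would live in the filesystem's `read()` handler, so every application sees poisoned data transparently.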
@alice when i have to use a web app to order food, e.g. CoolBurgz (fictional) i will always put my email as e.g.
usually counts as valid.
a fair bit of the advice in here seems really good, but from what I know, AdNauseam isn't really worth using over just uBO
at least as of when I last looked into it a couple of years ago: it uses more resources on your machine, doesn't really make any significant difference to the companies, and the high volume of "clicks" from you just makes you far more trackable, since no normal person browsing would click everything
also, I think it might be worth editing the last point to say "hopefully none of you are using LLMs, but if you're someone who does..." 🩵
@vantiss I have to disagree.
Say I go to Amazon. I use perfect tracking protection and I'm not signed in. I browse for a while, and click every ad they serve me. I've wasted a bunch of different companies' marketing money, my click data is worthless, and they don't know what ads to send me. I look like every other AdNauseam user, and they still don't know who I am.
Now say I do the exact same thing, but I sign into Amazon. The exact same thing happens, but they know who I am.
...
And as far as LLMs go, waste their fucking money and resources. Use every free option you can, and take it as an opportunity to poison their feedback. Don't give them any personal info, don't use them for critical questions, just flood them with garbage that pops this bubble even faster.
Even if you don't want LLMs in everything, companies will put them there—unless it burns their wallets. The more we set fire to their AIs, the faster executives will learn it's a bad idea to use them.
@skaphle think of it like oil. If we leave the corporations be, then they'll lobby to make oil the only option, and they'll slowly burn our planet to the ground for another dollar. People will use it, and eventually the rest of us who don't will be relegated to second class.
But if we had burned it all down before it got that way, we wouldn't have massive oil companies today fucking everything up. Yes, it would've caused damage, but it's better than letting it grow.
If we let AI companies be, and just avoid their products, then eventually AI will be in everything, and our choice will be rendered useless.
If we burn their resources to the ground, it'll hurt, but it'll keep it from becoming the default.
If you want to hurt AI, make it unprofitable. In *any* and *every* way you can.
Waste their resources, poison their data, make AI synonymous with losing money.
@alice @vantiss I disagree with the oil comparison. A dollar's worth of oil has the energy content of 100 human work hours, or something like that. In capitalism it would never not make sense to burn that oil, and burning it is exactly what's killing people. It's not just a lobbying issue; it's also the smart choice to burn oil if the only targets are profits for the next ~10 years and staying ahead of the competition. I don't see a point or strategy where burning the oil ourselves, so that companies can't, makes any sense.
As for AI, there is currently no profitable business model anyway. It's all a giant bubble. Maybe we can accelerate its demise somehow, that would be great. Right now it seems the investors are irrational and don't really care that there are no profits.
Hmm interesting. I have never heard of TERFs being referred to as Data before ... 🤔
Because þ is unvoiced; it's pronounced /θ/. The initial sound of ðe word 'ðe' (usually spelled 'the') is voiced, pronounced /ð/. Ðey are different sounds which happen to be represented by the same digraph in standard English orþography because ancient Greek didn't have a voiced dental fricative.
@q @ShadSterling @Infrapink @alice
(my response translated into my fictional custom alley-main-grrrr-mine-shaft language)
Dese comedie hab resultaten en Ich-mich totes getotenlaughen.
@q @ShadSterling @Infrapink @alice
QuangoTranslate:
"This comedy caused me to die laughing."