If anyone ever spots one of these, let me know.
How (racist/sexist/whatever) harassment on Mastodon works:
1. Harasser replies to their target's post, with the reply set to "followers only", saying the most vile stuff you can imagine.
2. All the harasser's followers join in on the harassment, posting more vile stuff.
3. Nobody but the target and the harassment crew can see the vile stuff that was said.
4. Target is traumatized. Nobody else can see why.
5. Everybody says "I don't see it so it's not happening."
The company I work for (well, one of them), Outpost, has migrated its servers and databases to the EU.
Read more about it.
https://outpost.pub/you-outpost-moved-to-the-e-u/
Outpost is used by publishers like 404 Media, Aftermath, Mothership.blog, Platformer, and many more to turn on superpowers for growing their Ghost sites.
I'm writing this in English.
Not because English is my first language—it isn't. I'm writing this in English because if I wrote it in Korean, the people I'm addressing would run it through an outdated translator, misread it, and respond to something I never said. The responsibility for that mistranslation would fall on me. It always does.
This is the thing Eugen Rochko's post misses, despite its good intentions.
@Gargron argues that LLMs are no substitute for human translators, and that people who think otherwise don't actually rely on translation. He's right about some of this. A machine-translated novel is not the same as one rendered by a skilled human translator. But the argument rests on a premise that only makes sense from a certain position: that translation is primarily about quality, about the aesthetic experience of reading literature in another language.
For many of us, translation is first about access.
The professional translation market doesn't scale to cover everything. It never has. What gets translated—and into which languages—follows the logic of cultural hegemony. Works from dominant Western languages flow outward, translated into everything. Works from East Asian languages trickle in, selectively, slowly, on someone else's schedule. The asymmetry isn't incidental; it's structural.
@Gargron notes, fairly, that machine translation existed decades before LLMs. But this is only half the story, and which half matters depends entirely on which languages you're talking about. European language pairs were reasonably serviceable with older tools. Korean–English, Japanese–English, Chinese–English? Genuinely usable translation for these pairs arrived with the LLM era. Treating “machine translation” as a monolithic technology with a uniform history erases the experience of everyone whose language sits far from the Indo-European center.
There's also something uncomfortable in the framing of the button-press thought experiment: “I would erase LLMs even if it took machine translation with it.” For someone whose language has always been peripheral, that button looks very different. It's not an abstract philosophical position; it's a statement about whose access to information is expendable.
I want to be clear: none of this is an argument that LLMs are good, or that the harms @Gargron describes aren't real. They are. But a critique of AI doesn't become more universal by ignoring whose languages have always been on the margins. If anything, a serious critique of AI's political economy should be more attentive to those asymmetries, not less.
The fact that I'm writing this in English, carefully, so it won't be misread—that's not incidental to my argument. That is my argument.
The Berlin Senate wants to massively restrict the IFG (Freedom of Information Act). Today the Digital Committee is debating the new rules. In her statement, Berlin's Commissioner for Freedom of Information, Meike Kamp, makes clear: the changes are not necessary to protect critical infrastructure, and the new rules would be far too deep an intrusion into transparency and freedom of information. IFG requests could be rejected arbitrarily.
Read the full statement & our open letter: https://fragdenstaat.de/artikel/exklusiv/2026/03/berliner-cdu-will-auskunftsanspruche-einschranken/?pk_campaign=mastodon
You're paying AI companies a monthly subscription fee to be fingerprinted like a parolee.
I got bored and ran uBlock across Claude, ChatGPT, and Gemini simultaneously.
Claude:
ChatGPT:
Gemini:
When uBlock blocks Gemini's requests, the JS exceptions bubble up and Gemini dutifully tries to POST the error details back to Google. uBlock blocks that too. The error messages contain the internal codenames for every upsell popup that failed to load.
KETCHUP_DISCOVERY_CARD.
MUSTARD_DISCOVERY_CARD.
MAYO_DISCOVERY_CARD.
Google named their subscription upsell popups after condiments and I found out because their error handler snitched on them.
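For anyone curious how a failure like that leaks names: a minimal sketch of the pattern, not Google's actual code. The codenames are from the screenshots above; the endpoint, field names, and function names here are hypothetical.

```javascript
// Sketch: a client-side error reporter that leaks internal feature
// codenames. When an ad blocker kills the request that loads a UI
// component, the resulting exception carries the component's codename,
// and a global handler tries to POST it back to a telemetry endpoint.

// Build the payload a telemetry handler might assemble from an
// uncaught exception. (Field names are illustrative.)
function buildErrorReport(err, component) {
  return {
    message: err.message,   // e.g. "Failed to fetch" when the request is blocked
    component,              // internal codename of the UI element that failed
    ts: Date.now(),
  };
}

// The handler then phones home -- a request an ad blocker can also
// intercept, leaving the payload visible in the devtools network tab.
async function reportError(report) {
  try {
    await fetch("https://example.com/log", {   // hypothetical endpoint
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(report),
    });
  } catch {
    // Blocked again; the report dies here, in plain view.
  }
}

const report = buildErrorReport(
  new Error("Failed to fetch"),
  "MUSTARD_DISCOVERY_CARD"
);
console.log(report.component); // MUSTARD_DISCOVERY_CARD
```

The leak isn't the error handler itself, which is ordinary telemetry; it's that the payload names the thing that failed, so blocking the reporter exposes what it was about to say.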
All three of these products cost money.
One of them is also running ad infrastructure.
Touch grass. Install @ublockorigin
DHS's Office of Industry Partnership was hacked by a group called "Department of Peace" and info about ICE contracts with over 6,000 companies is now published on @ddosecrets.org!
In fact, I am willing to offer up to five hours of my time — free — to any public sector team or nonprofit (with annual operating costs below USD 2M) anywhere in the world that needs help figuring out what makes sense and how to respond to top down pressure telling you to implement AI.
And if they’ve already chosen something for you, I am willing to help you figure out how to sand down the risks.
email me: adrianna (at) futureethics.ai
Edit: for public servants who technically can't accept 'free' things from a vendor, consider this one-on-one coaching/advice or a pre-sales call.