‘It started with a tipoff’: how a Guardian investigation exposed child sex trafficking on Facebook and Instagram
#OnlineSafety #PlatformGovernance #DigitalRights #TechPolicy #Meta
For the first time, the EU is forcing X to make changes under the Digital Services Act, including a €120m fine and adjustments to its "verification" system. An important precedent.
But: as long as platforms only fix verification, ad transparency, and data access after regulatory pressure, this also shows how weak governance without enforcement remains. Regulation works, but only when it is actually applied.
🤣 Microsoft is going all-in with its automated moderation tools, blocking Discord messages just for saying "MicroSlop." The meme, popular among annoyed devs to poke fun at buggy or bloated software, is now being flagged by Microsoft’s AI as "hate speech."
Honestly, this is what happens when a company lets algorithms handle "cultural sensitivity" on autopilot. Sure, technically "slop" can be an insult, but the AI totally misses the inside joke that devs are making. Instead of listening to what people are actually saying about their products, Microsoft just turbocharged the meme, and now it’s guaranteed to go viral even faster. Classic Streisand Effect.
🧠 Automated moderation keeps tripping over inside jokes and creative digs.
⚡ The AI just can’t tell the difference between an honest rant and real harassment.
🎓 Now everyone’s getting clever with new euphemisms to dodge the ban.
🔍 This move accidentally brought gamers and devs together; everyone’s roasting Microsoft now.
https://www.fastcompany.com/91501766/microsoft-discord-microslop-banned-viral-phenomenon
#AIModeration #PlatformGovernance #TechCulture #Microsoft #MicroSlop #Censorship #Freedom #Software #Discord
"While the DSA has created an obligation for platforms to identify and mitigate systemic risks in Europe, the first two years of risk assessments rely heavily on high-level company descriptions of policies, tools, and user controls. Assessments provide extremely limited detail into whether any of these interventions meaningfully reduce harm, particularly for minors. By contrast, US litigation is surfacing previously unreleased internal platform data, experiments, and deliberations that reveal how platforms internally measure risk and define acceptable trade-offs related to risk, engagement, and revenue. But US litigation is largely reactive and limited to the facts of each specific case.
For example, internal company data released in US litigation shows that key safety mitigations – including screentime management tools, take-a-break reminders, and parental controls – suffer from extremely low adoption rates, often below 2% of minor users. Internal documents also suggest the design of these features may undermine their effectiveness: TikTok leadership initially imposed “guardrail” metrics requiring that new screentime tools reduce usage by no more than 5%, while Meta’s internal projections accurately predicted that 99% of teens would not use optional opt-in take-a-break features.
The evidence emerging from DSA systemic risk assessments and US platform litigation underscores a central gap in current approaches to platform governance: risks are increasingly well-described, but mitigations are rarely communicated using rigorous, outcome-oriented data and evidence."
#SocialMedia #EU #USA #DSA #TikTok #Instagram #Algorithms #Meta #Facebook #PlatformGovernance #MentalHealth
Spain’s response to Telegram founder Pavel Durov’s mass message underscores a growing policy-security intersection.
Governments argue that platform scale and minimal moderation architectures can enable misuse, while platform leaders warn that expanded liability and age verification may weaken privacy, anonymity, and open discourse. Similar regulatory pressure is emerging across Europe and other regions.
For security professionals, the issue raises questions around governance, identity systems, moderation tooling, and compliance design.
How can platforms improve harm reduction without introducing systemic privacy risks?
Share insights and follow @technadu for grounded coverage at the intersection of security and policy.
#Infosec #PlatformGovernance #OnlineSafety #DigitalPolicy #TechNadu #PrivacyEngineering #CyberRisk
Global digital governance frameworks look neat on paper and break in practice.
I wrote a short reflection on child online protection, TikTok, and implementation gaps in global policy.
Read & subscribe if this is your lane: https://digitalserendipities.substack.com/p/child-online-protection-tiktok-and
#DigitalGovernance #PlatformGovernance #ChildOnlineProtection #Policy #TikTok #Substack
It’s out! 🎉 My new paper is published.
I examine how Instagram users practice digital vigilantism to fight botting & porn bots — taking authenticity governance into their own hands. The study highlights user-driven surveillance and platform power asymmetries.
Part of the special issue “Digital Platforms and Agency” in Lateral (CSA), edited by Reed van Schenck & Elaine Venter.
#PlatformGovernance #DigitalCulture #InternetStudies #culturalstudies
I’ve published a pre-print on Zenodo arguing that large digital platforms now function as infrastructural religions — governing visibility, legitimacy, belonging, and exclusion while claiming neutrality.
The paper introduces concepts including platform orthodoxy, algorithmic nationalism, and geo-digital sovereignty to examine how authority has migrated into infrastructure and code.
The pre-print reframes large-scale digital platforms as systems of social authority rather than neutral communication tools: they now perform functions historically associated with organized religion, producing ideological dominance, exclusion, and jurisdiction through infrastructure and algorithmic enforcement rather than belief or doctrine. The paper offers a conceptual framework for analyzing platform power, AI-mediated governance, and the migration of authority into code.