For the first time, the EU is forcing X to make changes under the Digital Services Act – including a €120 million fine and adjustments to the "verification" system. An important precedent.
But: as long as platforms only correct verification, ad transparency, and data access under pressure, this also shows how weak governance remains without enforcement. Regulation works – but only when it is actually applied.

https://www.sciencemediacenter.de/angebote/eu-verfahren-gegen-x-plattform-legt-vorschlag-fuer-nachbesserungen-vor-26049

#DSA #PlatformGovernance #DigitalPolicy

EU proceedings against X: platform submits proposal for remedial measures

X (formerly Twitter) has submitted proposed fixes for its blue verification checkmarks. The DSA proceedings against the platform are considered a precedent. Researchers comment:

Science Media Center Germany

🤣 Microsoft is going all-in with its automated moderation tools, blocking Discord messages just for saying "MicroSlop." The meme, popular among annoyed devs to poke fun at buggy or bloated software, is now being flagged by Microsoft’s AI as "hate speech."

Honestly, this is what happens when a company lets algorithms handle "cultural sensitivity" on autopilot. Sure, technically "slop" can be an insult, but the AI totally misses the inside joke that devs are making. Instead of listening to what people are actually saying about their products, Microsoft just turbocharged the meme, and now it’s guaranteed to go viral even faster. Classic Streisand Effect.

🧠 Automated moderation keeps tripping over inside jokes and creative digs.
⚡ The AI just can’t tell the difference between an honest rant and real harassment.
🎓 Now everyone’s getting clever with new euphemisms to dodge the ban.
🔍 This move accidentally brought gamers and devs together; everyone’s roasting Microsoft now.

https://www.fastcompany.com/91501766/microsoft-discord-microslop-banned-viral-phenomenon
#AIModeration #PlatformGovernance #TechCulture #Microsoft #MicroSlop #Censorship #Freedom #Software #Discord

Microsoft banned this word from its Discord server. It's now a viral phenomenon—people are using it any way they can

Asking the internet to stop doing anything almost always backfires. Microsoft just learned that lesson the hard way.

Fast Company

"While the DSA has created an obligation for platforms to identify and mitigate systemic risks in Europe, the first two years of risk assessments rely heavily on high-level company descriptions of policies, tools, and user controls. Assessments provide extremely limited detail into whether any of these interventions meaningfully reduce harm, particularly for minors. By contrast, US litigation is surfacing previously unreleased internal platform data, experiments, and deliberations that reveal how platforms internally measure risk and define acceptable trade-offs related to risk, engagement, and revenue. But US litigation is largely reactive and limited to the facts of each specific case.

For example, internal company data released in US litigation shows that key safety mitigations – including screentime management tools, "take a break" reminders, and parental controls, among others – suffer from extremely low adoption rates, often below 2% of minor users. Internal documents also suggest the design of these features may undermine effectiveness: TikTok leadership initially imposed "guardrail" metrics requiring that new screentime tools reduce usage by no more than 5%, while Meta's internal projections accurately predicted that 99% of teens would not use optional, opt-in "take a break" features.

The evidence emerging from DSA systemic risk assessments and US platform litigation underscores a central gap in current approaches to platform governance: risks are increasingly well-described, but mitigations are rarely communicated using rigorous, outcome-oriented data and evidence."

https://kgi.georgetown.edu/research-and-commentary/measuring-risk-what-eu-risk-assessments-and-us-litigation-reveal-about-meta-and-tiktok/

#SocialMedia #EU #USA #DSA #TikTok #Instagram #Algorithms #Meta #Facebook #PlatformGovernance #MentalHealth

Measuring Risk: What EU Risk Assessments and US Litigation Reveal About Meta and TikTok – Knight-Georgetown Institute

Knight-Georgetown Institute

Spain’s response to Telegram founder Pavel Durov’s mass message underscores a growing policy-security intersection.

Governments argue that platform scale and minimal moderation architectures can enable misuse, while platform leaders warn that expanded liability and age verification may weaken privacy, anonymity, and open discourse. Similar regulatory pressure is emerging across Europe and other regions.

For security professionals, the issue raises questions around governance, identity systems, moderation tooling, and compliance design.

How can platforms improve harm reduction without introducing systemic privacy risks?

Source: https://www.theguardian.com/world/2026/feb/05/spain-hits-back-at-pavel-durov-over-mass-telegram-post-on-social-media-ban-plan

Share insights and follow @technadu for grounded coverage at the intersection of security and policy.

#Infosec #PlatformGovernance #OnlineSafety #DigitalPolicy #TechNadu #PrivacyEngineering #CyberRisk

Global digital governance frameworks look neat on paper and break in practice.
I wrote a short reflection on child online protection, TikTok, and implementation gaps in global policy.
Read & subscribe if this is your lane: https://digitalserendipities.substack.com/p/child-online-protection-tiktok-and

#DigitalGovernance #PlatformGovernance #ChildOnlineProtection #Policy #TikTok #Substack

Child Online Protection, TikTok, and the Limits of Global Digital Governance

Behind the scenes of a new policy research paper

Digital Serendipities

It’s out! 🎉 My new paper is published.

I examine how Instagram users practice digital vigilantism to fight botting & porn bots — taking authenticity governance into their own hands. The study highlights user-driven surveillance and platform power asymmetries.

Part of the special issue “Digital Platforms and Agency” in Lateral (CSA), edited by Reed van Schenck & Elaine Venter.

Read it here:
https://csalateral.org/section/digital-platforms-agency/call-the-bot-police-user-led-platform-governance-of-inauthenticity-on-instagram/

#PlatformGovernance #DigitalCulture #InternetStudies #culturalstudies

I’ve published a pre-print on Zenodo arguing that large digital platforms now function as infrastructural religions — governing visibility, legitimacy, belonging, and exclusion while claiming neutrality.

The paper introduces concepts including platform orthodoxy, algorithmic nationalism, and geo-digital sovereignty to examine how authority has migrated into infrastructure and code.

https://zenodo.org/records/18406146

#PlatformGovernance

Platforms as Infrastructural Religions: Authority, Visibility, and the Migration of Social Control into Code

This pre-print reframes large-scale digital platforms as systems of social authority rather than neutral communication tools, arguing that they now perform functions historically associated with organized religion by governing visibility, legitimacy, and participation. It introduces the concepts of platform orthodoxy, algorithmic nationalism, and geo-digital sovereignty to explain how ideological dominance, exclusion, and jurisdiction are produced through infrastructure and algorithmic enforcement rather than belief or doctrine. The paper offers a conceptual framework for analyzing platform power, AI-mediated governance, and the migration of authority into code.

Zenodo

Our recent blog examines how these measures operated in practice, looking at the legal framework under the IT Act, the role of platform geo-blocking, and the use of executive advisories and criminal law during crisis situations.

Read here: https://sflc.in/content-blocking-and-censorship-during-pahalgam-attack/

#ContentBlocking #DigitalGovernance #InternetRegulation #Section69A #PlatformGovernance #FreedomOfExpression #MediaFreedom

An Analysis of Content Blocking and Censorship during Pahalgam Attack • Software Freedom Law Center, India

The terrorist attack in Pahalgam on April 22nd, 2025 sent shockwaves across India, killing 26 civilians in one of the deadliest assaults on tourists in Kashmir since 2008.[1] What followed was not just a military response, Operation Sindoor, but an unprecedented digital crackdown that swept up journalists, media outlets, Pakistani officials, celebrities, and even athletes.

Software Freedom Law Center, India • Defender of Your Digital Freedom

YouTube has expanded its parental control framework, introducing new tools that allow parents to limit or fully block Shorts access for children and teens.

Key additions include:
• Shorts-specific time limits
• Bedtime and break reminders
• Improved account switching for supervised profiles
• Continued use of age-estimation technology

While primarily a safety and well-being update, these changes also reflect how platforms are responding to growing regulatory and societal pressure around youth online exposure.

How effective do you think technical controls are compared to policy, education, and oversight?

Share your view and follow @technadu for clear, unbiased tech reporting.

#OnlineSafety #PlatformGovernance #DigitalWellbeing #ParentalControls #TechPolicy #YouthSafety #YouTube