Wind und Wurzeln - with Marina Weisband: #SocialMedia: How we can save our #Kinder and our #Demokratie at the same time

Social media is considered a danger to adolescents – yet many find a community there. How harmful are #TikTok & Co really, what damage do #Algorithmen cause, and what solutions exist beyond blanket bans?
TikTok, #Instagram, #Snapchat, #Facebook, #Meta & Co: While #Australien has already pushed such a ban through, and Spain, France and German parties, too, are debating social media bans for under-14s or under-16s, for many young people the internet is a place where they feel seen and safe – and their most important source of political information.

Webseite der Episode: https://wind-und-wurzeln.podigee.io/10-neue-episode

Mediendatei: https://audio.podigee-cdn.net/2418308-m-6147cf9ca0e8ddb04170ddd200ce3bbb.mp3?source=feed

@hakendran
@TheMorpheus
@afelia
@PerspectiveDaily

Social Media: How we can save our children and our democracy at the same time

TikTok, Instagram, Snapchat & Co: While Australia has already pushed such a ban through, and Spain, France and German parties, too, are debating social media bans for under-14s or under-16s, for many young people the internet is a place where they feel seen and safe – and their most important source of political information. We talk about why comparing social media to smoking and alcohol falls short – and why social media enables education, political awareness, friendships and a sense of belonging for many adolescents. A study by the Bertelsmann Stiftung shows: 74 percent of young people in Germany get their political information via social media – more than through school, family or friends. Marina, herself an "internet kid", talks about real dangers and real solutions.

AV1’s open, royalty-free promise in question as Dolby sues Snapchat over codec https://arstechni.ca/UknZ #snapchat #lawsuit #Policy #dolby #Tech #av1

Big Tech declaring AV1 royalty-free “doesn’t mean that it is.”

Ars Technica

Two verdicts in two days: How American courts are rewriting the rules for Big Tech and children

Judge Bryan Biedscheid of New Mexico could order significant changes to how Instagram and Facebook operate. Nathan Burton/Santa Fe New Mexican via AP, Pool

Carolina Rossini, UMass Amherst

Within 48 hours, the legal landscape governing social media and children shifted in ways that will take years to fully understand and verify.

On March 24, 2026, a Santa Fe jury ordered Meta to pay US$375 million for violating New Mexico’s consumer protection laws. The next day, a Los Angeles jury found Meta and Google’s YouTube negligent in the design of their platforms, awarding almost $6 million in damages to a single plaintiff.

The dollar figures are drawing headlines, but a $375 million penalty against a company worth $1.5 trillion is a rounding error. The award is less than 2% of Meta’s $22.8 billion net income in 2025. Meta’s stock rose 5% on the day of the New Mexico verdict, indicating how the market assessed the effect of the penalty on the company.

Fines without structural change are more akin to licensing fees than accountability. As a technology policy and law scholar, I believe the question of whether these verdicts will produce real changes to the products that millions of children use every day is more consequential than the jury awards.

The answer is not yet, and not automatically. A financial penalty does not rewrite a single line of code, remove an algorithm or place a safety engineer in a role that was eliminated to protect a quarterly earnings report. Meta and Google have signaled they will appeal, with First Amendment challenges to the product-design theory the likely central battleground.

The companies’ lawyers are likely to argue, with some justification, that the science linking the design of platforms to mental health harm remains contested, and that the companies have already implemented safety measures. In the meantime, Instagram, Facebook and YouTube will continue to operate exactly as they did before the verdicts.

https://youtu.be/cOXlSjvdvsw?si=II_g3oGo78CIOfCi

The verdicts against Meta pave the way for hundreds or even thousands of similar cases.

Consumer protection

Most coverage frames the New Mexico verdict as a child safety case. It is that, but it also has a more technically significant dimension: a consumer protection claim grounded in allegations of corporate deception. New Mexico Attorney General Raúl Torrez did not sue Meta for what users posted, but instead sued Meta for its false statements about its own platform safety, employing a novel legal approach.

For three decades, Section 230 of the Communications Decency Act has shielded internet platforms from liability for content generated by their users. Courts have interpreted Section 230 immunity broadly, and many earlier attempts to hold platforms accountable for child harm have foundered on it.

The New Mexico complaint, filed in December 2023, was drafted with explicit awareness of this obstacle. It asked a single question: Did Meta knowingly lie to New Mexico consumers about the safety of its products?

The jury’s answer was yes, on all counts, and its verdict rested on three distinct legal theories under New Mexico’s Unfair Practices Act.

The first was straightforward deception: Meta’s public statements, ranging from CEO Mark Zuckerberg’s congressional testimony claiming research about the platform’s addictiveness was inconclusive to parental guidance materials that omitted known risks of grooming and sexual exploitation, qualify as representations made in connection with a commercial transaction.

Users pay for Meta’s platforms not with money but with their data, which Meta then converts into advertising revenue. New Mexico successfully argued that this data-for-services exchange constitutes commerce under the state’s consumer protection statute, and that misrepresentations made within it are actionable regardless of Section 230.

The second theory was unfair practice, or conduct offensive to public policy, even if not technically deceptive. Here, the evidence centered on what Meta’s own engineers and executives knew and then ignored.

Internal documents showed repeated warnings: about child sexual abuse material proliferating on the platforms, about algorithms that amplified harmful content because it generated engagement, and about age verification systems that were essentially cosmetic. The company overrode those warnings for commercial reasons.

The jury was shown a specific sequence: Meta executives requested staffing to address platform harms, Zuckerberg declined, and the company continued to publicly represent its safety efforts as adequate.

The third theory was unconscionability: taking advantage of consumers who lacked the capacity to protect themselves. Children are the clearest possible case. Children cannot evaluate terms of service, cannot negotiate platform architecture, and cannot assess the neurological implications of engagement-maximizing design. Meta had comprehensive internal research documenting these vulnerabilities and chose to ignore rather than mitigate them.

Bellwether on addictiveness

The Los Angeles case, which concluded on March 25, tested a different theory. It was a personal injury trial rather than a government enforcement action.

The plaintiff, identified in court as KGM, is a 20-year-old woman who began using YouTube at age 6 and Instagram at age 9. Her lawyers argued that the platforms’ deliberate design choices such as infinite scroll, autoplay video and engagement-based recommendation algorithms were the causes of her addiction, depression and self-harm.

The jury found both Meta and YouTube negligent in the design of their platforms and found that each company’s negligence was a substantial factor in causing harm to KGM. Meta bears 70% of the liability; YouTube 30%. The individual $3 million compensatory award is modest. The punitive damages phase, still to come, will be calculated against each company’s net worth and is likely to produce a very different number.

An attorney for, and family members of, child victims of social media harms react to the verdict in a lawsuit in Los Angeles on March 25, 2026. Frederic J. Brown/AFP via Getty Images

Beyond the general precedent, this case matters because it is a bellwether. It was selected from a consolidated group of hundreds of similar lawsuits to test whether a product-design theory of liability could survive a jury trial, and it did. That finding has immediate and concrete implications: Each of those plaintiffs now litigates on a stronger footing, and if the damages awarded to KGM are even partially scaled across similar cases, the total financial exposure for Meta and YouTube moves from hundreds of millions to billions of dollars.

More importantly, the bellwether verdict signals to every other plaintiff, attorney and state attorney general that this legal pathway is viable, and to every platform that the courtroom is no longer a safe harbor. The legal strategy established that negligence claims against platform design are viable in California courts.

Public nuisance

Beginning May 4, 2026, Judge Bryan Biedscheid in the New Mexico case is scheduled to hear the public nuisance count without a jury in a bench trial. Public nuisance is a legal doctrine traditionally used to address conditions that harm the general public; it has been invoked over contaminated water, lead paint in housing stock and opioid distribution networks.

New Mexico is arguing that Meta’s platform architecture constitutes exactly such a condition. If the judge agrees, the remedy is not a fine. Instead, it is an abatement: a court order requiring Meta to eliminate the harmful condition.

Attorney General Torrez has already been explicit about what he will ask for: real age verification, not a checkbox asking users to confirm they are old enough; algorithm changes; and an independent monitor with authority to oversee compliance. These are structural demands on how the platform operates.

This is where drawing a parallel with Big Tobacco is apt. The tobacco litigation of the 1990s ultimately produced not just financial settlements but the Master Settlement Agreement, which imposed permanent restrictions on marketing practices and funded public health programs for decades. The public nuisance theory in the New Mexico case is designed to produce an analogous structural outcome for social media.

Precedent for tidal wave of cases

The most significant effects of the two verdicts concern evidence and precedent. For the first time, a jury has examined Meta’s internal documents – emails from engineers warning about self-harm, the rejected safety proposals and Zuckerberg’s personal decisions to prioritize engagement over protection – and returned a verdict that those documents mean precisely what they appear to say.

That finding, and the legal theories that produced it, is now part of the foundation on which 40-plus pending state attorney general cases, thousands of individual lawsuits and a federal trial later this year are likely to be built.

The abatement phase, beginning May 4, may prove more consequential than the dollar amounts. If the judge in the New Mexico case – or any judge in a subsequent case – orders real age verification, algorithm changes and an independent monitor, that would be a true structural change.

Carolina Rossini, Professor of Practice and Director for Program, Public Interest Technology Initiative, UMass Amherst

This article is republished from The Conversation under a Creative Commons license. Read the original article.

#facebook #instagram #snapchat #socialMedia #tiktok #youtube
Newsroom

FYI: Snap's AI Clips let developers monetize photo-to-video lenses on Snapchat: Snap today launches AI Clips in Lens Studio, a closed-prompt format turning photos into 5-second videos for Lens+ subscribers, with direct developer payouts via Lens+ Payouts. https://ppc.land/snaps-ai-clips-let-developers-monetize-photo-to-video-lenses-on-snapchat/ #Snapchat #AIClips #LensStudio #PhotoToVideo #VideoMarketing
Protecting children online: EU Commission opens proceedings against Snapchat

The European Commission has opened formal proceedings to examine whether Snapchat complies with the Digital Services Act (DSA) with regard to the protection of children.

Vertretung in Deutschland

The EU Commission has opened formal proceedings against Snapchat, examining whether Snapchat complies with the DSA. The Commission suspects that Snapchat fails to protect children from harmful content, cybergrooming and "recruitment for criminal activities".

According to the KIM study 2024, one in ten children aged 6 to 13 uses Snapchat every day or almost every day. Use increases with age: 7% of 6- to 7-year-olds already use Snapchat at least once a week, and among 10- to 11-year-olds the figure is 22%. Fun fact: according to the terms of service, use in the EU is only permitted from age 13 – and additionally requires parental consent.

https://ec.europa.eu/commission/presscorner/detail/de/ip_26_723

#FediEltern #FediLZ #Medienbildung #SocialMedia #Snapchat #Kinder #DSA #Cybergrooming #Funfact

I haven’t watched streamers in forever but what the heck is this, it looks like #Emiru is wearing a #Snapchat filter as a mask.

Women wear makeup this heavy now? Or is this just a streamer thing?

https://www.youtube.com/watch?v=oFTBf-wnVe0

#Twitch

EU #Kommission opens proceedings against #Snapchat

An initial review of four websites with pornographic content found indications of inadequate age verification.

https://t.ress.at/uApXT/

Age issues: Commission demands improvements from Snapchat and porn platforms

The EU Commission announced today that it is demanding better age verification from major adult platforms and Snapchat.

https://www.heise.de/en/news/Age-issues-Commission-demands-improvements-from-Snapchat-and-porn-platforms-11225664.html?wt_mc=sm.red.ho.mastodon.mastodon.md_beitraege.md_beitraege&utm_source=mastodon

#DigitalServicesAct #EU #Jugendschutz #Netzpolitik #Snapchat #Twitter #news

heise online