Martin Degeling

Freelancer working with AI Forensics and Institute for Strategic Dialogue. Platform auditing, TikTok, Privacy
Private website: https://martin.degeling.com
Pronouns: he/him
Ok, replication shows it's not clear what's happening there. *Some* uploads are automatically labeled. But when you download the videos from #TikTok again, they no longer contain any C2PA metadata.

The European Commission "accepts TikTok's commitments on advertising transparency": https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2940

The issues they now commit to fixing are mostly what we already criticized more than 1.5 years ago. Let's hope the fixes come fast.

Here @alhohlfeld had already summarized the problems in Feb 2024.
https://tiktok-audit.com/blog/2024/Tik-Tok-oclock/

In their latest risk assessment report (https://www.tiktok.com/transparency/en/dsa-transparency/) #TikTok claims that uploaded AI content carrying #C2PA information is automatically labeled. I tested this yesterday with one video, and it seems this is not true.

I downloaded a video from Sora2 - which contains the C2PA metadata and the visible watermarks - and uploaded it to TikTok: no label is shown (after 24h).

Even worse: during processing on TikTok's side, the C2PA metadata is removed.
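To compare the file before upload and after re-download, a crude first check is whether the C2PA manifest's JUMBF label (the ASCII bytes "c2pa") appears anywhere in the file. This is a heuristic sketch, not a spec-compliant C2PA parser (for real audits, a dedicated tool like c2patool is the right instrument):

```python
C2PA_MARKER = b"c2pa"  # JUMBF label used by C2PA manifest stores


def has_c2pa_marker(path: str, chunk_size: int = 1 << 20) -> bool:
    """Crude heuristic: scan the file for the ASCII 'c2pa' marker.

    Not a spec-compliant C2PA parser; false positives and negatives
    are possible. Reads in chunks, keeping a small tail so a marker
    split across a chunk boundary is still found.
    """
    tail = b""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            if C2PA_MARKER in tail + chunk:
                return True
            tail = chunk[-(len(C2PA_MARKER) - 1):]
    return False
```

Running this on the original Sora2 file and on the video re-downloaded from TikTok would show the marker present in the first and absent in the second.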

We also spent some time trying to understand how these accounts monetize. TikTok is not very transparent about monetization at the video level, but simply looking at the minimum thresholds, only 13% of the videos were even eligible for monetization.
My gut feeling is that the majority of agentic AI accounts would not exist if the AI industry weren't offering an endless number of free-for-now tier plans, given the actual costs of generating videos.
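The eligibility check behind a number like that 13% can be sketched as a simple filter. The thresholds below (minimum video length, minimum follower count) are illustrative values modeled on TikTok's publicly stated Creator Rewards requirements, not necessarily the exact criteria used in the report:

```python
from dataclasses import dataclass


@dataclass
class Video:
    duration_s: int        # video length in seconds
    account_followers: int


# Illustrative thresholds modeled on TikTok's public Creator Rewards
# requirements; the report's exact criteria may differ.
MIN_DURATION_S = 60
MIN_FOLLOWERS = 10_000


def eligible(v: Video) -> bool:
    return v.duration_s >= MIN_DURATION_S and v.account_followers >= MIN_FOLLOWERS


def eligible_share(videos: list[Video]) -> float:
    """Fraction of videos that clear the minimum monetization thresholds."""
    if not videos:
        return 0.0
    return sum(eligible(v) for v in videos) / len(videos)
```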
Comments below AI Slop:
I was quite surprised that even under the most fake-news-like entries, the AI aspect was rarely discussed. In times when even real news is often dismissed as fake, only 5% of comments mention AI.
[Caveat: we only parsed the top 50 comments and replies.]
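A figure like that 5% can be approximated with a keyword match over the parsed comments. The keyword list below is my own illustration ("ki" is the German abbreviation for AI), not the actual matching rules used in the study:

```python
import re

# Hypothetical keyword list for flagging AI mentions; the study's
# actual matching rules are not public.
AI_PATTERN = re.compile(
    r"\b(ai|ki|artificial intelligence|deepfake|sora|midjourney)\b",
    re.IGNORECASE,
)


def mentions_ai(comment: str) -> bool:
    return bool(AI_PATTERN.search(comment))


def ai_mention_rate(comments: list[str]) -> float:
    """Share of comments that mention AI at least once."""
    if not comments:
        return 0.0
    return sum(mentions_ai(c) for c in comments) / len(comments)
```

The word-boundary anchors matter: without them, "brain" would match the substring "ai" and inflate the rate.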

The Guardian and Spiegel covered our main results, focusing on the prevalence of
* Anti-migrant fake news https://www.theguardian.com/technology/2025/dec/03/anti-immigrant-material-among-ai-generated-content-getting-billions-of-views-on-tiktok
* Sexualized AI-generated images of women https://www.spiegel.de/netzwelt/apps/ki-konten-fluten-tiktok-mit-fakebildern-von-kindlich-aussehenden-frauen-a-fae7c703-b708-49a4-85fd-bd4382396ca2

Here are two additional observations I'm still thinking about:

We just published our follow-up report on AI-generated "slop" on #tiktok. After testing #ai-slop on Instagram & TikTok this summer, showing that TikTok surfaces more AI content in search results, we monitored 354 accounts for 4 weeks that amassed 4.5 billion views. Still, only half of it is labeled as AI-generated.
1/ A longtime Wired editor just wrote a mush-brained essay about how he totally missed the political rot of Silicon Valley (& still doesn't get it). But in the late 1990s, a Wired journalist warned of a toxic ideology bubbling up from tech. Paulina Borsook has largely been erased. Let's change that
In the end, each video took around 5 minutes to process. That does not scale to millions very well, but I still prefer it over spending money on some data center.
The M2 is also pretty energy efficient compared to separate GPUs (maxed out at 50-60 W under full load). And at the time (it's only been 2 months), I also couldn't find an Ollama hosting service that would provide the same quality of structured JSON output that the local Ollama could.
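The structured-JSON part of the local pipeline can be sketched as follows. Ollama's /api/chat endpoint accepts a JSON schema in the "format" field to request structured output (supported in recent Ollama versions); the model name, field names, and schema here are my illustration, not the study's actual setup:

```python
import json

# Assumed per-video output schema; field names are illustrative.
VIDEO_SCHEMA = {
    "type": "object",
    "required": ["is_ai_generated", "topic", "mood"],
    "properties": {
        "is_ai_generated": {"type": "boolean"},
        "topic": {"type": "string"},
        "mood": {"type": "string"},
    },
}


def build_chat_payload(model: str, prompt: str, image_b64: str) -> dict:
    """Request body for Ollama's /api/chat endpoint. Passing a JSON
    schema as 'format' asks the model for structured output."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt, "images": [image_b64]}],
        "format": VIDEO_SCHEMA,
        "stream": False,
    }


def parse_reply(raw: str) -> dict:
    """Parse and minimally validate the model's JSON reply."""
    data = json.loads(raw)
    missing = [k for k in VIDEO_SCHEMA["required"] if k not in data]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return data
```

Even with schema-constrained output, validating the reply before storing it catches the occasional malformed response from smaller local models.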
We used the music2emo model (https://huggingface.co/spaces/amaai-lab/music2emo) to get a textual description of the mood of the underlying sound, and combined screenshots into collages to reduce the input to the vision-capable Qwen model.
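The collage step can be sketched with Pillow: tile a handful of frames into one grid image so the vision model gets a single input instead of many. Grid width and tile resolution here are illustrative choices, not the study's exact parameters:

```python
from PIL import Image


def make_collage(frames: list[Image.Image], cols: int = 3,
                 tile: tuple[int, int] = (320, 180)) -> Image.Image:
    """Tile video frames into a single grid image.

    Each frame is resized to `tile` and pasted left-to-right,
    top-to-bottom. Grid size and tile resolution are illustrative,
    not the study's exact parameters.
    """
    rows = -(-len(frames) // cols)  # ceiling division
    canvas = Image.new("RGB", (cols * tile[0], rows * tile[1]))
    for i, frame in enumerate(frames):
        thumb = frame.resize(tile)
        canvas.paste(thumb, ((i % cols) * tile[0], (i // cols) * tile[1]))
    return canvas
```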
Music2emo (a Hugging Face Space by amaai-lab): upload an audio file to analyze its emotional characteristics. The model predicts mood tags, valence (positivity), and arousal (intensity) scores.