I literally can't stop screaming into my pillow about people who rely heavily on hashtags being dubbed "spam" by overbearing algorithms/moderation that shadowban whole groups of people, and by randos with a personal vendetta (regardless of whether a given social network even has an algorithm),
while certain users engage in actually deceptive behaviour like *checks the trending feed* claiming to give away free laptops/appliances without bothering to be transparent enough to reveal that the offer is restricted to academics living in a specific region (unless users click the link and dig into the fine print).
I really don't see how this is any better than posting get-rich-quick schemes in the comments of multiple unrelated posts...
#Mastodon #Fediverse #DeceptiveMarketing #DeceptiveAdvertising #EngagementFarming
#LastSeen
Unlike some scammy pages (e.g. on F💩cebook) that distribute fake AI-generated photos for dubious reasons (like #EngagementFarming), these are REAL photos.
❝#LastSeen: Pictures of Nazi deportations is an international collaborative research project in which renowned institutions from the field of Holocaust research and education have joined forces to systematically compile, analyze, and digitally publish photographic images of Nazi deportations of Jews, Sinti and Roma, as well as victims of Nazi "euthanasia" programs.❞
Anti-ICE AI slop on YouTube
I just stumbled across a whole channel of anti-ICE AI slop, some of which is rather obvious:
https://youtube.com/shorts/7GJ0hF42Acg?si=W9EylUD84jbzgUTH
Worryingly, there are videos that have been viewed 10 million times:
https://www.youtube.com/shorts/LqsdT0UneY4
This has been viewed 6 million times and it’s one of the few things on the channel I’m not 100% certain is AI generated:
https://youtube.com/shorts/0AbCHxB8jx4?si=8zO9N6Jngh1raY3v
Is this just straightforward engagement farming of the most cynical sort? Or could it be politically motivated?

‘AI slop’ as a form of affect mining which transforms engagement farming
The downside of getting interested in AI slop is that my YouTube feed is now fucking full of it. Much like TikTok’s algorithm rapidly identified categories I’m particularly responsive to (in my case, cat videos and martial arts demonstrations), the YouTube algorithm has identified two categories I’m particularly susceptible to: motivational running videos and dogs bonding with humans. The former category is mostly human-generated content (of wildly varying quality) at present, but the latter is almost entirely AI slop at this stage. There’s a particular genre of videos where ‘adoption animals choose their humans’:
https://www.youtube.com/watch?v=ph8WqK0bLug
What I find so unsettling about this genre is that the clip compilations seem to combine real and AI-generated videos in roughly equal measure. I originally assumed most of them were AI-generated because it seemed implausible that people would sit around in chairs while the dogs chose their humans. But this, I discovered, is a real practice, which illustrates the risk of relying on imperfect social knowledge to detect AI videos. There are some cases where the videos are obviously AI-generated, with red flags like blurred faces, jerky movements, implausible camera angles or inconsistent body language. But for the most part I find it hard to tell.
A commenter on the above video says “these feel staged”, which suggests that even the real films might have some theatricality about them. It’s striking how often the ‘chosen’ human is sitting in the front row or on the aisle. There are lots of these videos I think might be AI-generated, but I’m far less certain about them than I am about standalone videos, as opposed to these hybrid clip shows. I’m sure there are some real videos in here, alongside maybe 50–75% AI slop? I’m curious what ratio other people would arrive at after watching closely.
If there are real videos in which a human is ‘chosen’ by a dog in this setting*, it suggests something interesting about the political economy of AI slop. If there’s a genre of video which reliably elicits a significant audience response (in this case, humans crying after their adoption animal chooses them), then we could see video models as providing a means to mine this affect: they help the creator get to the core of the scenario without the contingent fluff inevitably involved in recording real events. Once the affect has been mined, it can be synthesised ad infinitum until it has been exhausted and there’s no longer sufficient audience response to justify continued engagement farming in this area.
This suggests to me a radical intensification of engagement farming in which certain kinds of affective responses might come to be ‘used up’. Cat videos became passé through over-exposure (raising the bar on what counts as cute, funny, engaging, etc.), but this didn’t fundamentally lead to a loss of interest in cat videos; it just meant the category would be treated by many at a more cynical distance. In contrast, I wonder if a form of depletion might actually be possible when it comes to affect mining? What could the downstream consequences of this be for society?
*Is it just me, or is there something vaguely Pentecostal about the whole scenario?

You can’t understand ‘AI slop’ without understanding engagement farming
This is a point which seemed so obvious to me that I’m surprised to realise it needs to be spelled out. Rather than seeing ‘AI slop’ as some exogenous factor now swamping previously functional social media platforms, we need to see it as an outcome of existing practices of engagement farming. The political economy of social platforms has, over many years, inculcated a strategic orientation towards engagement because of the direct monetary and indirect status rewards which come from maximising it. In practice, this means using whatever techniques are available to maximise engagement with your content while minimising the cost. In essence, it treats other people’s attention as a resource to be farmed, with the ‘farming’ being a matter of strategic action which makes it more likely their attention will be translated into engagement with specific content.
In practice this is almost painfully mundane. It’s a matter of tweaking content and its framings in ways likely to increase engagement. When people say that the algorithm creates certain effects on platforms (e.g. increases the amount of emotive content), this is the missing step through which platform architectures bring about human action. It’s because strategic actors recognise that the algorithm rewards certain things (or at least imagine it does; there’s loads of folk theory here) that they create content intended to exploit that characteristic. There’s also the practice of preparing content to appeal directly to individual users without relying on the mediation of the algorithm. Indeed, the most effective engagement farming involves speaking to both ‘audiences’ at the same time: producing content which directly grabs people and feels ‘authentic’ while also being optimised for algorithmic distribution.
The flood of AI slop we now see on platforms reflects a shift in engagement farming practices. It’s now possible to do engagement farming effectively at scale because LLMs make content creation so easy. There’s also a disturbing lack of AI literacy, sufficient to create attentional markets ripe for exploitation by AI content that is startlingly obvious if you have any sense of what you’re looking for. The problem is the political economy of the social platform rather than the AI content per se, even if in practice the two run together. This matters because we can’t have a meaningful conversation about the problem of ‘AI slop’ without talking about how fundamentally broken social media platforms are.
#AISlop #algorithmicFolklore #algorithms #engagementFarming #platformEconomics #politicalEconomy #visibility
PsyPost: Negativity drives engagement on political TikTok. “A new study published in Computers in Human Behavior suggests that political videos on TikTok that criticize opposing political parties and use emotionally charged or uncivil language tend to generate higher levels of engagement.”
https://rbfirehose.com/2025/10/18/psypost-negativity-drives-engagement-on-political-tiktok/