I love Mastodon, but sometimes I miss algorithmic timelines. Not because I want to be fed slop, but because federation makes it genuinely hard to discover people outside your bubble.

So I built a tiny script that scrapes the public timeline once a day and hands me 10 posts it thinks I'll care about. Static keyword matching (opinions about AI, trans stuff, ADHD, Go, cooking) + a pass through an LLM to filter out engagement bait and generic motivational garbage.
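Roughly this shape, if anyone's curious (simplified sketch, not the real thing; the instance URL, keyword list, and the LLM step are placeholders):

```python
# rough sketch of the digest script; instance, keywords,
# and the LLM call are stand-ins, not my actual config
import requests

INSTANCE = "https://example.social"  # placeholder instance
KEYWORDS = ("ai", "adhd", "golang", "cooking")  # static interest keywords

def fetch_public_timeline(limit=40):
    """Pull recent public posts via Mastodon's /api/v1/timelines/public."""
    r = requests.get(f"{INSTANCE}/api/v1/timelines/public",
                     params={"limit": limit})
    r.raise_for_status()
    return r.json()

def keyword_pass(post):
    """First pass: cheap static keyword match on the post's content."""
    text = post.get("content", "").lower()
    return any(k in text for k in KEYWORDS)

def llm_pass(post):
    """Second pass: ask a local model whether the post is engagement
    bait or generic motivational filler. Stubbed out here."""
    return True  # placeholder for the actual LLM call

def daily_digest(n=10):
    candidates = [p for p in fetch_public_timeline() if keyword_pass(p)]
    return [p for p in candidates if llm_pass(p)][:n]
```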

Now I get a little digest every morning. No auto-likes, no bot spam, just: "hey, here's some stuff you might actually want to read."

AI for content generation in social media? Hard pass. AI to help me find content I wouldn't have seen otherwise? That's the good stuff.

@owlex something something .. need consent from people on local timelines to put their posts through this
@Li @owlex this is kind of a silly take. would it be okay if it was just a bunch of if statements with keyword matching? what makes a neural network any different?

@stag @owlex "a bunch of if statements" - you have just described every program in existence ever ...

.. hiding posts based on keyword matching is a completely different thing, and it would be different because it serves a different purpose.

@Li @owlex ok? what makes a statistical model so different? it's just some vectors, no?

edit: fixed typo
@stag @owlex it's not a matter of searching for keywords and not displaying posts based on them.
@Li @owlex i mean, it kind of is in a way though. and even if you don't believe that, so what if it's not purely based on a simple substring filter? what gives?
@stag @owlex the technical implementation details are not the problem, it is a social issue .. again, 'a bunch of if statements' describes nearly every program ever
@Li @owlex

you're moving the goalposts

you first said that consent must be given. then i said that if it was just a keyword match, you would have no issue, and you said that wasn't fair, because of the technical differences.

and now you're saying "the technical implementation details are not the problem"

also, separate point: by publishing your writing publicly (in whatever form) online, you implicitly agree that people will download and use it. this does not harm the authors, it's not like there's a model being trained on the posts.

@stag @owlex you said "what makes a bunch of if statements any different" .. you asserted that the difference was technical, not me. i always asserted that it was a consent/social issue from the start, and said "it's no different" was dumb because that reasoning applies to literally any algorithm ever .. blocking based on keywords is a literal function in mastodon and is about trying to prevent yourself from being exposed to triggering content. this serves a different purpose and is something i come here explicitly to avoid having done to me

and no, posting online is not consent to throw my shit into an LLM, thanks. stop telling me the terms of my own posts and what is and isn't harmful to me -- thanks.

@Li @owlex

i find it very hard to agree with your take

if you publish online it obviously doesn't mean consent for every use of the work. and if a model were being trained on your posts, this objection would be fair.

but this is not exploitative, or really harmful to the authors in any way at all. it is a personal reading aid, one could say. when you post publicly, you consent to being read, but you do not get a say in how a reader reads your words.

i presume you will not object to a screen reader, or a translator, or even just skimming past a post. if i can do that, why not a model? assuming the model is local, of course. a model is just another tool, and you stated above that it was not a technical issue. what makes the model different as a "social issue"?

do you believe that people should be forced to seek consent for running private tools to filter the content they receive?
@stag they won’t change their opinion. You’re wasting your energy. In my opinion they only see black and white, no nuances. They could block non-followers’ requests for their posts, but they post publicly on the internet. Thanks for discussing this topic
@owlex yeah honestly lol it's giving me "never argue with stupid people..." vibes, i'm giving up atp
@stag I gave up. I even tried to do it in a really polite way. They didn’t accept me having a different opinion

@owlex @stag "having a different different opinion"

no its "i am throwing your posts into an LLM without consent. and no i will not stop"

that is not "polite" nor is it a "different opinion"

fuck you ^^

@Li @stag then please also tell me which apps I am allowed to use to view your posts. I was polite. You did not actually ask me what I am really doing with my automation. Thank you for this unproductive interaction
@owlex @stag but you are not "viewing" them, you are "processing" them, which is the actual issue. >_>
@Li @owlex really, please tell me: what is the difference between them? is there a point to this distinction?

if my viewer can hide posts with certain words, is that processing? if my viewer can translate posts, is that processing? if my viewer can generate tts audio, is that processing?
@stag @owlex ? the translation is so you can read it, the tts audio is so you can read it, or well, hear it. you're displaying the same thing, just as audio instead of text. translation is maybe the only real issue here since it could say something different from what you actually wrote .. or said. but this isn't that -- stop trying to conflate these..
@Li @owlex and so what if i put it through a tool that tells me if it is relevant to me? if it works by non-neural methods, is that okay? again, you claim it's not a technical issue but it sure seems like that's not the case
@stag @owlex then it is not "reading the posts", it is doing something else with them, something the authors may not be okay with, in the same fucking way the CIA scanning all of the internet for anything that says "bomb" isn't actually fine, or an advertiser doing it for "computer" to then show you ads for computers .. fucking
@Li @owlex okay, so if i make an addon to my viewer that checks if a post contains words like "ai" and, if it does, adds a label saying i probably wouldn't like it, then that is not allowed by your standards?
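to be concrete, the entire hypothetical addon would be something like this (made-up names, obviously; this is the whole "bunch of if statements" point):

```python
# the whole hypothetical addon: one keyword check, one label
def label_for(post_text: str) -> str | None:
    if "ai" in post_text.lower():
        return "you probably wouldn't like this"
    return None
```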
@stag @owlex you're conflating things .. again -- first, it's displaying things it thinks you "would" like, not hiding things you potentially won't. second, "i don't want to see things with 'ai' in them" is also just a thing you could see from reading it. also, content warnings are literally made for this exact purpose .. i add those on myself .. so you can do that .. ??