so Fedi is an echo chamber, sez a recent unnamed toot. I see that too.

But then they go on to say we should stop bullying fucking assholes (people with "other" viewpoints, like AI shills) and let them play in our jungle gym.

The toot has already got tons of replies, but I just wanted to say:

No. Fuck you. This community is chock full of people from marginalized communities, and any push to get us to "open up" to people with "different viewpoints" that include fucking fascist autocorrect bots that are shredding the net to devour it is just fucking stupid.

I don't give a shit if Fedi grows. I rather like being a place where people from famous artists like Michael Whelan to community instigators like VantaBlack and PhoenixSerenity feel comfortable interacting with everyone directly.

The people most in favor of this fucking nonsense have a HUGE ideological overlap with those who burn crosses, so to speak.

If that means we stay small? Good. Fuck outta here. If we disagree with you to the point we get savage at you, YOU DON'T DESERVE TO BE HERE

@TeflonTrout I like Machine Learning, Deep Learning, Reinforcement Learning, I like that I can interact with people that also research that and post their findings on the fediverse and don't have to go to X to do so. Idk why I don't deserve to be here, but hey, I respect your opinion

@budududuroiu

are you using LLMs and pretending they are gen AI?

If not, LLMs are badass tools in their own right, and if you work with those that is badass.

Are you using one of the big bastards' slop dispensers, like Claude, et al.? Then I really don't care if I never see you again.

@TeflonTrout I don't use US models, and if I did, it would only be to distil their outputs. My business helps people set up open-weight models on whatever GPUs they rent (whether from hyperscalers or their own hardware), and do distributed inference. The market was much better before the big bastard labs got HIPAA and other certifications, but I can't complain; it's still a fun challenge. I'm old, so my background is in training Generative Adversarial Networks to do unsupervised anomaly detection (think anomalous radiographs).
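(For anyone curious how GAN-based anomaly detection works, here's a toy sketch of the idea, in the AnoGAN style: search the generator's latent space for the best reconstruction of a sample, and use the leftover error as the anomaly score. Everything here is illustrative; the "generator" is just a fixed random linear map standing in for a trained network.)

```python
import numpy as np

# Toy sketch of GAN-based anomaly scoring. A real setup would use a
# trained neural generator; here a fixed linear map G_W stands in.
rng = np.random.default_rng(0)
G_W = rng.normal(size=(16, 4))  # fake generator: 4-d latent -> 16-d sample

def G(z):
    return G_W @ z

def anomaly_score(x, steps=1000, lr=0.01):
    """Find the latent z whose G(z) best reconstructs x by gradient
    descent; the leftover reconstruction error is the anomaly score.
    Normal samples lie near the generator's manifold and score low."""
    z = np.zeros(4)
    for _ in range(steps):
        z -= lr * (2 * G_W.T @ (G(z) - x))  # grad of ||G(z) - x||^2
    return float(np.linalg.norm(G(z) - x))

normal = G(rng.normal(size=4))  # a point on the generator's manifold
weird = rng.normal(size=16)     # a random point well off the manifold
# anomaly_score(normal) comes out far lower than anomaly_score(weird)
```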

Besides that, I use LLMs for what they're good at: fuzzing. I won't name names, for fear of litigious actors, but LLMs are great at reverse-engineering proprietary blobs that get in my way, and they make a great 'fitness' evaluator for evolving algorithms (instead of doing an exhaustive search, you use an LLM to guide parameter tuning).
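(The fitness-evaluator trick looks roughly like this. This is a sketch under assumptions: `llm_score` is a hypothetical stub standing in for a real LLM call that would be prompted with the candidate config and its eval logs, and the parameter names are made up.)

```python
import random

# LLM-guided parameter search: instead of sweeping the whole grid, a
# scorer ranks each candidate and the loop evolves toward better configs.
# `llm_score` is a stand-in; in practice it would be an LLM call.
random.seed(0)

def llm_score(params):
    # Hypothetical stub: higher is better. The stub happens to prefer
    # lr near 1e-3 and batch near 64; the loop doesn't know that.
    return -abs(params["lr"] - 1e-3) * 1e3 - abs(params["batch"] - 64) / 64

def mutate(params):
    # Halve, keep, or double each knob at random.
    return {
        "lr": max(1e-5, params["lr"] * random.choice([0.5, 1.0, 2.0])),
        "batch": max(8, int(params["batch"] * random.choice([0.5, 1.0, 2.0]))),
    }

def evolve(seed_params, generations=30, pop_size=8):
    best = (llm_score(seed_params), seed_params)
    for _ in range(generations):
        children = [mutate(best[1]) for _ in range(pop_size)]
        best = max([best] + [(llm_score(c), c) for c in children],
                   key=lambda t: t[0])
    return best[1]

best = evolve({"lr": 0.1, "batch": 8})
```

The loop only ever sees scores, never the objective itself, which is exactly why an LLM (or any black-box judge) can slot in as the evaluator.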

My belief is that LLMs are here to stay, and the only way forward for us laypeople to retain some semblance of power isn't rejecting LLM use, but making them so commoditised that large-scale datacentres become economically intractable. The OAI-Nvidia-Oracle-CoreWeave circlejerk investment is proof that capitalism has transcended labour and that "voting with your dollars" is powerless as direct action.

@budududuroiu

Now THERE is a nuanced and useful pov, if I do say so myself, because I 100% agree. It isn't LLMs specifically we hate, it's the ones the dicks in US hype land have overstuffed and are pretending are AI.

Those, and the artless slop generators are what we hate. But using LLMs to pore over huge datasets looking for things experts like yourself have trained them to find? That is the Good Stuff.

I agree that there's no stopping the bad stuff completely (just like we still get spam emails), but I sure as shit don't want anything to do with it.