how hard do you have to fuck up posting to turn into a main character on *mastodon*.
@davidgerard which of them? Because one of them demanding Mastodon make room for AI bros and explicit fascists is <checks notes> the official Mastodon non-profit's "senior product advisor."
@rootwyrm @davidgerard Eugen is doing a heel turn now?
@reflex @davidgerard nope, Eugen started the other main character by telling him basically "I'm not the manager, dipshit."
@rootwyrm @davidgerard Yeah, my bad for not reading further down the 800+ posts of my timeline first lol. I purposefully stay away from here on weekends so monday is a bit confusing. My bad!
David Gerard (@[email protected]): "the AI bros are OUTRAGED and wish to speak to the MANAGER" [attached: 1 image]

@rootwyrm @davidgerard A bit of a tangent, but polls show the public is overwhelmingly hostile to AI, so "mastodon is hostile to AI" isn't some odd thing. Yet it leads these bros to question Mastodon/fedi and ask what needs to happen to change that sentiment, instead of asking the real question: why do other social networks not show the hostility to AI that masto/fedi does, given that the latter aligns with public sentiment?

Ie: the complaint is the lack of manipulation.

@reflex @davidgerard to put an even finer point on it still, because it's very easy to:

Replace 'AI' with 'fascism' and they would be saying the same things. Replace it with 'Nazis' and no surprise, they are saying *exactly* that too. (But don't you dare call them Nazis!)

They love BSky because it platforms and defends bigots, terrorists, and Nazis and is at war with its own users over it.

@reflex @rootwyrm @davidgerard I know this might sound like a stretch, but this is the core of the problem with Firefox adding an AI kill switch instead of shipping all the AI stuff as installable plugins. It's a need to create a political minefield to satisfy identified opportunities without considering the established method for addressing them, specifically because that method doesn't allow for coercion, manipulation, and marketing.
@reflex @rootwyrm @davidgerard I don't think the proponents of these things *know* that is the reason they are avoiding the established method, but I don't think they know they don't know either. Then when it's too late, they'll not be able to make the connection to their initial flailing and they'll wonder what happened
@reflex @rootwyrm @davidgerard that's why I jump on these moments to say *something* - I don't care how harsh it sounds or whose heroes I offend
@reflex @rootwyrm @davidgerard https://poll.qu.edu/poll-release?releaseid=3955 "Fifty-five percent of Americans think AI will do more harm than good in their day-to-day lives" so yeah if mastodon seems hostile to AI, it's closer to the mean position than whatever tech delusion hugbox these people are posting from.
i know of at least one person from here who has gone back to posting on twitter (yknow the nazi slop bucket) because their fee-fees were hurt by all the criticism here - cool, have fun with that!
The Age Of Artificial Intelligence: Americans' AI Use Increases While Views On It Sour, Quinnipiac University Poll On AI Finds; 7 In 10 Think AI Will Cut Jobs With Gen Z The Most Pessimistic | Quinnipiac University Poll

"The contradiction between use and trust of AI is striking. Fifty-one percent say they use AI for research, and many also use it for writing, work, and data analysis. But only 21 percent trust AI-generated information most or almost all of the time. Americans are clearly adopting AI, but they are doing so with deep hesitation, not deep trust," said Chetan Jaiswal, Ph.D., Associate Professor of Computer Science and Associate Chair, Department of Computing, Quinnipiac University School of Computing and Engineering.

@jplebreton @reflex @davidgerard you know what they call people who don't get up from the table when the Nazis sit down?
Nazis.
So yeah. We're glad to see that Nazi go, and please make sure to document them for the tribunals later.
@jplebreton @rootwyrm @davidgerard And my understanding is that the numbers are getting worse every few months. The delusion is over, it's just about how long the rotting corpse can be kept on life support.

@reflex @jplebreton @davidgerard actually, those numbers are completely fabricated. It's way, way more than "55%". Way more. Like all polls, it was rigged as much as possible so the chatbot vendors could use it for marketing. "LOOK HOW MUCH EVERYONE LOVES US!"

The real numbers are basically: every fucking one of your neighbors would not hesitate for one second to firebomb a data center and ram a railroad spike through Sham Altman's skull.

@reflex @jplebreton @davidgerard and it's so blatantly obvious the whole thing was rigged. 76% of people chose "hardly ever." Not trusting the LLM was not an option. Not trusting Sham Altman was not an option. Your choices were 'I love LLMs' or 'I don't love LLMs yet' basically.
@rootwyrm @reflex @davidgerard i agree the numbers were almost certainly cooked in various ways (not least that eg google, MS et al just straight up force it on all users of their existing software!) i just found it amusing that even in this context the widespread negative sentiment was impossible to conceal.
@rootwyrm @reflex @jplebreton @davidgerard Love it when they can't even win a poll that they rigged.
@errant @rootwyrm @jplebreton @davidgerard They just aren't hallucinating hard enough.
@reflex @rootwyrm @jplebreton @davidgerard I can see the prompt now... "Please fix the results of this poll so they go in my favor. Tell me I did very well. Tell me my ideas are correct and it is everyone else that is wrong. Omit no detail about how correct I am."
@rootwyrm @reflex @jplebreton @davidgerard please, only his close friends can call him Sham Altman. to us, he's just Sloppenheimer
@rootwyrm @reflex @jplebreton @davidgerard I regret to inform you that I disagree with this assessment. Some of my fellow volunteers have been known to use "AI" to commit acts of "art" which they think are necessary but which they don't have the time or desire or skill to accomplish on their own. I'm not excusing them, if you want art pay an artist FFS, and more if you want performance on demand; but I don't think they'd firebomb a data center unless it threatened their peace and quiet.

@reflex @rootwyrm @davidgerard

They want to force their algorithms on us.

@Quasit @rootwyrm @davidgerard They'd like nothing better. AI curated feeds inbound!

@reflex @rootwyrm @davidgerard

I wonder if they can find some way to force their algorithms on the whole Fediverse?

@Quasit @reflex @rootwyrm the Main Character in question works at Mastodon Social, so they're sure gonna try

@davidgerard @reflex @rootwyrm

I kind of have the feeling that Mastodon.social is a threat to the whole concept of Mastodon and the Fediverse. Am I the only one who feels that way?

@Quasit @davidgerard @reflex there's a reason I don't approve any follow requests from mastodon dot social, to say the least.
@rootwyrm @Quasit @davidgerard My understanding though is that it's a fairly low percentage of the overall network. I want to say around 5%? It's hard to tell, the early tools that tracked users and instances mostly stopped getting updated. If someone has a link to current statistics I'd love to see them.
@reflex @rootwyrm @Quasit 25%. so 75% of the fedi can tell m.s to whistle.
@davidgerard @rootwyrm @Quasit Where are you finding that stat? Not doubting it but I'd like to find a way to keep up on those stats.
@reflex @rootwyrm @Quasit a graph i saw of which entities are what shares of the network, if I remembered the link i'd post it but i don't and can't find it again quickly. may have been 2024 or later?

@reflex

I'm guessing part of the problem is that you really believe "the public is overwhelmingly hostile to AI" to be true?

It's difficult to ask, for example, public European institutions to consider Mastodon when having read the comments to this post by Mozilla:

https://mastodon.social/@MozillaAI/116279201448628866

@rootwyrm @davidgerard

@troed @reflex @davidgerard yes. Because, surprisingly, people hate it when they can't trust information, can't trust videos, can't get through phone systems, have systems that no longer function, and the list goes on.

So your entire argument is that because people were mean to a shitty organization pushing shitty software that has made the experience objectively worse, everyone is wrong?

@troed @reflex @davidgerard like, dude, I get it. You wanna be today's main character. WOO CLOUT.

But there's already two other people vying for that and you're not going to one-up either of them.

@rootwyrm

Your posts prove all the points that need to be proven here.

@reflex @davidgerard

@troed @reflex @davidgerard no? You're the one who's all debate me bro.
So debate me bro. Debate me.
Come on. I'm just asking questions.
Debate me bro.
Debate me bro or you're lame.
You're the one who said we had to have a discussion bro.
Why are you running away bro?
@rootwyrm @troed @davidgerard There are reasons they want algorithms and AI, those are forces they can control. Losing a debate is no big deal when you can algorithmically bury the evidence and refer to "grok" as the decider of facts.
@troed @rootwyrm @davidgerard I mean I'm not going to do your homework for you, but the most AI friendly polls have consistently shown this to be true. Furthermore, the responses to that post are very indicative of why fedi absolutely should be considered, it's more aligned with public sentiment than algorithmic corporate platforms.

@reflex

No, this has not consistently been shown to be true. You might think so because Mastodon - really - is a little weird bubble here.

I'll take my own profession as an example. I'm a very senior software dev turned cybersec. Walk outside of Mastodon and there's no question whatsoever that LLMs are _useful_ both in software dev and as reverse engineering and red teaming agents.

Here's Linux kernel devs:

https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/

Bagder of cURL fame says the same thing.

If you really believe differently I'm sorry but you're ... out of touch.

@rootwyrm @davidgerard

The Register: "AI bug reports went from junk to legit overnight, says Linux kernel czar." Interview: Greg Kroah-Hartman can't explain the inflection point, but it's not slowing down or going away.

@troed @rootwyrm @davidgerard Dudebro, my career is verifiable. I spent 11 years off and on at Microsoft in several roles, including Windows kernel engineer, and 8.5 years at Amazon, including being the architect of the "AI"-driven network we used for the Amazon Go stores. I have tons of former colleagues in both places.

They do NOT agree with your experience and are miserable having to 'prove' AI's usefulness to execs for the quarterly reports.

It's shit. The emperor has no clothes.

@troed @rootwyrm @davidgerard The luxury of leaving big tech is that I don't have to pretend a shit sandwich is an amazing hamburger just to keep my job.

@reflex

I didn't source _my_ experience now, did I? What were my sources again?

Here's Bagder from today: https://mastodon.social/@bagder/116354106408236089

Or let's take a well known writer just now:

https://mastodon.social/@harrymccracken/116358878961334760

@rootwyrm @davidgerard

@troed @rootwyrm @davidgerard I don't care what you source, if you are seeing value and not just lying to keep your job, that's a statement on your skill level, senior or not.

And since the Claude Code leak, it should be obvious to everyone just how much hot garbage these models produce. If you can't acknowledge that, you have only discredited yourself.

@reflex

Yeah, I'm seeing value. I was amazed at how well an LLM reverse engineered a proprietary binary and came to the same conclusions as I did in a tenth of the time.

I'm the one pushing _back_ towards AI usage where I'm currently consulting, btw. I don't consider it useful where secure coding is needed.

But not all code needs that.

@rootwyrm @davidgerard

@troed @reflex @davidgerard uhhuh, love the intentional deletion of context and refusal to discuss or accept any facts. While trying to argue with some of the leading experts in the world.

So what, are you a sock puppet for some slop peddler child? Or just a worthless slop peddler desperate to preserve your self-esteem as you get completely and utterly destroyed on the Internet?

@rootwyrm @troed @davidgerard What's amusing to me is that while speaking against AI publicly can definitely damage a career, nothing requires you to actively seek out discussions about it and debate the people pointing out why it sucks.

@reflex @davidgerard gotta love slop gobbling clowns that insist they are entitled to sealioning with multiple experts.

Including one of the guys who helps journalists unwind all the fun, fabricated, fantastical financials, hardware hilarity, infrastructure idiocy, and data center dishonesty.
Who also happens to have done stuff with real NN since the 90's.

@rootwyrm @davidgerard Watching colleagues who I know know better promote this, including in private conversations we've had in the past couple of years, has been disheartening. I get keeping your head down to protect your job, but actively cheerleading for what I assume is a chance at a promotion is incredibly unethical.

Even if the tool did all the things they say, it's built by fascists, pushes fascism, and consumes resources exorbitantly. We have an obligation to reject it.

@rootwyrm

Not that I believe you to be debating seriously, but ok:

I'm one of those people who wrote back-propagating neural networks in the early nineties. I'm also an LLM-for-coding skeptic who changed his mind after actually putting my convictions to the test and using it in various situations. Oh, and yeah, I am that expert in *hands waving* shitloads of stuff. You're not argument-from-authority-winning here.

@reflex

I select which assignments I take on. I have no need to cheerlead anything. As I wrote to you, but I'm not sure you understood, I'm the person saying _no_ to AI development where I consult right now (since it's in a sector that needs secure coding).

I also use Mistral LLMs, since they do care about what data they train on.

@davidgerard

@troed @rootwyrm @davidgerard In other words, yes you should know better, and likely do know better, but it's more profitable to pretend the sky is purple. If you are as experienced as you claim and you are busy arguing against the fact that AI-generated code is a shit sandwich, you are literally lying for some personal motive.

There is no 'debate' to be had here. It's shit code. If you are going to claim it's not when we can all see Claude Code, you are either incompetent or lying.

@reflex

Or I'm simply more competent than you are, and can accept the fact that LLMs are useful for a lot of tasks even though they're not suitable for some.

I laugh at everyone producing public SaaS written with LLMs - since I'll be able to charge lots of money fixing all those security holes.

The two mods for Hytale and the local meshtastic network planner I've written with LLM aids are doing just fine though.

I guess in your world everything is always either/or.

@rootwyrm @davidgerard

@troed @rootwyrm @davidgerard When you look at the code leak for Claude Code, do you honestly see high quality code that you'd be happy to have your large scale projects utilize?

@reflex

What did I just write about those that use LLMs to put SaaS into public production ... ?

That doesn't mean they're useless. Here - watch me hack an IoT device using very low level reverse engineering. Then recall what I said about being impressed with the RE an LLM did on a fully proprietary binary.

I know my stuff. I'm saying they're useful. That doesn't mean that they're 100x coders taking over the world.

https://video.troed.se/w/kfbeBKcDuZt2KcyMx2kfsq

@rootwyrm @davidgerard

PeerTube: "Hacking the Minut M2 IoT sensor"
@troed @rootwyrm @davidgerard Again, when you look at the Claude Code leak, the one written by those who understand the tool far better than you or I, do you see code *YOU* would want to use and maintain in production?

@reflex

see previous reply

@troed In other words you won't answer. You know how the answer would reflect on you.

I think we are done here. Good luck with your slop.

@reflex No, I'm saying I have answered and I'm waiting for you to acquire knowledge before repeating the same things over and over.

The difference between us is that I put my beliefs to the test. You haven't.

Of course I maintain the three apps I've made with the aid of LLMs and published. They're even open source - anyone's free to have a laugh. They do the job, and they're not security critical.

What other areas of society do you believe benefit from you voicing your uninformed opinions?

@troed @reflex "they do the job, they're not security critical": you are producing pollution. You are shitting in the reservoir and telling everyone else it's good to shit in the reservoir too because it's easier than using the fucking toilet. That's bullshit on several levels.

@troed @reflex do you have no qualms about supporting a fraudulent product? Whether or not the stochastic LLM device happens to generate output that you find personally useful is irrelevant to the larger issue of fraud: LLMs are sold as "intelligent" and "artificial intelligence" even though they work by a non-thinking mechanism, and intentionally lack the ability to discriminate between truthful and false information, one of the key features of genuinely thinking beings. They freely confabulate truth with garbage, yet they're marketed and sold as if they were smarter than human beings.

Doesn't that fraud bother you? Why are you supporting this scam?

@troed @reflex @davidgerard AJFJSDKLFJKSDLFD

ROLLING AROUND LAUGHING.

Son, you're dealing with people who know more about this shit than your trolling ass will ever be capable of learning.

But please. Do misquote Daniel. I'm sure he loves people putting words in his mouth that are the exact opposite of what he said.

https://www.theregister.com/2026/01/21/curl_ends_bug_bounty/

The Register: "Curl shutters bug bounty program to remove incentive for submitting AI slop." Maintainer hopes hackers send bug reports anyway, will keep shaming 'silly' ones.