
the AI bros are OUTRAGED and wish to speak to the MANAGER
@rootwyrm @davidgerard A bit of a tangent, but polls show the public is overwhelmingly hostile to AI, yet these bros treat "Mastodon is hostile to AI" as though it's some odd thing. It leads them to question Mastodon/fedi and ask what needs to happen to change that sentiment, instead of asking the real question: why don't other social networks demonstrate the hostility to AI that Masto/fedi does, given that the latter actually aligns with public sentiment?
I.e.: the complaint is the lack of manipulation.
@reflex @davidgerard to put an even finer point on it still, because it's very easy to:
Replace 'AI' with 'fascism' and they would be saying the same things. Replace it with 'Nazis' and no surprise, they are saying *exactly* that too. (But don't you dare call them Nazis!)
They love BSky because it platforms and defends bigots, terrorists, and Nazis, and is at war with its own users over it.
@reflex @rootwyrm @davidgerard
that's a bingo

"The contradiction between use and trust of AI is striking. Fifty-one percent say they use AI for research, and many also use it for writing, work, and data analysis. But only 21 percent trust AI-generated information most or almost all of the time. Americans are clearly adopting AI, but they are doing so with deep hesitation, not deep trust," said Chetan Jaiswal, Ph.D., Associate Professor of Computer Science and Associate Chair, Department of Computing, Quinnipiac University School of Computing and Engineering.
@reflex @jplebreton @davidgerard actually, those numbers are completely fabricated. It's way, way more than "55%". Way more. Like all polls, it was rigged as much as possible so the chatbot vendors could use it for marketing. "LOOK HOW MUCH EVERYONE LOVES US!"
The real numbers are basically: every fucking one of your neighbors would not hesitate for one second to firebomb a data center and ram a railroad spike through Sham Altman's skull.
@reflex @rootwyrm @davidgerard
They want to force their algorithms on us.
@reflex @rootwyrm @davidgerard
I wonder if they can find some way to force their algorithms on the whole Fediverse?
@davidgerard @reflex @rootwyrm
I kind of have the feeling that Mastodon.social is a threat to the whole concept of Mastodon and the Fediverse. Am I the only one who feels that way?
I'm guessing part of the problem is that you really believe "the public is overwhelmingly hostile to AI" to be true?
It's difficult to ask, for example, public European institutions to consider Mastodon when they've read the comments on this post by Mozilla:
@troed @reflex @davidgerard yes. Because, surprisingly, people hate it when they can't trust information, can't trust videos, can't get through phone systems, have systems that no longer function, and the list goes on.
So your entire argument is that because people were mean to a shitty organization pushing shitty software that has made the experience objectively worse, everyone is wrong?
@troed @reflex @davidgerard like, dude, I get it. You wanna be today's main character. WOO CLOUT.
But there's already two other people vying for that and you're not going to one-up either of them.
No, this has not consistently been shown to be true. You might think so because Mastodon - really - is a little weird bubble here.
I'll take my own profession as an example. I'm a very senior software dev turned cybersec. Walk outside of Mastodon and there's no question whatsoever that LLMs are _useful_, both in software dev and as reverse engineering and red teaming agents.
Here's Linux kernel devs:
https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/
Bagder of cURL fame says the same thing.
If you really believe differently I'm sorry but you're ... out of touch.
@troed @rootwyrm @davidgerard Dudebro, my career is verifiable, I spent 11 years off and on at Microsoft in several roles including Windows kernel engineer, I spent 8.5 years at Amazon including being the architect of the "AI" driven network we used for the AmazonGo stores. I have tons of former colleagues in both places.
They do NOT agree with your experience and are miserable having to 'prove' AI's usefulness to execs for the quarterly reports.
It's shit. The emperor has no clothes.
I didn't source _my_ experience now, did I? What were my sources again?
Here's Bagder from today: https://mastodon.social/@bagder/116354106408236089
Or let's take a well known writer just now:
@troed @rootwyrm @davidgerard I don't care what you source, if you are seeing value and not just lying to keep your job, that's a statement on your skill level, senior or not.
And since the Claude Code leak, it should be obvious to everyone just how much hot garbage these models produce. If you can't acknowledge that, you have only discredited yourself.
Yeah, I'm seeing value. I was amazed at how well an LLM reverse engineered a proprietary binary and came to the same conclusions as I did in a tenth of the time.
I'm the one pushing _back_ towards AI usage where I'm currently consulting, btw. I don't consider it useful where secure coding is needed.
But not all code needs that.
@troed @reflex @davidgerard uhhuh, love the intentional deletion of context and refusal to discuss or accept any facts. While trying to argue with some of the leading experts in the world.
So what, are you a sock puppet for some slop peddler child? Or just a worthless slop peddler desperate to preserve your self-esteem as you get completely and utterly destroyed on the Internet?
@reflex @davidgerard gotta love slop gobbling clowns that insist they are entitled to sealioning with multiple experts.
Including one of the guys who helps journalists unwind all the fun, fabricated, fantastical financials, hardware hilarity, infrastructure idiocy, and data center dishonesty.
Who also happens to have done stuff with real NN since the 90's.
@rootwyrm @davidgerard Watching colleagues who I know know better promote this, including in private conversations we've had in the past couple years, has been disheartening. I get keeping your head down to protect your job, but actively cheerleading for what I assume is a chance at a promotion is incredibly unethical.
Even if the tool did all the things they say, it's built by fascists, pushes fascism, and consumes resources exorbitantly. We have an obligation to reject it.
Not that I believe you to be debating seriously, but ok:
I am one of those people who wrote back-propagating neural networks in the early nineties. I'm also an LLM-for-coding skeptic who changed his mind after actually putting my convictions to the test and using it in various situations. Oh, and yeah, I am that expert in *hands waving* shitloads of stuff. You're not argument-from-authority-winning here.
I select which assignments I take on. I have no need to cheerlead anything. As I wrote to you, though I'm not sure you understood, I'm the person saying _no_ to AI development where I consult right now (since it's in a sector that needs secure coding).
I also use Mistral LLMs, since they do care about what data they train on.
@troed @rootwyrm @davidgerard In other words, yes, you should know better, and likely do know better, but it's more profitable to pretend the sky is purple. If you are as experienced as you claim and you are busy arguing against the fact that AI-generated code is a shit sandwich, you are literally lying for some personal motive.
There is no 'debate' to be had here. It's shit code. If you are going to claim it's not when we can all see Claude Code, you are either incompetent or lying.
Or I'm simply more competent than you are, and can accept the fact that LLMs are useful for a lot of tasks even though they're not suitable for some.
I laugh at everyone producing public SaaS written with LLMs, since I'll be able to charge lots of money fixing all those security holes.
The two mods for Hytale and the local meshtastic network planner I've written with LLM aids are doing just fine though.
I guess in your world everything is always either/or.
What did I just write about those that use LLMs to put SaaS into public production ... ?
That doesn't mean they're useless. Here - watch me hack an IoT device using very low level reverse engineering. Then recall what I said about being impressed with the RE an LLM did on a fully proprietary binary.
I know my stuff. I'm saying they're useful. That doesn't mean that they're 100x coders taking over the world.

@troed @reflex @davidgerard AJFJSDKLFJKSDLFD
ROLLING AROUND LAUGHING.
Son, you're dealing with people who know more about this shit than your trolling ass will ever be capable of learning.
But please. Do misquote Daniel. I'm sure he loves people putting words in his mouth that are the exact opposite of what he said.
https://www.theregister.com/2026/01/21/curl_ends_bug_bounty/
@reflex @rootwyrm @davidgerard
well they think if they control the feeds they control the public sentiment...