
[Attached: 1 image]
the AI bros are OUTRAGED and wish to speak to the MANAGER
@rootwyrm @davidgerard A bit of a tangent, but these bros treat "Mastodon is hostile to AI" as though that's some odd thing, when polls show the public is overwhelmingly hostile to AI. That leads them to question Mastodon/fedi and ask what needs to happen to change that sentiment, instead of asking the real question: why do other social networks not demonstrate the hostility to AI that Masto/fedi does, given that the latter seems to align with public sentiment?
I.e.: the complaint is the lack of manipulation.
I'm guessing part of the problem is that you really believe "the public is overwhelmingly hostile to AI" to be true?
It's difficult to ask public European institutions, for example, to consider Mastodon after having read the comments on this post by Mozilla:
No, this has not consistently been shown to be true. You might think so because Mastodon really is a weird little bubble here.
I'll take my own profession as an example. I'm a very senior software dev turned cybersec. Step outside of Mastodon and there's no question whatsoever that LLMs are _useful_, both in software dev and as reverse-engineering and red-teaming agents.
Here's Linux kernel devs:
https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/
Bagder of cURL fame says the same thing.
If you really believe differently I'm sorry but you're ... out of touch.
@troed @rootwyrm @davidgerard Dudebro, my career is verifiable. I spent 11 years off and on at Microsoft in several roles, including Windows kernel engineer, and 8.5 years at Amazon, including being the architect of the "AI"-driven network we used for the AmazonGo stores. I have tons of former colleagues in both places.
They do NOT agree with your experience and are miserable having to 'prove' AI's usefulness to execs for the quarterly reports.
It's shit. The emperor has no clothes.
I didn't source _my_ experience now, did I? What were my sources again?
Here's Bagder from today: https://mastodon.social/@bagder/116354106408236089
Or let's take a well known writer just now:
@troed @reflex @davidgerard uhhuh, love the intentional deletion of context and refusal to discuss or accept any facts. While trying to argue with some of the leading experts in the world.
So what, are you a sock puppet for some slop peddler child? Or just a worthless slop peddler desperate to preserve your self-esteem as you get completely and utterly destroyed on the Internet?
@reflex @davidgerard gotta love slop gobbling clowns that insist they are entitled to sealioning with multiple experts.
Including one of the guys who helps journalists unwind all the fun, fabricated, fantastical financials, hardware hilarity, infrastructure idiocy, and data center dishonesty.
Who also happens to have done stuff with real NNs since the '90s.
Not that I believe you to be debating seriously, but ok:
I am one of those people who wrote back-propagating neural networks in the early nineties. I'm also an LLM-for-coding skeptic who changed his mind after actually putting my convictions to the test and using it in various situations. Oh, and yeah, I am that expert in *hands waving* shitloads of stuff. You're not argument-from-authority-winning here.
I select which assignments I take on. I have no need to cheerlead anything. As I wrote to you, though I'm not sure you understood, I'm the person saying _no_ to AI development where I consult right now (since it's in a sector that needs secure coding).
I also use Mistral LLMs, since they do care about what data they train on.
@troed @rootwyrm @davidgerard In other words, yes you should know better, and likely do know better, but it's more profitable to pretend the sky is purple. If you are as experienced as you claim and you are busy arguing against the fact that AI generated code is a shit sandwich, you are literally lying for some personal motive.
There is no 'debate' to be had here. It's shit code. If you are going to claim it's not when we can all see Claude Code, you are either incompetent or lying.
Or I'm simply more competent than you are, and can accept the fact that LLMs are useful for a lot of tasks even though they're not suitable for some.
I laugh at everyone producing public SaaS written with LLMs - since I'll be able to charge lots of money fixing all those security holes.
The two mods for Hytale and the local meshtastic network planner I've written with LLM aids are doing just fine though.
I guess in your world everything is always either/or.
What did I just write about those who use LLMs to put SaaS into public production ... ?
That doesn't mean they're useless. Here - watch me hack an IoT device using very low level reverse engineering. Then recall what I said about being impressed with the RE an LLM did on a fully proprietary binary.
I know my stuff. I'm saying they're useful. That doesn't mean that they're 100x coders taking over the world.

see previous reply
@troed In other words you won't answer. You know how the answer would reflect on you.
I think we are done here. Good luck with your slop.
@reflex No, I'm saying I have answered and I'm waiting for you to acquire knowledge before repeating the same things over and over.
The difference between us is that I put my beliefs to the test. You haven't.
Of course I maintain the three apps I've made with the aid of LLMs and published. They're even open source - anyone's free to have a laugh. They do the job, and they're not security critical.
What other areas of society do you believe benefit from you voicing your uninformed opinions?
@troed @reflex do you have no qualms about supporting a fraudulent product? whether or not the stochastic LLM device happens to generate output that you find personally useful is irrelevant to the larger issue of fraud: LLMs are sold as "intelligent" and "artificial intelligence" even though they work by a non-thinking mechanism, and intentionally lack the ability to discriminate between truthful and false information, one of the key features of genuinely thinking beings. They freely confabulate truth with garbage, yet they're marketed and sold as if they were smarter than human beings.
Doesn't that fraud bother you? Why are you supporting this scam?
@mxchara I'm not really sure what to focus on in this reply :D I spent a considerable amount of time studying the topic of consciousness a bit more than a decade ago, and in general we humans assume way too much about our own capabilities.
But to your point about LLMs - I haven't seen Mistral AI claiming anything of the sorts, and it's their models I'm using. They're in no way better developers than I am, but they're very quick at a lot of tasks which means I have to write less "boilerplate" and can instead focus on the important parts of the applications.
In some ways it's like using a high level language instead of writing everything in assembly code.
As I said, this is a topic I have studied extensively. AFAIK science is not yet at the point where we can define either intelligence or what gives rise to the feeling that "we" exist as beings capable of introspection.
Back in the '50s we thought we would be having natural conversations with computers within a few years, while believing they would never be able to beat us at chess. As soon as we solved chess, we stopped considering it something that needed high levels of intelligence - but natural language took far longer.
My go-to regarding consciousness has been Blackmore's books on the subject. I'm thus not a dualist, and if that's the road we're going down we'll likely never be able to agree on even simple definitions ;)
@troed @reflex If you've studied it extensively then you must surely be able to communicate your understanding, with a simple working definition of what "intelligence" is. I think that's important when discussing the activities of profit-seeking entities looking specifically to monetize the concept of "intelligence". I say that LLMs are not actually intended to be intelligent: intelligent behavior would interfere with the desired use-cases of the LLM devices. They are intended to be repetitive, predictable in their responses, devoid of true creativity and ability to generalize in favor of the ability to regurgitate massive amounts of undigested training-text as quickly as possible.
Surely these devices aren't intelligent. I see no evidence that they are meant to behave intelligently. You seem to think otherwise. Hence it would be nice to know: how do you define intelligence?
@reflex nah I think @troed merely sensed danger in my line of questioning and decided to retreat into a familiar online pose: blustery, angry, "everything you say about me is automatically wrong and everything I say about you is totally correct," etc. Happens all the time. It's okay, @troed, you can calm down now. Have some more peanuts, if it helps.
What's really annoying about people who decide to behave like this is that they're no longer able to answer basic questions, for now they perceive ALL questions as threats to their ego.