You need to be able to empathize (not sympathize) with people, especially adversaries.
@infoseclogger @InsiderTreat @cR0w
People like to see themselves in things. We anthropomorphize (twice in one day, no spell check...nailed it) basically everything we come in contact with.
Who wouldn't want to live in a world of "Beauty and the Beast" where you can just talk to the candlesticks and dishes to have things happen?
People like to fantasize and make really basic, boring stuff exciting.
Creating fantastic situations in order to account for things is fine. But assuming the fantasy is reality... is just silly.
@cR0w the thing that pisses me off the most is how there are people who argue in favor of AI art, and compare it to the real deal.
The people who make art of any kind practice for days, weeks, months, years, and some glorified markov bot sucks all of that up, without asking, without permission, without compensation, and somehow you think that's better? It's an injustice. Every AI datacenter deserves mass quantities of thermite.
Don't forget, all the starving people who could be artists, who want to be artists, to create, to leave their handprint on the cave wall... Who cannot.
These same people who have become wage slaves, sold on lies that they too may now create the masterpieces of their dreams. AI is not being weaponized to end suffering. It's being weaponized to blind the everyday man to the shackles that bind.
@da_667 @rusty__shackleford @cR0w I feel like a lot of it comes back to the fact that too many people are not willing to inconvenience themselves by voting with their wallet. For far too long people have been complaining about things getting worse while continuing to pay the ever growing prices of the things that they are complaining about.
Perhaps quality really doesn't matter to our society 🥲
Dear FSM....
Please restart this timeline in a way that leads to apps being produced with Authentic, *Artisanal* Honey badger code, instead of stolen code shat out by the lying plagiarism machines
Please and thank you
- ForIamCJ
That... could be the entire plot of a 12 volume Chuck Tingle book series 😉
and still a better timeline than what we're living through now
@cR0w I work in the culture sector. I see writers who have no problem using genAI to create images.
And I see people who loudly defend visual art who have no problem using LLMs to "help" with their writing.
IMHO generative artificial "intelligence" is the biggest marketing grift since big tobacco. Except the information about how the tools function, and potential harms like deskilling, is easily available. People just don't bother asking any questions.
We'll die on the hill of convenience.
Yesterday, I was forced to deal with Paypal's deteriorated customer "assistance", now dominated by a moronic AI bot. I was both shocked and amused when I was forced to listen to the typical "this call may be recorded for quality control and training", only to hear actual code parameters appended to it. Don't these companies understand the damage they are doing to their own brands?
@cR0w @jrovu See also "if you're not literally in a concentration camp waiting to be executed, it could be worse, we don't deserve it as good as we have it."
For additional examples, call my mom.
(Irony: replyguy answers your question. The people who think you're irrational don't read for comprehension, failing to understand the fairly non-ranty nature of the post. TBH, the "high ground" ad hominem attack and lack of comprehension are also hallmarks of AI-generated bot replies.)
@cR0w @jrovu Agree, I know I really should just let it go. But with this one, it's not the writing style so much as the complete gormlessness of the angle of attack. I need to get used to it.
Poe's law, but for AI bots: an AI response is indistinguishable from a lazy writer who didn't bother to read all of the thing they're replying to.
RE: https://infosec.exchange/@cR0w/116244751172093572
I'm so sorry in advance for this long post, but this has been on my mind lately and I want others' thoughts on it.
I think I agree with the person I'm quoting, but I can't be sure because despite using it, I'm starting to hate "AI" as a term. It's not their fault that the definition has been mutilated, but I have to wonder if they're against AI in theory or in its current form.
My stance is against any sort of "AI" that steals the work of others and either claims it as original, or uses it to modify someone's otherwise untainted creation. I assume that's what they're referring to, in which case I 100% agree.
That said, I'm unaware of any issues with machine learning itself when ethical and, of course, not based around widespread theft. So, OP, what do you think about using such programs to automate painfully tedious tasks? This wouldn't steal from others or remove any creativity from a work, only use an algorithm to, for instance, display rough subtitles as a placeholder for, or in absence of, proper ones. It could also be used as a starting point for a person to later refine. This kind of thing has been around for years, in the same way text-to-speech voices have helped the vision impaired and even ADHDers like myself (I have trouble reading long-ass academic essays).
Previous examples of this tech haven't caused harm, so if a system for generating subtitles is FLOSS and improves with usage (I think that's what machine learning means?), then it's a good thing, right? How do I distinguish between such software and the dystopian slop machines we're all rallying against?
@cloudskater You bring up another pain point in the AI mess we're in and that's the definition of AI. I don't consider traditional machine learning itself to be harmful. However, generative AI and agentic AI systems are inherently terrible, or at least extremely inefficient, for anything besides some lulz. And wealth extraction, of course.
Summarization of papers I think is something that can be done responsibly. In fact, I like what @nopatience has done with summarizing posts for an RSS feed. It's not for you to read the summary instead of the original post, but so you can decide if you want to read the post.
Honestly, it's tough to avoid all AI systems these days, especially if you work in tech. I wouldn't stress about that part. If you focus on the accuracy, consistency, and efficiency of a system, you should naturally weed out most AI garbage. Or at least that's been my experience so far.
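The "summaries as a reading filter" idea mentioned above (a teaser that helps you decide whether to read the original post, not a replacement for it) can be sketched roughly like this. Everything here is hypothetical illustration: the `summarize()` function is a trivial extractive stand-in (first couple of sentences), not a real LLM call, and the URL and post text are made up; only the RSS `<item>` field names follow the actual RSS 2.0 conventions.

```python
# Sketch: publish a teaser in an RSS <item> description, with the <link>
# pointing back to the full original post. summarize() is a deliberately
# dumb stand-in; swap in whatever model or service you trust.
import re
from xml.etree import ElementTree as ET

def summarize(text: str, max_sentences: int = 2) -> str:
    """Stand-in summarizer: keep only the first few sentences."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(sentences[:max_sentences])

def to_rss_item(title: str, link: str, body: str) -> ET.Element:
    """Build an <item> whose description is a teaser, not the full post."""
    item = ET.Element("item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "link").text = link  # reader follows this to the original
    ET.SubElement(item, "description").text = summarize(body)
    return item

post = ("AI summarization can be done responsibly. The summary is a filter, "
        "not a replacement. You still read the original if it looks worthwhile.")
item = to_rss_item("On responsible summarization", "https://example.com/post/1", post)
print(ET.tostring(item, encoding="unicode"))
```

The design point is the one made above: the generated text never stands in for the original, it only routes attention to it.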
First of all, thanks for including me in the category of doing something "reasonable" with AI.
I need to say this; using GenAI (LLMs) is something I do genuinely struggle with every day. I have, as cR0w hints at, found a use-case where it brings me immense value in something that I would otherwise have to spend precious energy and time on doing manually; finding "great" content to actually spend time digesting and reading properly.
While I have found something that does bring me much value, I still struggle. There is the ethical dilemma of using the bigger models knowing full well that this is quite literally raping the planet of precious resources.
But the technology itself will stay and this box will not close. So for me it has now become a challenge of finding more "ethically" sourced models and learning how the current "problem space" can be broken down into smaller pieces that these other models can solve equally well. (That is non-trivial to achieve!)
Another ethical dilemma is knowing that in order for me to be able to ask the model to evaluate content according to my "rules", it has had to be trained on material that has, in no uncertain terms... been stolen.
I don't want to use the models for copying other people's work, or for generating more slop. But I want to use them for something that I genuinely believe brings value.
These concerns, challenges and individual value propositions are certainly not easy to resolve or balance.
TL;DR - I have opinions about this, and I struggle.

Infographics on the distribution of wealth in America, highlighting both the inequality and the difference between our perception of inequality and the actua...
@codinghorror Excellent video. That's the real type of infographics, not that bullshit we get to see when companies want to promote something and call it infographics. Rather... it's just a marketing piece.
I digress.
This was a good video, thanks for sharing! It's fucked up.
@codinghorror And it raises the question: can GenAI somehow support or help facilitate a more equitable distribution of wealth?
Perhaps not by using the big models, but what if locally hosted models, or domestically hosted models become more generally available. What if the state were to provide access to basic models?
And, in so doing, add a bit of extra tax on income to support the development and training of models?
I think it's exciting to consider how access to LLMs could be made more generally available, and especially how to do so ethically with respect to resource requirements, etc.
@cloudskater We are in the 4th wave of AI. The first wave started in the '60s, the second came in the '70s, and the third in the '80s. There was an AI winter for roughly 20 years starting around 2000, until it came up again.
Probably the biggest issue is that what we're rallying against is weaponized AI. At bottom it's nothing more than a neutral program that's coded and, usually, trained in such a way that it doesn't do exactly what you want it to do.
I disagree with your exposition and analysis of the two AIs: one an ordinary AI, the other an evil-genie AI. I disagree because AI doesn't realistically operate in this manner, since one AI does not have the innate ability to leapfrog or supersede the commands of another directly through the same chat window.
Of course a person using the right tools could design a unique interface or specialized method to operate in this way. The question is why would anybody do that? Using only modern C.S. vernacular: it is always about what and why with computer science.
Oh, I found your link from @ cr0w's page and didn't feel like cross-posting. I personally cannot stand AI right now. It is both an encumbrance and it steals entry-level jobs. In their eyes we are never good enough as contributors. Those systems appropriate all the best parts of our abilities and give little to nothing in return when we provide the inputs. This is only my opinion; so don't murder me online.
@cR0w "I understand not being an absolutist against all things AI. It's wrong, but I understand."
THIS. YES!!!!