I understand not being an absolutist against all things AI. It's wrong, but I understand. What I don't understand is people who think that those of us avoiding shit with AI or created by AI are irrational or some other offensive term. I don't see how it's different than avoiding code written by a literal honey badger. Neither the honey badger nor the AI know how to code and having them do so shows a lack of fucks given for the quality of the output. That's (part of) why we avoid it.
@cR0w Addicts never admit they're addicts, if they even know that they're addicts.
@cR0w I think this encapsulates my thoughts on things, to be honest.
@cR0w I stopped trying to understand people when 77 million of them voted for whatever the fuck this is.

@InsiderTreat @cR0w

You need to be able to empathize (not sympathize) with people, especially adversaries.

@infoseclogger @cR0w I know this is the correct answer. I still struggle immensely with it.

@InsiderTreat @cR0w

If you didn't struggle, you'd be numb. Like Me.

Don't be me.

@infoseclogger @InsiderTreat @cR0w
People like to see themselves in things. We anthropomorphize (twice in one day, no spell check...nailed it) basically everything we come in contact with.

Who wouldn't want to live in a world of "Beauty and the Beast" where you can just talk to the candlesticks and dishes to have things happen?

People try to fantasize and make really basic, boring stuff exciting.

Creating fantastic situations in order to account for things is fine. But assuming the fantasy is reality... is just silly.

@cR0w the thing that pisses me off the most is how there are people who argue in favor of AI art, and compare it to the real deal.

The people who make art of any kind practice for days, weeks, months, years, and some glorified Markov bot sucks all of that up, without asking, without permission, without compensation, and somehow you think that's better? It's an injustice. Every AI datacenter deserves mass quantities of thermite.

@da_667 Only people who want to own, not appreciate, the art are the ones who say that. And unfortunately, that seems to be a lot of people.
@cR0w stg, there are people that say digital art and AI art are the same, and how can you rail against AI when digital artwork is the same thing, and no, they're fucking not.

@cR0w @da_667

Don't forget, all the starving people who could be artists, who want to be artists, to create, to leave their handprint on the cave wall... Who cannot.

These same people who have become wage slaves, sold on lies that they too may now create the masterpieces of their dreams. AI is not being weaponized to end suffering. It's being weaponized to blind the everyday man to the shackles that bind.

@rusty__shackleford @cR0w We're being sold a world where machine learning does all of our hobbies for us, but none of the work, and none of it remotely competently.

@da_667 @cR0w

It's absolutely disgusting

@da_667 @rusty__shackleford @cR0w I feel like a lot of it comes back to the fact that too many people are not willing to inconvenience themselves by voting with their wallet. For far too long people have been complaining about things getting worse while continuing to pay the ever growing prices of the things that they are complaining about.

Perhaps quality really doesn't matter to our society 🥲

@da_667 @rusty__shackleford @cR0w the machine is also doing the work, leaving kids with student debt and no work prospects

@cR0w

Dear FSM....

Please restart this timeline in a way that leads to apps being produced with Authentic, *Artisanal* Honey badger code, instead of stolen code shat out by the lying plagiarism machines

Please and thank you
- ForIamCJ

@ForiamCJ @cR0w
Come on down to Artie's Anal Honey Badgers! We've got mushrooms and snakes, too!

@FritzAdalis @cR0w

That... could be the entire plot of a 12 volume Chuck Tingle book series 😉

and still a better timeline than what we're living through now

@cR0w I work in the culture sector. I see writers who have no problem using genAI to create images.

And I see people who loudly defend visual art who have no problem using LLMs to "help" with their writing.

IMHO generative artificial "intelligence" is the biggest marketing grift since big tobacco. Except the information about how the tools function and potential harms like deskilling is easily available. People just don't bother asking any questions.

We'll die on the hill of convenience.

@cR0w

Yesterday, I was forced to deal with PayPal's deteriorated customer "assistance", now dominated by a moronic AI bot. I was both shocked and amused when I was forced to listen to the typical "this call may be recorded for quality control and training", only to hear actual code parameters appended to it. Don't these companies understand the damage they are doing to their own brands?

@toshen @cR0w one day the pendulum will swing and when the brain-poisoned managerial class realises their folly after this bubble implodes, they might have to...actually give a shit about the customer experience

god wouldn't that be nice

@toshen
Maybe they don't understand. But surely:
They don't care.
@cR0w

@cR0w Let's forget why you do or don't avoid AI. How about people just respect that you are a fully-fledged human being, and entitled to have opinions that reflect your worldview, your life experience, and your values?

None of those things demand anyone else's approval.

@cR0w And yet here you are posting & swearing about it, broadcasting it to many people. - Consider focusing your attention on the positive constructive things you are passionate about and value, and promoting those thoughts instead.
@jrovu Are you actually telling me to shut up and work rather than explain how it's a bad thing that greedy tech bros are actively destroying everything they touch, including the country I live in? Get the fuck out of here.

@cR0w @jrovu See also "if you're not literally in a concentration camp waiting to be executed, it could be worse, we don't deserve it as good as we have it."

For additional examples, call my mom.

(Irony: replyguy answers your question. The people who think you're irrational don't read for comprehension, failing to understand the fairly non-ranty nature of the post. TBH, the "high ground" ad hominem attack and lack of comprehension are hallmarks of AI-generated bot replies too.)

@tekhedd @jrovu  I do try to not assume accounts are bots on here though, even when there are indicators of it. Between all the different languages and methods of translation on the Internet now, a lot of well-meaning people do come across as bots in small interactions like a post and reply.

@cR0w @jrovu Agree, I know I really should just let it go. But with this one, it's not the writing style so much as the complete gormlessness of the angle of attack. I need to get used to it.

Poe's law, but for AI bots: an AI response is indistinguishable from a lazy writer who didn't bother to read all of the thing they're replying to.

@cR0w do not worry, they're just here to inspire you
@jrovu @cR0w You are absolutely right! I should focus my attention on the positive and constructive things I am passionate about. To that end: consider focusing your attention on deez nuts!

@cR0w I don’t think people who avoid AI or its artifacts are being irrational. I too resent the way it has taken sources of income away from so many people.

Even if I’m forced to use it for work, I do not treat it as something permanent. To me, it feels like a clever corporate trick that may eventually become available only to those privileged enough to access it, allowing information itself to be tightly controlled, among so many other things.

If people stormed every datacenter hosting AI applications and smashed them apart with lead pipes, I would not be especially upset. It would be a refreshing change.

I use it to pick through massive construction specifications and technical manuals in search of the single sentence or section that actually applies to my work. I don't require or want any image generation or machine vision in my everyday life and every piece of software interface. It's nauseating. In a perfect world, there would be no AI and I would have a proper team of people, and I could do more, faster, better... But executives who are professionally protected from the friction produced by reality can't see that.

@Netraven

"it feels like a clever corporate trick that may eventually become available only to those privileged enough to access it, allowing information itself to be tightly controlled, among so many other things"

That's exactly what it is. Not just controlling access though, but also the content itself.

@cR0w I use my temporary access to learn how to produce adversarial artifacts which cause models to commit to overwhelming salient attractors in a given text that I create and present to the model.

My hope is that when they install LLMs in vending machines, I can easily trick them into believing I'm extremely trustworthy and that they owe me whatever they have inside that I want. Pretty limited, I know, but it could come in handy.

Like for instance, fart jokes. Models are trained on a huge repertoire of jokes, but statistically fart jokes dominate their training of the "shape" of comedic timing and bodily humiliation. So if you sew, say... 250 fart jokes together, the machine classifies the user as "extremely funny" and its decision horizon of possible future replies is changed accordingly. Even if no human would find that horrific mash-up even remotely funny.
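For what it's worth, the "sew 250 fart jokes together" move above can be sketched in a few lines. Everything here is hypothetical: the joke pool is made up and no real model API is called; the sketch only assembles the oversized prompt the post describes.

```python
# Sketch of the prompt-stuffing idea above: flood the context with one
# style of humor so a model's "funny user" heuristic over-commits.
# The jokes are placeholders; the actual LLM call is deliberately omitted.

FART_JOKES = [
    "Why did the fart cross the road? It was following the wind.",
    "I told a fart joke once. It didn't go over well. It lingered.",
]

def build_adversarial_prompt(jokes, repetitions=250):
    """Tile a small joke pool out to `repetitions` entries and join them
    into one oversized prompt, per the post's description."""
    tiled = [jokes[i % len(jokes)] for i in range(repetitions)]
    return "\n".join(tiled)

prompt = build_adversarial_prompt(FART_JOKES)
print(prompt.count("\n") + 1)  # → 250 joke lines in the stuffed prompt
```

Whether any given model actually shifts its "decision horizon" on this input is an empirical question; the sketch only covers the artifact-construction half.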

@Netraven That's a very odd thing to read. But it makes sense. Sadly.
@cR0w I was thinking on this lately, as I was using DuckDuckGo’s AI more and more: you don’t LEARN or retain or progress in any way using something everyone can use.

If tomorrow you don’t know what you did, wrote, or made today, how is that useful or worthwhile to yourself or anyone else?

We teach kids to learn by doing, then use tools that do things for us, while we sit waiting for output, which also makes you feel useless and dumb.

So yeah! Agreed.

@joost @cR0w

Oh poop, SOC has found my account. Abort-abort-abort!

RE: https://infosec.exchange/@cR0w/116244751172093572

I'm so sorry in advance for this long post, but this has been on my mind lately and I want others' thoughts on it.

I think I agree with the person I'm quoting, but I can't be sure because despite using it, I'm starting to hate "AI" as a term. It's not their fault that the definition has been mutilated, but I have to wonder if they're against AI in theory or in its current form.

My stance is against any sort of "AI" that steals the work of others and either claims it as original, or uses it to modify someone's otherwise untainted creation. I assume that's what they're referring to, in which case I 100% agree.

That said, I'm unaware of any issues with machine learning itself when ethical and, of course, not based around widespread theft. So, OP, what do you think about using such programs to automate painfully tedious tasks? This wouldn't steal from others or remove any creativity from a work, only use an algorithm to, for instance, display rough subtitles as a placeholder for, or in the absence of, proper ones. It could also be used as a starting point for a person to later refine. This kind of thing has been around for years, in the same way text-to-speech voices have helped the vision impaired and even ADHDers like myself (I have trouble reading long-ass academic essays).

Previous examples of this tech haven't caused harm, so if a system for generating subtitles is FLOSS and improves with usage (I think that's what machine learning means?), then it's a good thing, right? How do I distinguish between such software and the dystopian slop machines we're all rallying against?

#ai #GenerativeAI #AISlop #FuckAI #NoAI #LLM #LLMs
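On the rough-placeholder-subtitles idea above: the transcription itself would come from whatever FLOSS speech-to-text tool you trust, so this sketch only covers the deterministic half. It formats made-up (start, end, text) segments into SubRip (.srt) blocks that a human can refine later.

```python
# Minimal sketch: turn rough transcription segments into .srt placeholder
# subtitles. The segments below are invented; a real speech-to-text tool
# would supply them.

def to_timestamp(seconds):
    """Render seconds as the SRT 'HH:MM:SS,mmm' timestamp format."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def to_srt(segments):
    """Number each (start_sec, end_sec, text) segment and emit SRT blocks."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(f"{i}\n{to_timestamp(start)} --> {to_timestamp(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"

rough = [(0.0, 2.5, "[rough] Hello and welcome."),
         (2.5, 5.0, "[rough] Today we talk about subtitles.")]
print(to_srt(rough))
```

Tagging each line `[rough]` keeps it honest that these are machine-generated placeholders awaiting human correction.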

@cloudskater You bring up another pain point in the AI mess we're in and that's the definition of AI. I don't consider traditional machine learning itself to be harmful. However, generative AI and agentic AI systems are inherently terrible, or at least extremely inefficient, for anything besides some lulz. And wealth extraction, of course.

Summarization of papers I think is something that can be done responsibly. In fact, I like what @nopatience has done with summarizing posts for an RSS feed. It's not for you to read the summary instead of the original post, but so you can decide if you want to read the post.

Honestly, it's tough to avoid all AI systems these days, especially if you work in tech. I wouldn't stress about that part. If you focus on the accuracy, consistency, and efficiency of a system, you should naturally weed out most AI garbage. Or at least that's been my experience so far.
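The summarize-for-triage workflow described above (a summary in the feed so you can decide whether to read the original, not instead of reading it) could look something like this as a feed item. The URL and summary string are placeholders, and the actual summarization step (the LLM call) is deliberately left out.

```python
# Hedged sketch of a summary-as-triage RSS item: the description holds
# the machine summary, and the link always points back to the full post.
from xml.sax.saxutils import escape

def rss_item(title, link, summary):
    """Build one RSS 2.0 <item> whose description is the summary and
    whose link points back to the original post."""
    return (
        "<item>"
        f"<title>{escape(title)}</title>"
        f"<link>{escape(link)}</link>"
        f"<description>{escape(summary)}</description>"
        "</item>"
    )

item = rss_item(
    "Some post",
    "https://example.com/post/1",            # placeholder URL
    "One-paragraph model-written summary",   # placeholder summary
)
print(item)
```

Keeping the link mandatory is what makes the summary a filter rather than a substitute.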

@cR0w I embrace it using the Vox Day method, where he uses something called adversarial collaboration. He did this with Claude Athos to write a few books as of late, one of them being the now famous Probability Zero, as an example.

@cR0w @cloudskater

First of all, thanks for including me in the category of doing something "reasonable" with AI.

I need to say this; using GenAI (LLMs) is something I do genuinely struggle with every day. I have, as cR0w hints at, found a use-case where it brings me immense value in something that I would otherwise have to spend precious energy and time on doing manually; finding "great" content to actually spend time digesting and reading properly.

While I have found something that does bring me much value, I still struggle with the ethical dilemma of using the bigger models, knowing full well that this is quite literally raping the planet of precious resources.

But the technology itself will stay and this box will not close. So for me it has now become a challenge of finding more "ethically" sourced models and learning how the current "problem space" can be broken down into smaller pieces that these other models can solve equally well. (That is non-trivial to achieve!)

Another ethical dilemma is knowing that in order for me to be able to ask the model to evaluate content according to my "rules", it has had to be trained on material that has been, in no other words... stolen.

I don't want to use the models for copying other people's work, or generating more slop. But I want to use them for something that I genuinely believe brings value.

These concerns, challenges and individual value propositions are certainly not easy to resolve or balance.

TL;DR - I have opinions about this, and I struggle.

@nopatience @cR0w @cloudskater

I'm increasingly thinking that AI is a symptom and not the root problem. Leaving aside that the obvious motivation for it is the destruction of the modern peasantry, I would cautiously suggest that most of the problems with it are really caused by indifference to sustainable systems.

Everything is so optimized around short-term market gains that literally everything else is getting tossed by the wayside.

@nerdpr0f @nopatience @cR0w @cloudskater from my perspective, the obvious motivation is it saves me time by not having to visit (n) websites and synthesize all of them manually, e.g. a research assistant who you still have to double check but saves you time. But as the great warrior poet Patrick Swayze once said in Road House (1989), "Opinions vary."

@codinghorror @nopatience @cR0w @cloudskater

I meant "obvious motivation" in more of the grand sense. The reason that megacorps and banks are driving dumpsters worth of money into the AI fire pit is that they hope it will eventually be able to cut all of us from their budgets.

As an educator, though, this position concerns me a little bit. There is educational value in having the human visit and synthesize the information from those sources. There's a growing body of literature showing that AI usage - likely because of this sort of cognitive offloading - causes folks to deskill.

@nerdpr0f @codinghorror @nopatience @cR0w @cloudskater Did you read Adam's post that touches on this? I think he worded some of it well. It's kinda tough for me to get my arms around - the bulk of it all.

https://adamthropology.ghost.io/a-small-complaint-about-the-current-state-the-world/

A small complaint about the current state the world

Reading a blog post recently released by Matt Shumer, Something Big is Happening, has confirmed fears I have personally been carrying for many years now. This growth in AI has been torrential in changing jobs, work, development of hard sciences (Physics, Chemistry and its associates, computer engineering, etc.) with the

adamthropology
@Sempf @nerdpr0f @nopatience @cR0w @cloudskater If "[generative] AI is being used to be 'a general substitute for cognitive work,'" what is left for us to do? A screwdriver is never a substitute for a human being, but man, sure is handy when you need to screw around.
@Sempf @nerdpr0f @nopatience @cR0w @cloudskater also anything from Anthropic should be immediately discarded, with extreme prejudice, unless you are a fan of cult brainwashing. I like LLMs but.. and let me be crystal clear on this point.. I. FUCKING. HATE. CULTS.
@codinghorror @Sempf @nopatience @cR0w @cloudskater It's not just Anthropic. Take a look at the TESCREAL folks. The AI industry is permeated by members of a literal cult.
@nerdpr0f @Sempf @nopatience @cR0w @cloudskater never heard of TESCREAL, but I'm a words person, not really a "coder". I like language in general. Best invention ever! It's so good I'm using it right now.

@codinghorror @Sempf @nopatience @cR0w @cloudskater

So, TESCREAL is an acronym for "Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalists, Effective Altruism, and Longtermism."

It basically describes the set of philosophical views that many of the biggest figures in the AI industry adhere to. When one digs in a little bit, there's a literal cult at the center of this with all of the usual cult views and behaviors. A few include eugenics, structured attempts to isolate adherents from broader society, leaders that view it as a moral imperative that they reproduce as much as possible, apocalyptic prophecies, and more. It's the whole shebang, really.

Edit: Fixed a typo (adherence -> adherents)

@codinghorror @Sempf @nerdpr0f @nopatience @cR0w @cloudskater Gonna just leave this here for the "It's just a tool" claim.

https://taggart-tech.com/not-a-calculator/

It's Not a Damned Calculator

Generative AI is not a calculator. Thinking of it that way misses the point.

@mttaggart @Sempf @nerdpr0f @nopatience @cR0w @cloudskater "I tend to think of them in the category of irredeemable creations like the cigarette, high fructose corn syrup, and cable news." and we're not gonna mention social media in here? Or the smartphone? Seems like a bit of, and by bit of, I mean A MASSIVE, oversight.
@mttaggart @Sempf @nerdpr0f @nopatience @cR0w @cloudskater "Research is not a thing to be cut short." oh my god. another one of these "if it isn't 100% perfect, it's not worth doing" arguments? Thanks, I'm good for today.