RE: https://infosec.exchange/@cR0w/116244751172093572
I'm so sorry in advance for this long post, but this has been on my mind lately and I want others' thoughts on it.
I think I agree with the person I'm quoting, but I can't be sure because, despite using it, I'm starting to hate "AI" as a term. It's not their fault that the definition has been mutilated, but I have to wonder whether they're against AI in theory or in its current form.
My stance is against any sort of "AI" that steals the work of others and either claims it as original, or uses it to modify someone's otherwise untainted creation. I assume that's what they're referring to, in which case I 100% agree.
That said, I'm unaware of any issues with machine learning itself when it's ethical and, of course, not based around widespread theft. So, OP, what do you think about using such programs to automate painfully tedious tasks? This wouldn't steal from others or remove any creativity from a work, only use an algorithm to, for instance, display rough subtitles as a placeholder for, or in the absence of, proper ones. It could also be used as a starting point for a person to later refine. This kind of thing has been around for years, in the same way text-to-speech voices have helped the vision impaired and even ADHDers like myself (I have trouble reading long-ass academic essays).
Previous examples of this tech haven't caused harm, so if a system for generating subtitles is FLOSS and improves with usage (I think that's what machine learning means?), then it's a good thing, right? How do I distinguish between such software and the dystopian slop machines we're all rallying against?
@cloudskater You bring up another pain point in the AI mess we're in, and that's the definition of AI. I don't consider traditional machine learning itself to be harmful. However, generative AI and agentic AI systems are inherently terrible, or at least extremely inefficient, for anything besides some lulz. And wealth extraction, of course.
Summarization of papers, I think, is something that can be done responsibly. In fact, I like what @nopatience has done with summarizing posts for an RSS feed. It's not meant for you to read the summary instead of the original post, but so you can decide whether you want to read the post.
Honestly, it's tough to avoid all AI systems these days, especially if you work in tech. I wouldn't stress about that part. If you focus on the accuracy, consistency, and efficiency of a system, you should naturally weed out most AI garbage. Or at least that's been my experience so far.
First of all, thanks for including me in the category of doing something "reasonable" with AI.
I need to say this: using GenAI (LLMs) is something I genuinely struggle with every day. I have, as cR0w hints at, found a use case where it brings me immense value on something I would otherwise have to spend precious energy and time doing manually: finding "great" content to actually spend time digesting and reading properly.
While I have found something that brings me much value, I still struggle with the ethical dilemma of using the bigger models, knowing full well that this is quite literally raping the planet of precious resources.
But the technology itself is here to stay, and this box will not close. So for me it has now become a challenge of finding more "ethically" sourced models and learning how the current "problem space" can be broken down into smaller pieces that these other models can solve equally well. (That is non-trivial to achieve!)
Another ethical dilemma is knowing that, in order for me to be able to ask the model to evaluate content according to my "rules", it has had to be trained on material that has been, in no uncertain terms, stolen.
I don't want to use the models for copying other people's work, or for generating more slop. But I want to use them for something that I genuinely believe brings value.
These concerns, challenges, and individual value propositions are certainly not easy to resolve or balance.
TL;DR - I have opinions about this, and I struggle.
@nopatience @cR0w @cloudskater
I'm increasingly thinking that AI is a symptom and not the root problem. Leaving aside that the obvious motivation for it is the destruction of the modern peasantry, I would cautiously suggest that most of the problems with it are really caused by indifference to sustainable systems.
Everything is so optimized around short-term market gains that literally everything else is getting tossed by the wayside.
@codinghorror @nopatience @cR0w @cloudskater
I meant "obvious motivation" in more of the grand sense. The reason that megacorps and banks are driving dumpsters worth of money into the AI fire pit is that they hope it will eventually be able to cut all of us from their budgets.
As an educator, though, this position concerns me a little bit. There is educational value in having the human visit and synthesize the information from those sources. There's a growing body of literature showing that AI usage, likely because of this sort of cognitive offloading, causes folks to deskill.
@nerdpr0f @codinghorror @nopatience @cR0w @cloudskater Did you read Adam's post that touches on this? I think he worded some of it well. It's kinda tough for me to get my arms around the bulk of it all.
https://adamthropology.ghost.io/a-small-complaint-about-the-current-state-the-world/
Reading a blog post recently released by Matt Shumer, Something Big is Happening, has confirmed fears I have personally been carrying for many years now. This growth in AI has been torrential in changing jobs, work, and the development of the hard sciences (Physics, Chemistry and its associates, computer engineering, etc.) with the
@codinghorror @Sempf @nerdpr0f @nopatience @cR0w @cloudskater Gonna just leave this here for the "It's just a tool" claim.
I gotta say, @cR0w, you started a howl.