@cwebber But is it that, though?
Or are people still looking at it from a reactionary point of view?
Are people using this technology in ways it was never intended to be used, because it was sold to them by people who wanted to maximize profit by any means necessary?
This is like saying that HTML is the devil because Palantir uses it to code their websites.
It’s like saying the compass is evil because that’s how the slave ships navigated the Atlantic Ocean.
@majorlinux @cwebber I don't think so...
Some love AI, and yes, I see people saying 'use AI against itself!' but frankly that still makes AI users dependent on it and less intelligent and critically-minded.
I just don't think you can build an activist rights-reclamation on the bones of grift, stolen rights and climate destruction.
@john Does it, though?
For some, you can’t lose what you didn’t have in the first place.
Now, I’m not saying (as I have mentioned before) that all of a sudden everyone is an artist. If you can’t draw, you can’t draw. Accept it!
But what I’m saying is that some people do have executive function or other cognitive function issues that have been solved through AI tools.
I use it to help take notes, document work I’ve done, and for reference of said notes.
@john I’ve used it to automate complex tasks that n8n or Ansible can’t really do reliably based on copious amounts of notes that I have.
There are tools out there that can help people and I feel that gets swept under the rug thanks to reactionary forces at play.
Nobody stops to think critically about the tools and what a future with them could look like that doesn’t involve the exploitative nature of capitalism.
That’s because we’re all too busy to stop and imagine a better world, period.
@Garonenur @john Or, and hear me out, we create new tools that can leverage existing technologies that could help.
You make it seem like the exploitative way was the only way.
Again, this is where I say people lack imagination when it comes to seeing a society free of exploitation.
Just because OpenAI created something that destroys the planet doesn’t mean the people would have.
We have to build with intention, not profit.
@Garonenur So there aren’t vast amounts of data at our disposal that don’t have that problem?
We can’t carefully curate what’s being trained and how we train it?
Again, if you blame the tech and not the people behind it, you let them off the hook to exploit again, while we are left with nothing because we chose to villainize the people who actually have solutions that get us out while holding the exploiters accountable.
@majorlinux @Garonenur I agree. As someone with Tourette's and serious ADHD, I see a need for ethical software to assist those needing help with executive function, note-taking and scheduling, as well as critical reasoning FOR EVERY human.
I hear you but...
Neuroplasticity and habit formation are not in any way aided by outsourcing.
Start from scratch on ethical language models.
I can't excuse or mea culpa what we have to pay for or be culpable in using.
There's too much at stake in my opinion.
@john And that’s my whole argument!
Things can be done ethically and responsibly if the forethought is put in.
I’d have seriously lost my job at this point if I hadn’t found something that could at least help me better document processes.
I understand what’s at stake, and that is why I argue that we can do something about it without fully demonizing those who may need a little extra help getting through the day.
Also, I’d hardly say it’s outsourcing when I still have the notes that I can refer to.
@majorlinux don't want to sound personal or attack you :) yup, there's a different value proposition for everybody, no question.
However: "in the name of efficiency, thinking has become optional. AI can now take meeting notes, generate business plans, write emails, even solve ethical dilemmas... there’s a terrible cost: the erosion of critical thinking, the very skill that makes us human."
@john Yet, the thinking always stays with me.
I’m the one making decisions.
The only decision that is being outsourced is what file to put the note in.
At no point has a tool told me what to do.
Options maybe.
But never decisions.
I have to answer to what I have done.
What I decided.
I am solely responsible.
@majorlinux @cwebber But it's not anything like your analogies.
It's getting pissed at the use of textiles made from slave labor, not the compasses navigating slave ships.
It's getting pissed at using frameworks directly integrated with Palantir's ecosystem, not HTML
It's getting pissed at asbestos being put in the walls despite indications it might not be all that safe. Except the asbestos exacerbates fires rather than retarding them.
@kwazekwaze @majorlinux @cwebber I figure that humanity’s probably doomed by this technology and I expect to be first in line.
I am begging y’all to come up with some less hyperbolic analogies and proposals other than abstinence-only.
@marshray We’re only in this mess because we’ve allowed these tech bros to do everything they want with relative impunity.
It’s about time we finally hold them accountable.
What we’ve done up to this point has not been enough.
@marshray You sure about that? The US government, while being funded mostly by the working class, has decisions made by those who sit at the very top.
The policy makers are put there by the billionaire class and their lobbyists who do not craft policy for us, but for the capitalists to further enrich themselves.
This is the same for all of Western “democracy”.
We have never been in charge of anything in this country despite what our high school civics classes told us.
@majorlinux Yeah, totally.
But what I’m saying is that a lot of AI research is being done outside the US. Particularly big in PRC China, but most governments are funding it to some degree.
Try a web search for “AI model benchmarks” or similar. Here’s a query for benchmark results, mostly of *open source* models: https://huggingface.co/datasets?benchmark=benchmark%3Aofficial
There’ll be several non-US names (Qwen, Deepseek, Mistral, etc.) near the top.
IMO, Anthropic, OpenAI, the US ‘tech bros’ may have a lead (if any) measured in months, not years. Shutting them down won’t stop the technology.
@marshray @majorlinux @cwebber
You're saying humanity is doomed by this technology and calling an *asbestos* analogy hyperbole?
Begone.
The slave ship and Palantir analogies were not my choices.
@kwazekwaze @majorlinux @cwebber Sure, its hyperbolicity is arguable. But I do not think ‘asbestos that is also a fire accelerant’ is a particularly useful analogy.
As you request.
@marshray @majorlinux @cwebber
It's deleterious in both first and second order effects. Sounds like you're just quibbling.
Much appreciated.
@marshray Here's mine: keep resisting, and keep yelling about it. We won't see a shift against Big-Whatever, without a mass shift in perception. The more you rant about it, the more people will hear, and eventually change their perspective.
"This is like saying that HTML is the devel because Palantir uses it to code their websites.
It’s like saying the compass is evil because that’s how the slave ships navigated the Atlantic Ocean."
But it isn't AT ALL like that. Palantir did not invent HTML and slave traders didn't invent the compass. In neither case did they directly profit or gain power by the mass adoption of those technologies by society.
This is more like adopting and becoming dependent on a product or service specifically provided by Palantir or taking a job constructing or operating the ships owned by the slave trader.
@msh These companies created neither “artificial intelligence” nor the methods by which it was created.
All they did was take open tools and build the torment nexus because it would make them a ton of money.
Who’s to say we couldn’t also take the tools and make something that would have benefited humanity and the planet?
Earlier iterations of this same tech already exist and don’t exploit.
New versions exist that don’t exploit.
@msh So, instead of going after the tech, shouldn’t we be spending the resources going after those who have exploited it for profit and gain at the expense of the working class?
If they weren’t allowed to dictate how things worked in society, we wouldn’t be here having this conversation in the first place.
@majorlinux the problem is that, without exception, every single technology and model and tool extant in the LLM based GenAI space was developed and brought forward by toxic Big Tech folks with their own immoral agendas. They are intrinsically "defective by design"
Yes this does include open source and locally run models and tools.
(If you think I am mistaken, please provide an example of contemporary GenAI products that do not have essential ties to toxic big tech)
Yes, it is possible to "seize the means of production" as you describe in the general sense, but if those means of production are intrinsically defective by design...incapable of producing at acceptable levels of quality and efficiency and incapable of providing agency to creators...then it should not be adopted.
@msh What I am saying is that we can and have done better than what has already been done.
Also, if we’re looking at the sets of training data that have already been included in these models that already exist, I posit this thought:
What were they ingesting?
Why aren’t we allowed to have a say in what is being done here?
We can untether what we have from the processes that made it.
A warehouse could house immigrants for detention or life saving aid for low income families.
@majorlinux
AI has its usages. Science has been using AI for decades, long before the LLM craze. I use an AI model to denoise and remove the background gradient from my astrophotography images.
But that's not what this is about. This is about big tech forcing AI into everything and onto everyone. This is about companies and people getting rich on stolen data. This is about LLM agents blackmailing people for not accepting their code into open source repositories.
Nobody gives a shit about your little hobby project and the tools you use to summarise your notes. If it works for you, good.
My team lead uses copilot to summarise meetings and the summaries are wrong and full of mistakes every single time.
My team lead uses genAI to create marketing images where people have six fingers on one hand, where all the faces are the same, where text is fucked up and illegible. My team lead is wasting resources on something that has zero benefit, instead of just hiring someone with the actual skills to do the job.
I'll take your word for it, won't bother to click. But there is exactly one way I could imagine that headline working out without being slopaganda... and that's to argue that a coming backlash against AI and big tech will open a window where we can get people to care about technological autonomy. If that's not what he's arguing then... he's on my shit list.
@cwebber I am seeing more of this lately with the themes of "but what about Locally Run Open Models" and "I'm TOTALLY not the ass of the Reverse Centaur REALLY" and it makes me kinda sad because I have come to the realisation that #GenAI is still a misapplication of #LLM technology that rots your brain no matter how you slice it.
Dude misses the days of Web 1.0 when "view source" was actually useful and informative then advocates for the addition of another layer on the Poop Parfait that is the Modern Web to separate us further from The Source?
No. NO. How can someone so respectable with such a deep history with the WWW miss the point SO BADLY? The solution is to make The Source easy again, not to add Agentic AI on top of front end frameworks that talk to back end frameworks running in containers on VMs hosted in a cloud of corporately owned computers!
You CAN still write simple, easy and useful web pages with a subset of modern HTML, CSS and JavaScript. The solution is #SmallWeb not this garbage!
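(To be concrete, and just as a minimal sketch with a made-up page title, a perfectly usable page can still be this small, with no frameworks and no build step:)

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>My notes</title>
    <!-- one inline style rule; everything else is baseline HTML -->
    <style>body { max-width: 40em; margin: auto; font-family: sans-serif; }</style>
  </head>
  <body>
    <h1>My notes</h1>
    <p>Plain HTML that anyone can read with "view source".</p>
  </body>
</html>

That whole thing loads instantly, works in any browser, and "view source" shows you exactly what you're getting.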
@puzzled @cwebber it has already been observed that when two LLM agents interact with each other long enough they (d)evolve into using a sort of machine code style jibber-jabber, even when they do manage to stick to their intended tasks, perhaps because it is more "efficient".
So yeah in some possible hellish future timeline we could end up with software being developed by a herd of pet LLM agents yammering amongst each other like a bunch of Furbies to create jibberish for another set of LLM agents to render and execute apps but don't worry we'll only need a few dozen gigs of RAM and a couple hundred GPU cores for our everyday tasks this is FINE it'll be OK.