Where are the nuanced left-wing takes on modern AI and LLMs?

So much of the discourse around this tech is centered on rejecting it because of who currently owns it. But like all tech, it can be used for both oppression and liberation.

Who is focusing on the latter?

@zanzi I might be wrong, but I believe the main reason such takes don't take up much space in mainstream discussions is that many people in that camp, or adjacent to it, are riding hard on the argument that this technology, at its current stage, just isn't good enough to be relied on to actually get things done. At least, that's what I could make of it. It does seem a bit ungenerous and biased.
That, and also the argument about its voracious water consumption.

@D3Reo Yes, I think this gets to the heart of the issue, and this is also one of the reasons why I stayed away from this discussion so far.

I think the framing itself is flawed. By arguing that these models are bad because they're not useful, we implicitly accept the framing of the tech CEOs, for whom the sole metric by which we should judge this research is how well it replaces human labor. But there are a lot of reasons why something could be worth studying. For instance, I used to love Markov models as a kid, despite them not being particularly good at modelling language at all.
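(For anyone who hasn't played with one: a word-level Markov chain fits in a few lines, which is exactly what makes it fun despite being a terrible language model. A minimal sketch, with a made-up toy corpus:)

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed directly after it."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=10):
    """Walk the chain: each next word depends only on the current one."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

The one-word context window is the whole model, which is why the output rambles locally-plausible nonsense. Fun to poke at; useless for anything a CEO would care about.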

That being said, I don't think anyone owes it to any tech to be 'generous'. It's clearly useful for *some* tasks and still bad at others, and if it doesn't scratch someone's itch, it's fair for them to say so. Personally, I find it much more interesting to figure out *why* it's good or bad at something than to tell people they're wrong about their own experiences with these tools.

The water argument *is* an issue, though. I think that's one of the key reasons to step away from frontier models and focus on developing smaller local models instead.