If you work in AI, you should have an idea of what you would like the future to look like. Otherwise you are just haphazardly creating power multipliers for surveillance capabilities and destroying the economic viability of human labor.
@nsaphra Or, you know, you could just be creating cool software for the sake of learning, of advancing a sub-field, or simply because it's cool.
@paddy @nsaphra Or you could be a tool. [shortened that for you]
@dalias @nsaphra Meh, I think most of us are tools to some extent. I don’t think many individual AI researchers have the forethought or influence to make any difference to the direction of the field. This just shifts the blame to developers instead of the actual decision makers.
@paddy @nsaphra I'm not blaming them for not having influence. I'm blaming them for the mindset that the tech being "cool", "intellectually stimulating", "prestigious", or whatever is a good reason to go into a field whose only applicability is enabling those who control it to commit harms.

@dalias @nsaphra I do agree with that, but I think it is a stretch to say that AI research/work is committing harm or even has a high likelihood of committing harm.

We’re not talking about AI to guide drone strikes here; we’re talking about AI that reduces the need for human labour. And that hasn’t even happened in a significant way yet.

This is all very hypothetical, based on the chance that AI might indirectly cause harm by displacing workers in the future. Am I missing something here?

@paddy @nsaphra Nothing in LLMs reduces the need for human labor. The only labor they replace is labor that was already being applied to do harmful things like content farms, which isn't a *need* but some scammer's business interest.