If you work in AI you should have an idea of what you would like the future to look like. Otherwise you are just haphazardly creating power multipliers for surveillance and destroying the economic viability of human labor.
@nsaphra I am confident that most AI researchers do, and that is why ethics is such a huge subject.
@nsaphra [breathlessly, credulously] bbbut if the good guys don’t destroy everything responsibly, the bad guys will destroy everything irresponsibly!!1
@nsaphra Or, you know, you could just be creating cool software for the sake of learning, advancing a sub-field, or simply because it's cool.
@paddy @nsaphra Or you could be a tool. [shortened that for you]
@dalias @nsaphra Meh, I think most of us are tools to some extent. I don't think many individual AI researchers have the forethought or influence to make any difference to the direction of the field. This just shifts the blame to developers instead of the actual decision-makers.
@paddy @nsaphra I'm not blaming them for not having influence. I'm blaming them for the mindset that the tech being "cool", "intellectually stimulating", "prestigious", or whatever is a good reason to go into a field whose only applicability is enabling those who control it to commit harms.

@dalias @nsaphra I do agree with that, but I think it is a stretch to say that AI research/work is committing harm or even has a high likelihood of committing harm.

We’re not talking about AI to guide drone strikes here, we’re talking about AI that reduces the need for human labour. And that hasn’t even happened in a significant way yet.

This is all very hypothetical based upon the chance that AI might indirectly cause harm by displacing workers in the future. Am I missing something here?

@paddy @nsaphra Nothing in LLMs reduces the need for human labor. The only labor they replace is labor that was already being applied to do harmful things like content farms, which isn't a *need* but some scammer's business interest.
@paddy @nsaphra that’s great when you’re 14 and watching Star Trek, but an adult should have *some* self-reflection on the consequences of their actions
@chucker @nsaphra I just don’t think the consequences of my actions, or the actions of many individual AI researchers, are very large.
@paddy @nsaphra that’s why people find common ground and unionize
@chucker @nsaphra Exactly! We can tackle it together, not via self-reflection over our own individual contributions.
@nsaphra @histoftech There are what, single digits of people in each major effort who can make this call, and actually bend the effort. The rest are just workers who can, at most, leave a dent.
@nsaphra the number of people who 'work in AI' is vanishingly small. The vast, vast majority of Google employees don't work in AI. You're probably talking about a few hundred people. Then there are tens or hundreds of thousands in hardware, software, and admin who support the AI business. They don't have a say, they're just paying the mortgage.
@nsaphra there aren't enough retoot buttons in the world for this.

@nsaphra Google adopted the motto "don't be evil" a long time ago, but it's not helping noticeably. 😉

Seriously though, tech workers can have the best intentions in the world, but it matters little if their work is bought up by Facebook and used to do evil.

AI is like cars. Our society has an elaborate system of rules for how we manufacture, operate, and maintain cars and their supporting infrastructure. We need something similar for AI, but it will take time to work out.

@nsaphra yes!

I used to say the exact same thing about software. Or wait. I still say the same thing about software.

ML tech is software.

#MLsec is #swsec

@nsaphra I think it's very hard to predict exactly what your model would be used for, especially if it's open source. You could very easily use a 'summarizer' tool to read through messages and label someone as a terrorist, for example.