If you work in AI you should have an idea of what you would like the future to look like. Otherwise you are just haphazardly creating power multipliers for surveillance capabilities and destroying the economic viability of human labor.
@dalias @nsaphra I do agree with that, but I think it's a stretch to say that AI research or work is causing harm, or even has a high likelihood of causing harm.
We’re not talking about AI to guide drone strikes here, we’re talking about AI that reduces the need for human labour. And that hasn’t even happened in a significant way yet.
This is all very hypothetical, based on the chance that AI might indirectly cause harm by displacing workers in the future. Am I missing something here?