@dalias @nsaphra I do agree with that, but I think it is a stretch to say that AI research/work is causing harm, or even has a high likelihood of causing harm.
We’re not talking about AI to guide drone strikes here, we’re talking about AI that reduces the need for human labour. And that hasn’t even happened in a significant way yet.
This is all very hypothetical, based on the chance that AI might indirectly cause harm by displacing workers in the future. Am I missing something here?
@nsaphra Google adopted the motto "don't be evil" a long time ago, but it's not helping noticeably. 😉
Seriously though, tech workers can have the best intentions in the world, but that matters little if their work is bought up by Facebook and used to do evil.
AI is like cars. Our society has an elaborate system of rules for how we manufacture, operate, and maintain cars and their supporting infrastructure. We need something similar for AI, but it will take time to work out.