@wilbowma fascinating read! I'm curious whether your university ever assigns you to teach the accreditation-required tech ethics course; I think the students could benefit from your perspective.
Do you think your framework applies differently to building the tools than to using them? When you're building, the "reasonable expectation based on reasonable knowledge" test is abstracted one more level: the question becomes "will the likely users of the tool I'm building use it to cause harm?" (or, perhaps closer to your framework, "is the availability of this tool likely to cause harm?").

As an example, I think it's morally wrong to build missiles for Northrop, even if you're not the one firing them, because "people die" is the whole point of the tool. I'm not sure how to define the boundary, but I'd argue the devs at xAI building the Grok porn machine are working in the same ethical category.

It's maybe clearer in the AI industry power argument -- it's still human labour and expertise building these tools, and if it's *your* labour and expertise then you're part of the AI industry power machine. (Full disclosure: I spent two years working on an AI code-generation tool. I'm not proud of the work, and eventually "I am personally contributing to the devaluation of labour in my own industry" became a prominent part of why I quit.)