Am I the only one who thinks it's odd that #ai companies all seem to have fired their Ethics / Responsibility folks at the same time - and just as greater scrutiny comes to these technologies?

Probably nothing - just a coincidence that they all decided at the same time to get rid of the people who might ... feel a sense of public duty to whistleblow if they saw evidence of serious wrongdoing.

@kittylyst I think it's entirely due to the rise of the alignment lens as an alternative to ethics. Why keep a bunch of troublemaking women and minorities who don't want to build killer drones, when you could have a bunch of white dudes who want to ALIGN the killer drones?
@nsaphra @kittylyst That's interesting. Simultaneously, companies have dropped the "value" from "value alignment". Alignment seems to mean "shares our values" (but whose?) when they want it to. But mostly it just means "responds to prompts like they are instructions instead of prefixes" because the one value everyone can agree upon is getting the answer we were hoping to get.
@Riedl @nsaphra @kittylyst Yeah, we are already seeing that GPT-4 is *worse* than GPT-3 at some of our own tasks involving getting these models to understand less-likely scenarios. It struggles to explain why a person might make suboptimal choices, even refusing to do our task (asking for an explanation) and instead just calling the person "irrational". Not that I'm surprised (at all), but nothing about their "alignment" calls for empathy.