Am I the only one who thinks it's odd that #ai companies all seem to have fired their Ethics / Responsibility folks at the same time - and just as greater scrutiny comes to these technologies?

Probably nothing - just a coincidence that they all decided at the same time to get rid of the people who might ... feel a sense of public duty to whistleblow if they saw evidence of serious wrongdoing.

@kittylyst Corporate social responsibility is an oxymoron.

@kittylyst I think it's less about "serious wrongdoing" (yet) and more about the ethics teams slowing down development by doing their jobs.

MSFT's team was re-orged into other teams, ostensibly to speed things up. Then they were let go entirely.

I guess they didn't get the message that they were supposed to chill out and stop slowing things down, so they were cut.

@jzb @kittylyst fully this. Nothing concerning for the time being.

It's sad this sentiment came over from Twitter with the people migrating

@jzb @kittylyst OK, so here's the thing: an ethics team is a safeguard.

Imagine firing the safety inspector in your car factory just as you ramp up your production speed, with the argument that "he didn't get the memo about not standing in the way of progress".
Seriously…

@funkylab @kittylyst Imagine it? Happens all the time.

Please note, I'm not endorsing or excusing it. Just describing how I think things played out: they were reshuffled to be "closer", with the idea that this would weaken the team (it's harder to feel empowered if you're the lone voice) and send the message "you're here as window dressing, don't slow things down."

@kittylyst They were reaching the conclusion that AI output is derivative work... That means an AI is only as strong as the data it trains on, so it's a matter of IP.

That's not how companies want to sell those interpretation machines, because it opens the door to discussions of "why are you not paying for the data it learns from?"

So yes, firing the ethics department makes a whole mess, but it was the derivative-work finding that hurt them the most.

It was not about the power of the technology but about who gets paid.

@kittylyst They reached the conclusion internally. They were about to make it public. Then, axed (in Google's case).
@kittylyst "You wouldn't weaponize this, would you?" "Hell, naw! Never!" Forward to next day at the bureau building front door. "Let me into my office" "I'am afraid I can't do that, Dave."

@Caeled @kittylyst I feel obliged to mention that HAL only became homicidal once the crew started talking about pulling its brain apart.

@jargoggles @kittylyst Yeah, as if AI devs would never mention that....
@kittylyst it’s very “If everyone else is doing drugs to win these competitions, then I might as well too!”

@kittylyst

I think the answer is simple: implementing what the ethics teams want is expensive, time-consuming, and cuts into profits.

They're racing to see who can come up with the best AI first, in hopes they can dominate the market.

Being the good guy doesn't pay as well 😢

@kittylyst I've had a little fun using #ai to create images lately. I realize, though, that people with no scruples can do terrible things with it—and might already be doing so.
@kittylyst The more I see of what we are doing with technology, the more I want to live alone on a remote island.
@kittylyst My guess is those teams might have already started to voice some uncomfortable concerns, making them outlive their usefulness as corporate PR.
@kittylyst I think it's entirely due to the rise of the alignment lens as an alternative to ethics. Why keep a bunch of trouble making women and minorities who don't want to build killer drones, when you could have a bunch of white dudes who want to ALIGN the killer drones?
@nsaphra @kittylyst That's interesting. Simultaneously, companies have dropped the "value" from "value alignment". Alignment seems to mean "shares our values" (but whose?) when they want it to. But mostly it just means "responds to prompts like they are instructions instead of prefixes" because the one value everyone can agree upon is getting the answer we were hoping to get.
@Riedl @nsaphra @kittylyst Yeah, we are already seeing that GPT-4 is *worse* than GPT-3 at some of our own tasks on getting these models to understand less-likely scenarios. It struggles to explain why a person might make suboptimal choices, even refusing to do our task (asking for an explanation) and instead just calling the person "irrational". Not that I'm surprised (at all), but nothing about their "alignment" calls for empathy.

@kittylyst
It's not just you.
It is weird and telling about where their priorities lie.

The conspiracy nut in me wonders if their own revenue models, built on the same machine learning, weren't corrupted or tweaked in such a way as to fire the only people who would stop its inevitable rise to total power.

@kittylyst My guess: they asked their SALAMIs "how to improve business?"
@kittylyst they are probably trying to find the most malicious ways possible to use AI
@kittylyst It was probably the AIs that fired them
@kittylyst Those "ethics" teams were there for PR purposes anyway. Any advice they gave was already being ignored by management. Microsoft and OpenAI are not ethical, never have been, and never will be.
@kittylyst Not in the slightest. You start with the people farthest from creating the product. Do they use layoffs as plausible cover to let go of people they don't want? You can't prove that....