Communique for #BlackMastodon and Black folk only:

(White folk can listen too if they want, but this conversation is not for them).

The people telling you to be very afraid of Artificial General Intelligence don't know what they're talking about. Remember, their last big predictions were:
* Monkey jpegs are now money. Buy crypto.
* Elon will be great for Twitter
* Listening to VCs talk on Clubhouse is the next big social network
* Adam Neumann is a genius, and we should give him more money

These people are very very rich, not very very smart. The track record of their judgment speaks for itself.

There are very real, very serious risks from Machine Learning in general, but they are not the risks that these delusional dudes are talking about.

The real risks are not "coming in the near future." They are here with us today, and they affect systems that impact marginalized communities the most.

Most of the experts on the real risks are from marginalized communities.

All this talk of artificial general intelligence is a head fake to draw attention from the massive and real harms that can be caused by ML systems today.

Today's systems can:
* Issue a warrant for your arrest, based on a faulty facial recognition match, for a crime you didn't commit
* Decide that you are a pre-trial flight risk and deny you pre-trial release
* Give a false diagnosis at a hospital, deciding that you are not worth putting on life support
* Tell a car to run you over as you cross the street

@mekkaokereke Hard disagree! Today's systems are insanely dangerous and should be regulated and called out, no doubt about that.

Being concerned about AGI *as well* is not a head-fake. We can be concerned about more than one thing.

Just like being concerned about the issues with current-day ML is not a head-fake to draw attention from climate change: both issues are real and deserve attention.

@moshez @mekkaokereke Rob Miles has some really interesting content on why it's important to take AGI alignment issues seriously.

That being said, the techbro hivemind has absolutely tried to shift the discussion over to _only_ discussing the ramifications of technology that doesn't exist yet. The misuse of existing AI tech is already happening, at scale, right now, and it will only get worse as we climb the exponential curve.

Meanwhile, Microsoft just laid off its AI ethics team.

Go figure.

@duk @mekkaokereke If it's any consolation, MS also does pretty badly on long-term AI Safety, so if nothing else, they're consistent in putting profits over people.

Which is kind of my point. This isn't an either-or situation, just like "stop structural racism" and "stop structural sexism" aren't either-or, and anyone who tells you differently is probably a fascist.

We can care about both and believe both are problems we absolutely should solve.