Communique for #BlackMastodon and Black folk only:

(White folk can listen too if they want, but this conversation is not for them).

The people telling you to be very afraid of Artificial General Intelligence don't know what they're talking about. Remember, their last big predictions were:
* Monkey jpegs are now money. Buy crypto.
* Elon will be great for Twitter
* Listening to VCs talk on Clubhouse is the next big social network
* Adam Neumann is a genius, and we should give him more money

These people are very very rich, not very very smart. The track record of their judgment speaks for itself.

There are very real, very serious, risks from Machine Learning in general, but they are not the risks that these delusional dudes are talking about.

The real risks are not "coming in the near future." They are here with us today, and they affect systems that impact marginalized communities the most.

Most of the experts on the real risks are from marginalized communities.

All this talk of artificial general intelligence is a head fake to draw attention from the massive and real harms that can be caused by ML systems today.

Today's systems can:
* Issue a warrant for your arrest, based on facial recognition, for a crime you didn't even commit
* Decide that you are a flight risk and deny you pre-trial release
* Give a false diagnosis at a hospital, deciding that you are not worth putting on life support
* Tell a car to run you over as you cross the street

Start with

@timnitGebru @abebab
@alex
@emilymbender

And work out from there to people that they collaborate with and recommend. 👍🏿

Don't take your AI Ethics and societal impact advice from VCs that hang out in Nazi chatrooms. These people aren't that smart, but most importantly, they don't care about you. At all.

A few years ago, I said don't get your financial advice from VCs that hang out in Nazi chatrooms. Some of y'all listened. Others lost their shirts on crypto.

Yes, they have sharp criticism of big tech companies that work in this space. That's a feature, not a bug.

If you ever have the immense privilege to work on something that impacts all of us, positively or negatively, expect criticism.

I'm not Miles Dyson. I'm not going to accidentally create T2. That's not a real risk.

But I might accidentally build a system that doesn't work for Black folk. Your city might turn over law enforcement decisions to a system incapable of making unbiased decisions.

Being afraid of made up ML risks, and not being afraid of very real, very present ML risks, is kinda like dogs being terribly afraid of vacuum cleaners and fireworks, while being completely unafraid of mountain lions and rattlesnakes. 🤷🏿‍♂️
@mekkaokereke Yup, bunch of blowhards got very good at spouting confident-sounding lines without understanding the meaning or value of anything, and whilst being ignorant of their own biases. Now they suddenly feel threatened when they see a statistical model do exactly the same, and presume the worst.
@mekkaokereke And let us not forget about the dangers shown for over 100 years of the original AI: Corporations
@mekkaokereke racism with plausible deniability is a feature to these people, not an issue

@mekkaokereke Law enforcement decisions are ALREADY made by a system incapable of making unbiased decisions, but some of that system is made of humans you can talk to who MIGHT change their minds.

Obscuring biases with bogus AI is just more layers of BS smeared on systems that have already proven extremely hard to hold accountable in any way.

@mekkaokereke Refuse health insurance to our descendants due to genetic markers of potential diseases
@mekkaokereke I'll never forget that video of that computer at CompUSA, where the webcam had that new facial tracking feature, and a white lady and her black coworker demonstrated that it just didn't detect his face as a face at all. That will forever be the problem of all AI: it's only as smart as the data you give it. I promise you, if I asked all these AI drawing programs to draw a cyberpunk man looking off into the distance, they'd draw a white dude every time unless I specified otherwise.
@mekkaokereke
I saw something on TV that said facial recognition works less well on those who are not white, which would increase the risk of people of colour and black people being arrested for something they haven't done. That's one of my worries about facial recognition being used by the police, and it's not like the police have a great track record in their dealings with members of ethnic minorities anyway, even without that.
@mekkaokereke
Another worry is for victims of domestic abuse by policemen who have escaped their abuser. Facial recognition could be used by them & their buddies to track down their victims.

@AutisticMumTo3 @mekkaokereke I found a classic video of it, from back when facial recognition was new. It was clear that the data used to train the software didn't have enough black people.

https://youtu.be/t4DT3tQqgRM

This goes back to the ~~nature~~ origin of film itself and this shows up again and again and again with every technology. We're literally afterthoughts. Make camera, make it beautiful, then make it work for black people.

HP computers are racist (YouTube)
@wolfkin @AutisticMumTo3 @mekkaokereke white person popping in to drop a couple of links on how it's not the nature of film but white people's decisions about how film should perform
- the Shirley card https://www.vox.com/2015/9/18/9348821/photography-race-bias
- the exposure range Polaroid picked
https://www.theguardian.com/artanddesign/2013/jan/25/racism-colour-photography-exhibition
Color film was built for white people. Here’s what it did to dark skin. (Vox)

@marypcbuk @wolfkin @AutisticMumTo3

(Unapologetic self-promo)

I'm really proud of the work that the Inclusive Imagery group at Google did on this front.

https://m.youtube.com/watch?v=2DXY9cR7vN4

We don't have to accept that photography products just don't work as well for darker skin. That's a choice. And given the skin tones of most of the world's 8 billion people, not a very good choice.

We can make the other choice, to build more inclusive imagery products that work for everyone.

Building a More Equitable Camera | Google (YouTube)
@marypcbuk @AutisticMumTo3 @mekkaokereke touché. I did misspeak with that phrase. I had meant to say the origin of film, not its inherent nature.

@marypcbuk @wolfkin @AutisticMumTo3 @mekkaokereke I first realized that it's a choice as an Anglo getting film developed in South India. There, they default to darker skin (as makes sense), so I looked radioactive in most of the photos. In the rolls I developed in the US, my Tamil friends came out extra dark and indistinct.

From what I hear, modern digital photography has made great strides toward working for everyone.

@Dangandblast @wolfkin @AutisticMumTo3 @mekkaokereke only because some people got out and pushed digital photography (which is just software) to work better for a wider range of skin tones; it's not like technology is automatically neutral (or neutral in any way, actually)
@marypcbuk @Dangandblast @wolfkin @mekkaokereke
One of the problems with facial recognition software is that it is often trained primarily on white faces.
@marypcbuk @wolfkin @mekkaokereke
It shouldn't have taken until a furniture manufacturer complained that it was affecting photos of their products to get it sorted. I'm not surprised, though.
@mekkaokereke my county (Allegheny County, Pennsylvania) is using an AI prediction model to determine which parents should be able to get their children back from our child protection services, and which ones are just never going to be competent to raise their own offspring.
@amaditalks @mekkaokereke yes, absolutely something that needs automation. There’s far too much love and caring in the system already
@amaditalks @mekkaokereke well that's horrifying and will end terribly for too many children.

@amaditalks @mekkaokereke
https://www.wired.com/story/algorithms-supposed-fix-bail-system-they-havent/

These algorithmic approaches were supposed to remove prejudice from the system, but instead have ended up encoding it into the system in a layer of opaque abstraction.

Algorithms Were Supposed to Fix the Bail System. They Haven't (WIRED)
@amaditalks @mekkaokereke I just saw a trailer for a movie about a kid raised by parents with Down syndrome, a flashback movie about how these people with Down syndrome formed a relationship and had a kid. I can't imagine these algos are gonna be very fair and considerate to those communities either.
@wolfkin I only learned about this because of a recent news story about a couple who had their child seized when they brought her here from out of state for medical care. One of them is autistic, and IIRC the other has a different kind of developmental issue, and the predictive modeling said that they were not capable of adequately caring for the baby, even though they had brought her here precisely because she needed constant medical care.
@mekkaokereke Exactly this! I’m not impressed with anything ChatGPT is doing. It’s just data vomit.

@mekkaokereke

More generally, replacing human staff, who can understand mistakes and special situations and work around them, with an "AI" system that works fine in 99% of cases but cannot do any of that when faced with the 1%.

(And if each of us has a 1% chance of getting screwed on any interaction with such systems, in a couple of years everyone will have got screwed...)
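
(Rough arithmetic, assuming each interaction is an independent 1% risk: the chance of never getting screwed across n interactions is 0.99^n, which is about 37% after 100 interactions and under 5% after 300. A few hundred interactions with such systems is easily a couple of years of ordinary life.)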

@mekkaokereke
All under the guise of "it's a machine so its decisions are neutral"

@mekkaokereke Hard disagree! Today's systems are insanely dangerous and should be regulated and called out, no doubt about that.

Being concerned about AGI *as well* is not a head-fake. We can be concerned about more than one thing.

Just like being concerned about the issues with current-day ML is not a head-fake to draw attention from climate change: both issues are real and deserve attention.

@moshez @mekkaokereke Rob Miles has some really interesting content wrt. why it's important to take AGI alignment issues seriously.

That being said, the techbro hivemind has absolutely tried to shift the discussion over to _only_ discussing the ramifications of technology that doesn't exist yet. The misuse of existing AI tech is already happening, at scale, right now, and it will only get worse as we climb the exponential curve.

Meanwhile, Microsoft just laid off an AI ethics team.

Go figure.

@duk @mekkaokereke If it's any consolation, MS also does pretty badly on long-term AI Safety, so if nothing else, they're consistent in putting profits over people.

Which is kind of my point. This isn't an either-or situation, just like "stop structural racism" and "stop structural sexism" aren't either-or, and anyone who tells you differently is probably a fascist.

We can care about both and believe both are problems we absolutely should solve.

@mekkaokereke >>There are very real, very serious, risks from Machine Learning in general, but they are not the risks that these delusional dudes are talking about.<<

This was going to be my only comment when I saw that first post. There are things to fear from AI, but just not things like a cyberwar in 2029 with autonomous robotic soldiers à la Terminator 2. More like AI stealing all your art styles with no attribution or compensation, for one.