This is the EXACT kind of shit we have been warning you about. "A.I." tools trained on data filled with assumptions and prejudices about marginalized people, and then deployed in situations with literal life-&-death implications for the people involved. In this case, disabled parents being more likely to be flagged as "unfit."

This is utter nightmare shit.

https://apnews.com/article/child-protective-services-algorithms-artificial-intelligence-disability-f5af28001b20a15c4213e36144742f11

Not magic: Opaque AI tool may flag parents with disabilities

PITTSBURGH (AP) — For the two weeks that the Hackneys’ baby girl lay in a Pittsburgh hospital bed weak from dehydration, her parents rarely left her side, sometimes sleeping on the fold-out sofa in the room.

Associated Press

@Wolven someone taking my kid away is nightmare fuel. I just can’t even start to think about what I would do—I care about them more than anyone in the world, and I freak out even just thinking about the scenario.

I can’t begin to fathom what these parents have gone through.

@Wolven

"They wonder if an artificial intelligence tool that the Allegheny County Department of Human Services uses to predict which children could be at risk of harm singled them out..."

so Allegheny has their own "pre-cogs" à la "Minority Report"?

@mamund predictive risk assessments have been used in CPS social work for a long time; it's just that now they let the automated algorithms run on their own
@Wolven as long as it cuts labor costs, i guess. paying people to evaluate child welfare is just too onerous for us, so we'll let a computer decide what families remain intact. jesus fucking christ.
@Wolven I can’t comment on the legality of this scheme, however a very recent Royal Commission in Australia just examined a scheme that targeted welfare recipients. It was given the name Robodebt. It was found to be partly illegal and grossly unfair.
@Wolven this is where I live and I had no idea this was in play here but now I understand why so many CYS cases that should’ve been shut down from the jump were allowed to drag on for months and years. I’m sick. Absolutely sick.

@Wolven

Not even a half shuffle step from there to straight up eugenics.

@Wolven Oh, fantastic, the mother in question has ADHD, like myself, one of the developers is here in NZ, and

"The developers have started new projects with child welfare agencies in Northampton County, Pennsylvania, and Arapahoe County, Colorado. The states of California and Pennsylvania, as well as New Zealand and Chile, also asked them to do preliminary work."

So good to know I'm going to be declared unfit any day now by a machine.

@Wolven (My son is autistic and highly sensory-seeking, he's basically a miniature Johnny Knoxville, and we've already had one hospital visit with a suspected concussion, so yeah, I think I have grounds to be worried, lmao, fuck...)
@Wolven no one talks about rule-based systems anymore, but they had a defined set of rules that could be read and, if necessary, argued about. I used to build loan approval systems with them. No racial or disability bias allowed, those rules just weren't coded in. They would be screaming obvious if they were there. This case should have been handled by agreed rules automated or not.
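The contrast that post draws can be made concrete. Below is a minimal sketch (not anyone's actual system — every rule name, field, and threshold here is invented for illustration) of what a transparent rule-based assessment looks like: the rules are explicit data that can be read, audited, and argued about, and attributes that were never coded into a rule simply cannot influence the outcome.

```python
# Hypothetical sketch of a transparent rule-based assessment, in the spirit
# of the loan-approval systems described above. All rule names, case fields,
# and thresholds are invented for illustration.

RULES = [
    # (rule name, predicate over the case record)
    ("prior_substantiated_report",
     lambda case: case.get("substantiated_reports", 0) > 0),
    ("unexplained_injury",
     lambda case: case.get("unexplained_injury", False)),
]

def assess(case: dict):
    """Return (flagged, names of rules that fired) -- a fully auditable trace."""
    fired = [name for name, predicate in RULES if predicate(case)]
    return (len(fired) > 0, fired)

# Because no rule reads fields like disability or race, those attributes
# cannot affect the result -- and if someone added such a rule, it would be
# "screaming obvious" in the rule list, unlike a weight buried in a model.
flagged, reasons = assess({"substantiated_reports": 0})
print(flagged, reasons)  # False []
```

The design point is that every decision comes with the list of rules that fired, so it can be contested case by case — exactly the property an opaque statistical risk score lacks.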
@Wolven ok writers. Here’s a novel or screenplay idea for you. Along the lines of #NeverLetMeGo by Ishiguro.

@Wolven
> real-world laboratory for testing AI-driven child welfare tools

Utterly nightmarish indeed.

@Wolven @dweinberger is likely to have a useful take on this subject.

This article, "Alien Knowledge", from 2017 is a great intro to the dynamic and issues.

https://www.wired.com/story/our-machines-now-have-knowledge-well-never-understand/

In a nutshell (my hot take): how do we treat sources of answers that are useful but arrived at in ways we can't (ever) practically check, and how do we insulate against pernicious bias, ulterior or inadvertent?