NEW: Every year, governments use algorithms to flag people receiving welfare benefits as "high risk" of committing fraud. Today, for the first time, a joint investigation by Lighthouse Reports and WIRED can reveal how one of these algorithms works. We obtained the full algorithm code and the training data and recreated the system. What we found was discrimination based on gender and ethnicity. Part 1 is here: https://www.wired.com/story/welfare-state-algorithms/
Inside the Suspicion Machine

Obscure government algorithms are making life-changing decisions about millions of people around the world. Here, for the first time, we reveal how one of these systems works.

WIRED
In part 2 of the investigation, we explore the human toll these fraud investigations have on the people who are subjected to them: https://www.wired.com/story/welfare-algorithms-discrimination/
We'll be publishing parts 3 and 4 tomorrow. You can read the full methodology here: https://www.lighthousereports.com/suspicion-machines-methodology/
Suspicion Machines Methodology

A detailed explainer on what we did and how we did it

Lighthouse Reports

Also, you can try the algorithm out here: https://rotterdam.lav.io/

Lighthouse Reports began this investigation 2 years ago, and we at WIRED have been working on it for the past 6 months. It included reporting in 12 countries, hundreds of public records requests, international travel, countless hours of design and engineering, and much more. Personally, I don't think I've ever worked harder on something in my life. I'm beyond proud of the whole team, who worked tirelessly to make this happen.

Rotterdam Risk Scores

In part 3 of our Lighthouse Reports x WIRED investigation, we dove into the political forces that led to the creation of the Danish Public Benefits Administration's data mining unit, which transformed Denmark's famed welfare state into an apparatus of mass surveillance: https://www.wired.com/story/algorithms-welfare-state-politics/
Finally, in part 4, we explored businesses that sell risk-scoring algorithms to governments around the world, all while forcing secrecy around the technology on the grounds of intellectual property: https://www.wired.com/story/welfare-fraud-industry/
@couts excellent article! Interesting that the purpose of the data collection is to discover fraud, not to find all those who would be entitled to benefits but aren't able to access them, or aren't getting enough. Also, this massive bureaucracy could be spared with universal basic income… and the cost to mental health should be counted for those falsely accused, constantly treated as suspects when they are already vulnerable

@couts

If only the same efforts were placed on billionaire tax evasion, corporate financial fraud, and oil oligarchs' money laundering.

The poor are the targets of algorithmic inequity.

@couts I'm too sad to read the investigation but thank you for carrying it out ❤️
@Loukas It's definitely a tough one—thanks for reading!

@couts The amount they spend detecting and eliminating fraud is likely more than the real fraud.

In any case the big fraud is not unemployed or underemployed people making too much money. It is big corporations paying less than a living wage because they know the welfare system will pick up the slack.

@couts

Wow super-cool piece covering the enraging workings of software that makes decisions about people's real lives. The algorithm discriminates but in opaque ways so it's hard to find recourse against this "suspicion machine".

It is an example of what Cathy O'Neil calls "Weapons of Math Destruction".

https://www.wired.com/story/welfare-state-algorithms/

@CelloMomOnCars @couts "Weapons of Math Destruction" is perfect! #AI #AIhype

@erchanda @couts

I'm a big fan of this book. O'Neil makes the concepts around the misuse of Big Data accessible; she's funny, too, and makes you laugh even as you're boiling inside. Worth a read, imo.

Because algorithms are used everywhere, from teacher evaluations to policing to college admissions, on and on.

@couts What about GOP House members who got PPP and small business loans they don't have to repay, and don't really have proper justification for?
@couts Maybe they should also use algorithms to identify those individuals highly likely to indulge in tax evasion?
@couts I clicked the link, but can't find the story ... there's a description, and a lot of links to other stories. There is a pop-up asking me to buy a membership, but not telling me membership is required to read the story ... am I just missing the obvious, and the reason I can't find the story is because I don't pay for Wired?
@missladyartemis Sorry that's happening! I'm not sure what the deal is, but it may be the paywall. Thanks for trying to read at least!
@couts hey, good journalism needs funding, I appreciate you sharing the news!
@couts remarkable. How are they able to get data collected from citizens? If they know, they will pollute it.
@couts
Looks very important but hit the paywall. Maybe @ThomHartmann or @democracynow can report on this for us.
#Racism #technology #antiracism #coding
@couts Never seen anything quite like this before. Agreed, important work.
@couts Funny. I didn’t think people on benefits were noted for NFTs, pyramid schemes, insider trading, … As usual, the rich crooks use misdirection.
@couts this is mind blowing… as usual poor people are the target of inequality. Thanks for the report ❤️
@couts that's really exhausting. The Norwegian central bank always runs a policy of a 4.5% unemployment rate, to prevent inflation. But what the average citizen doesn't realise is that this policy is used as a ''war'' on resourceful individuals. The politicians are basically putting millions into preventing small businesses from growing. And it is painful and unnecessary! I bet they use the same algorithm here.. it feels like we are becoming the 52nd state🙈🤷‍♂️ after all
@couts I was audited once years ago. They discovered that I bought food to eat. 🤦‍♂️
@couts we had the same here in The Netherlands with SyRI. The algorithm basically declared everyone of color who needed benefits a high fraud risk. And the government workers trusted its conclusions without question.
It had to be banned by the court before our government stopped using it. Cost efficiency and ease of use were more important than the rights of their citizens.
@couts This is outstanding, Andrew: you present the findings in a very clear, straightforward way that makes it easy to understand just how limited, opaque and unfair these algorithms can be. Very much looking forward to the rest of the series
@ebishirl Thanks so much for your kind words and for reading. The whole series is now live. You can find all the stories here: https://www.wired.com/tag/series-suspicion-machine/
Series Suspicion Machine | Latest News, Photos & Videos

Find the latest Series Suspicion Machine news from WIRED. See related science and technology articles, photos, slideshows and videos.

WIRED

@couts @davidoclubb And today, there is a Welsh Senedd meeting where biometric data collection in schools is being talked about.

About 15:25 at http://www.senedd.tv/Meeting/Index/7b473252-8c64-43d1-bca5-b00bae918b5a

@couts This is definitely something I'm interested in
@couts This is unbelievable reporting. We need much more of this in the world. The question is: what policies will help civil society and government evaluate these sorts of algorithms in a remotely scaleable way?
@couts somebody once said that the problem is not that AI is smart and going to take over the world, but rather that AI is stupid and it already has
@couts @noyes great article! I always felt the most significant aspect of GDPR was the right to human review of algorithmic decision making. But wondering if there’s another aspect here where we start to uncover systemic bias from historical human decision training data, which had always been there yet hidden. That would mean, counterintuitively, that we may want more and better algorithmic decision making to eliminate that bias (shades of Judge Dredd I know).