NEW: Every year, governments use algorithms to flag people receiving welfare benefits as being at "high risk" of committing fraud. Today, for the first time, a joint investigation by Lighthouse Reports and WIRED can reveal how one of these algorithms works. We obtained the algorithm's full code and training data and recreated the system. What we found was discrimination based on gender and ethnicity. Part 1 is here: https://www.wired.com/story/welfare-state-algorithms/
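For a sense of what "recreating the system and finding discrimination" can look like in practice, here is a minimal, hypothetical sketch of a group-disparity audit. The scores and group labels below are invented for illustration; the actual Rotterdam model, features, and data are described in the methodology report linked later in this thread.

```python
# Hypothetical sketch: auditing risk scores for disparity across a
# protected attribute. All numbers here are made up for illustration.
from statistics import mean

# Each record: (model risk score in [0, 1], group label)
scored = [
    (0.82, "A"), (0.75, "A"), (0.64, "A"),
    (0.31, "B"), (0.44, "B"), (0.29, "B"),
]

def mean_score_by_group(records):
    """Average the model's risk score within each group."""
    groups = {}
    for score, group in records:
        groups.setdefault(group, []).append(score)
    return {g: mean(scores) for g, scores in groups.items()}

by_group = mean_score_by_group(scored)
# A large gap in average scores between groups is one simple signal
# that the model treats the groups differently.
disparity = max(by_group.values()) - min(by_group.values())
print(by_group, round(disparity, 3))
```

A real audit would go much further (holding other variables constant, testing counterfactual inputs, checking error rates per group), but the core move is the same: score the population, split by a sensitive attribute, and compare.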
Inside the Suspicion Machine

Obscure government algorithms are making life-changing decisions about millions of people around the world. Here, for the first time, we reveal how one of these systems works.

WIRED
In part 2 of the investigation, we explore the human toll these fraud investigations have on the people who are subjected to them: https://www.wired.com/story/welfare-algorithms-discrimination/
We'll be publishing parts 3 and 4 tomorrow. You can read the full methodology here: https://www.lighthousereports.com/suspicion-machines-methodology/
Suspicion Machines Methodology

A detailed explainer on what we did and how we did it

Lighthouse Reports

Also, you can try the algorithm out here: https://rotterdam.lav.io/

Lighthouse Reports began this investigation 2 years ago, and we at WIRED have been working on it for the past 6 months. It included reporting in 12 countries, hundreds of public records requests, international travel, countless hours of design and engineering, and much more. Personally, I don't think I've ever worked harder on something in my life. I'm beyond proud of the whole team, who worked tirelessly to make this happen.

Rotterdam Risk Scores

In part 3 of our Lighthouse Reports x WIRED investigation, we dove into the political forces that led to the creation of the Danish Public Benefits Administration's data mining unit, which transformed Denmark's famed welfare state into an apparatus of mass surveillance: https://www.wired.com/story/algorithms-welfare-state-politics/
Finally, in part 4, we explored businesses that sell risk-scoring algorithms to governments around the world, all while forcing secrecy around the technology on the grounds of intellectual property: https://www.wired.com/story/welfare-fraud-industry/
@couts excellent article! Interesting that the purpose of the data collection is to discover fraud, not to find all those who are entitled to benefits but can't access them or aren't getting enough. Also, this massive bureaucracy would be spared with universal basic income… and the mental health cost should be measured for those falsely accused, who are constantly treated as suspects when already vulnerable