NEW: Every year, governments use algorithms to flag people receiving welfare benefits as "high risk" of committing fraud. Today, for the first time, a joint investigation by Lighthouse Reports and WIRED can reveal how one of these algorithms works. We obtained the algorithm's full code and training data, recreated the system, and found discrimination based on gender and ethnicity. Part 1 is here: https://www.wired.com/story/welfare-state-algorithms/
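To make the idea concrete, here is a minimal sketch of the kind of disparity check an audit like this can run. Everything below is invented for illustration: the toy scorer, the features, and the threshold are hypothetical stand-ins, not Rotterdam's actual model or data.

```python
# Hypothetical sketch: testing a risk-scoring model for group disparity.
# A real audit would load the reconstructed model and its training data;
# this toy scorer just demonstrates the comparison being made.
import random

random.seed(0)

def toy_risk_score(is_woman, speaks_dutch_poorly):
    # Invented formula: a base score plus penalties tied to personal traits.
    base = random.uniform(0.2, 0.6)
    return min(1.0, base + 0.15 * is_woman + 0.2 * speaks_dutch_poorly)

def flag_rate(people, threshold=0.6):
    # Share of a group whose score crosses the investigation threshold.
    flagged = [p for p in people if toy_risk_score(*p) >= threshold]
    return len(flagged) / len(people)

women = [(1, 0)] * 10_000  # (is_woman, speaks_dutch_poorly)
men = [(0, 0)] * 10_000
print(f"women flagged: {flag_rate(women):.2%}")
print(f"men flagged:   {flag_rate(men):.2%}")
```

If the flag rates differ substantially between groups that are otherwise alike, the model is selecting people for fraud investigation based on who they are rather than what they did; that gap is what the investigation measured.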
"Inside the Suspicion Machine" (WIRED): Obscure government algorithms are making life-changing decisions about millions of people around the world. Here, for the first time, we reveal how one of these systems works.
In part 2 of the investigation, we explore the toll these fraud investigations take on the people subjected to them: https://www.wired.com/story/welfare-algorithms-discrimination/
We'll be publishing parts 3 and 4 tomorrow. You can read the full methodology here: https://www.lighthousereports.com/suspicion-machines-methodology/
"Suspicion Machines Methodology" (Lighthouse Reports): A detailed explainer on what we did and how we did it.

Also, you can try the algorithm out here: https://rotterdam.lav.io/

Lighthouse Reports began this investigation 2 years ago, and we at WIRED have been working on it for the past 6 months. It involved reporting in 12 countries, hundreds of public records requests, international travel, countless hours of design and engineering, and much more. Personally, I don't think I've ever worked harder on anything in my life. I'm beyond proud of the whole team, who worked tirelessly to make this happen.

Rotterdam Risk Scores

@couts amazing work!
@cliffclavin thank you!