NEW: Every year, governments use algorithms to flag welfare recipients as at "high risk" of committing fraud. Today, for the first time, a joint investigation by Lighthouse Reports and WIRED can reveal how one of these algorithms works. We obtained the full algorithm code and training data and recreated the system. What we found was discrimination based on gender and ethnicity. Part 1 is here: https://www.wired.com/story/welfare-state-algorithms/
Inside the Suspicion Machine

Obscure government algorithms are making life-changing decisions about millions of people around the world. Here, for the first time, we reveal how one of these systems works.

WIRED
@couts @noyes Great article! I always felt the most significant aspect of GDPR was the right to human review of algorithmic decision-making. But I wonder if there's another aspect here: we're starting to uncover systemic bias in historical human decisions used as training data, bias that was always there yet hidden. Counterintuitively, that would mean we may want more and better algorithmic decision-making to eliminate that bias (shades of Judge Dredd, I know).