President Trump just signed an executive order that threatens to punish states that pass AI laws – from Colorado’s SB24‑205 to New York’s measures. The move pits the White House against governors like Gavin Newsom and raises alarm over algorithmic discrimination. What does this mean for AI governance? Dive in. #AI #ExecutiveOrder #AlgorithmicDiscrimination #ColoradoSB24205

🔗 https://aidailypost.com/news/trump-signs-executive-order-threatening-punish-states-that-pass-ai

Oh joy. I feel so much safer. /s

Border Patrol is monitoring US drivers and detaining those with 'suspicious' travel patterns
https://apnews.com/article/immigration-border-patrol-surveillance-drivers-ice-trump-9f5d05469ce8c629d6fecf32d32098cd

#Authoritarianism #reasonablegrounds #AlgorithmicDiscrimination

Border Patrol monitors US drivers and detains Americans for ‘suspicious’ travel

The U.S. Border Patrol is monitoring millions of American drivers nationwide in a secretive program to identify and detain people whose travel patterns it deems suspicious. The Associated Press has found that the predictive intelligence program has resulted in people being stopped, searched, and in some cases arrested. A network of cameras scans and records vehicle license plate information, and an algorithm flags vehicles deemed suspicious based on where they came from, where they were going, and which route they took. Federal agents may then alert local law enforcement. The Border Patrol's parent agency said it uses license plate readers to help identify threats and disrupt criminal networks and is governed by "federal law and constitutional protections."

AP News
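
The AP piece does not say how the flagging logic actually works. Purely as an illustration of what "flagging vehicles by origin, destination, and route" can look like in practice, here is a hypothetical rule-based sketch; every camera name, rule, and threshold below is invented, not taken from the reporting.

```python
from dataclasses import dataclass

@dataclass
class PlateRead:
    plate: str        # license plate as read by the roadside camera
    camera_id: str    # which camera produced the read
    timestamp: float  # Unix time of the read

# Invented rule: a trip is "suspicious" if it starts near the border and
# passes back-road cameras (i.e. appears to avoid major highways).
BORDER_CAMERAS = {"cam_border_1", "cam_border_2"}
BACKROAD_CAMERAS = {"cam_rural_7", "cam_rural_9"}

def flag_trip(reads: list[PlateRead]) -> bool:
    """Return True if a sequence of plate reads matches the invented rule."""
    if len(reads) < 2:
        return False
    reads = sorted(reads, key=lambda r: r.timestamp)
    starts_at_border = reads[0].camera_id in BORDER_CAMERAS
    uses_backroads = any(r.camera_id in BACKROAD_CAMERAS for r in reads)
    return starts_at_border and uses_backroads
```

The point of the toy rule is that nothing in it requires any evidence of wrongdoing: any driver whose route happens to match the pattern gets flagged, which is exactly the concern the AP reporting raises.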

AI could transform women’s health – or reinforce its inequalities. To prevent AI from delaying care instead of improving it, we need more diverse data, stronger research, and clear rules for the use of algorithms.

👉 Read the whole article: https://algorithmwatch.org/en/when-will-ai-improve-womens-health/

With our reporting form, we're gathering information on cases of #AlgorithmicDiscrimination. Every report counts: https://algorithmwatch.org/report-algorithmic-discrimination/

As part of the EU Horizon Project #FINDHR, @algorithmwatch_ch has been working with a European consortium from academia, industry, and civil society to develop solutions to counter #AlgorithmicDiscrimination in hiring.

💢 This interdisciplinary approach is essential. Algorithmic discrimination does not arise solely at the technical level, nor can it be solved there alone. The social, cultural and political context in which a system is developed and used must also be taken into account.

As housing gets more competitive, landlords and governments are outsourcing critical decisions to automated systems. But these tools often replicate old biases, just faster and at scale.

In the US, SafeRent’s AI tool for tenant screening gave consistently lower scores to Black and Hispanic renters, and to people using housing vouchers - a legal form of income assistance. This is what we would call #AlgorithmicDiscrimination.

https://racismandtechnology.center/2025/01/14/racist-technology-in-action-ai-tenant-screening-fails-the-fairness-test/

Report here: https://algorithmwatch.org/en/report-algorithmic-discrimination/

Racist Technology in Action: AI tenant screening fails the 'fairness' test

SafeRent Solutions, an AI-powered tenant screening company, settled a lawsuit alleging that its algorithm disproportionately discriminated against Black and Hispanic renters and those relying on housing vouchers. Tenant Mary Louis filed the suit after being denied housing on the basis of a SafeRent score.

Racism and Technology Center

"After the entry into force of the Artificial Intelligence (AI) Act in August 2024, an open question is its interplay with the General Data Protection Regulation (GDPR). The AI Act aims to promote human-centric, trustworthy and sustainable AI, while respecting individuals' fundamental rights and freedoms, including their right to the protection of personal data. One of the AI Act's main objectives is to mitigate discrimination and bias in the development, deployment and use of 'high-risk AI systems'. To achieve this, the act allows 'special categories of personal data' to be processed, based on a set of conditions (e.g. privacy-preserving measures) designed to identify and to avoid discrimination that might occur when using such new technology. The GDPR, however, seems more restrictive in that respect. The legal uncertainty this creates might need to be addressed through legislative reform or further guidance."

https://www.europarl.europa.eu/thinktank/en/document/EPRS_ATA(2025)769509

#EU #AI #AIAct #GDPR #DataProtection #AlgorithmicDiscrimination #AlgorithmicBias #Privacy

Algorithmic discrimination under the AI Act and the GDPR | Think Tank | European Parliament

"In October 2021, we sent a freedom-of-information request to the Social Insurance Agency attempting to find out more. It immediately rejected our request. Over the next three years, we exchanged hundreds of emails and sent dozens of freedom-of-information requests, nearly all of which were rejected. We went to court, twice, and spoke to half a dozen public authorities.

Lighthouse Reports and Svenska Dagbladet obtained an unpublished dataset containing thousands of applicants to Sweden’s temporary child support scheme, which supports parents taking care of sick children. Each of them had been flagged as suspicious by a predictive algorithm deployed by the Social Insurance Agency. Analysis of the dataset revealed that the agency’s fraud prediction algorithm discriminated against women, migrants, low-income earners and people without a university education.

Months of reporting — including conversations with confidential sources — demonstrate how the agency has deployed these systems without scrutiny despite objections from regulatory authorities and even its own data protection officer."

https://www.lighthousereports.com/investigation/swedens-suspicion-machine/?utm_source=pocket_shared

#Sweden #SocialInsurance #ChildSupport #Algorithms #AlgorithmicDiscrimination #AlgorithmicBias

Sweden’s Suspicion Machine

Behind a veil of secrecy, the social security agency deploys discriminatory algorithms searching for a fraud epidemic it has invented

Lighthouse Reports
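
Lighthouse does not publish its analysis code in the piece, but the core of such an audit is a group-level comparison of who gets flagged. A minimal sketch of that comparison, assuming a table with one row per applicant and invented column names (the real Lighthouse/SvD dataset is not public):

```python
import pandas as pd

# Hypothetical data; columns and values are illustrative only.
# "flagged" is 1 if the fraud model selected the applicant for a control.
df = pd.DataFrame({
    "gender":  ["woman", "man", "woman", "man", "woman", "man"],
    "flagged": [1, 0, 1, 0, 1, 1],
})

# Flag rate per group: if one group is selected for fraud controls far more
# often than another, the model's outcomes are skewed against that group.
flag_rates = df.groupby("gender")["flagged"].mean()
disparity = flag_rates.max() / flag_rates.min()  # most-flagged vs least-flagged group

print(flag_rates)
print(f"Disparity ratio: {disparity:.2f}")
```

A disparity ratio well above 1 does not by itself prove unlawful discrimination, but it is exactly the kind of group-level evidence the investigation describes for women, migrants, low-income earners, and people without a university education.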

this "less personalized ads" feature from Meta is asinine -- they're still allowing "personalization" based on location, age and gender, even though 2 of the big practical reasons to turn "personalized" advertising _off_ are to avoid #elderFraud and #algorithmicDiscrimination

https://techcrunch.com/2024/11/12/europes-dma-forces-meta-towards-less-personalized-ads/

Europe's DMA forces Meta towards 'less personalized ads' | TechCrunch

Meta, under legal pressure in the European Union over a binary 'pay us or consent to ad tracking' choice it currently offers regional users of its social

TechCrunch

#FRANCE #CNAF #Algorithms #RiskScoring #AlgorithmicDiscrimination: "Fifteen French NGOs are suing the public body that distributes allowances for families, youth, housing, and inclusion (CNAF) at the French state council over the use of a risk-scoring algorithm, which impacts almost half of France's population, according to a Wednesday (16 October) press release.

This legal action follows the Court of Justice of the EU (CJEU) ruling that decision-making using scoring algorithms that use personal data is unlawful under the EU's data privacy regulation (GDPR).

The NGOs are calling on the state council to refer the case to the CJEU for a preliminary ruling. The case could take two to five years, depending on how the reference is handled.

"This algorithm mathematically reflects the discriminations already present in our society. It is neither neutral nor objective," said Marion Ogier, a lawyer at the Human Rights League, at a press conference in Paris on Wednesday.

Since 2010, the CNAF has been using an algorithm to select recipients for a review of their benefits. These credit checks are focused on cases deemed as 'higher risk' based on the recipient's profile and situation.

However, a number of local investigations published in December 2023 criticised these checks for not being truly random. Seventy per cent of 128,000 credit checks conducted in 2021 came from scoring algorithms, revealed CNAF in a 2022 report.

"The CNAF algorithm is just one part of the system. The public pension schemes, health insurance, and employment service all use similar algorithms,” Ogier added."

https://www.euractiv.com/section/tech/news/french-ngos-sue-public-body-over-scoring-algorithm/

French NGOs sue public body over scoring algorithm

"These risk-scoring systems could be considered as presenting an unacceptable risk under the Artificial Intell

EURACTIV
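
The article describes the mechanism only in outline: since 2010 the CNAF scores recipients and concentrates its checks on those deemed "higher risk", so most controls come from the scoring model rather than random selection. A minimal sketch of that selection pattern, with every score and number below invented for illustration:

```python
import random

# Purely illustrative: the CNAF model, its features, and its scores are not public.
random.seed(0)
recipients = [{"id": i, "risk_score": random.random()} for i in range(1000)]

REVIEW_BUDGET = 50  # how many benefit reviews the agency can run

# Instead of drawing cases at random, take the highest-scoring recipients.
by_score = sorted(recipients, key=lambda r: r["risk_score"], reverse=True)
selected_for_review = by_score[:REVIEW_BUDGET]

print(f"Selected {len(selected_for_review)} of {len(recipients)} recipients for review")
```

The NGOs' objection is to what feeds the score: if the inputs correlate with poverty, family situation, or disability, then concentrating reviews on the highest scorers concentrates them on those groups, which is what Ogier means by the algorithm "mathematically reflecting" existing discrimination.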

https://www.whitehouse.gov/briefing-room/speeches-remarks/2023/11/01/remarks-by-vice-president-harris-on-the-future-of-artificial-intelligence-london-united-kingdom/
…history will show that this was the moment when we had the opportunity to lay the groundwork for the future of #AI

A future where AI is used to advance #humanrights and human dignity, where privacy is protected…where we make our democracies stronger and our world safer…
…to help make sure that the benefits of AI are shared equitably and to address predictable threats, including #deepfakes, #dataprivacy violations, and #algorithmicdiscrimination

Remarks by Vice President Harris on the Future of Artificial Intelligence | London, United Kingdom | The White House

U.S. Embassy, London, United Kingdom 1:43 P.M. GMT THE VICE PRESIDENT: Hello, everyone. Good afternoon. Good afternoon, everyone.

The White House