Predictive policing systems exacerbate racism and discrimination against people from lower socio-economic groups.

ORG supports Amnesty in calling for predictive policing systems to be BANNED.

Sign the petition to #StopAutomatedRacism TODAY ⬇️

#policing #police #precrime #predictivepolicing #codedbias #ukpolitics #ukpol

https://www.amnesty.org.uk/actions/ban-predictive-policing

Stop Automated Discrimination

We’ve found out that 3/4 of police forces across the UK are using technology to try to “predict crime” - and almost no one knows about it. But we’re trying to stop it. Sign the petition:

Amnesty's new report shows that the police are supercharging racism through predictive policing.

At least 33 UK police forces have used prediction or profiling tools.

“These systems are developed and operated using data from policing and the criminal legal system. That data reflects the structural and institutional racism and discrimination in policing and the criminal legal system.”

#policing #police #precrime #predictivepolicing #codedbias #ukpolitics #ukpol

https://www.theguardian.com/uk-news/2025/feb/19/uk-use-of-predictive-policing-is-racist-and-should-be-banned-says-amnesty

UK use of predictive policing is racist and should be banned, says Amnesty

Exclusive: rights group says use of algorithms and data reinforces discrimination in UK policing

The Guardian
Real Emotions. Generative AI and Right-Wing Worldviews

The radical right loves generative AI. Trump and Musk alike share masses of AI-generated images on their platforms, and the AfD has long recognised the value of Midjourney & Co. for its election campaigning. What this shows: the technology is not a politically neutral tool. As a nostalgia machine and amplifier of clichés, it practically lends itself to the construction of right-wing worldviews.

Geschichte der Gegenwart

AI is rapidly expanding into all areas of public life.

Automated systems will be commonplace in making decisions that can entrench discrimination and inequality.

Join us in pushing back against this dangerous threat to your rights in the UK by signing our petition ⬇️

#DUABill #DataBill #dataprotection #AI #gdpr #privacy #codedbias #ukpolitics

https://you.38degrees.org.uk/petitions/ai-says-no-tell-keir-starmer-that-people-not-machines-should-oversee-life-changing-decisions

AI says NO: Tell Keir Starmer that people, not machines should oversee life-changing decisions

Clause 80 of the Government’s Data (Use and Access) Bill could strip away your right to decide whether people or machines make decisions that impact our lives. Big Tech and Whitehall are eager to cut costs by using AI algorithms to make crucial decisions about you. Soon, computers—not people—could determine the outcome of your benefit claims, job applications, or mortgage approvals. Right now, UK GDPR protects your right not to be simply subjected to life-changing decisions made solely by...

38 Degrees

Automated decision-making exposes racialised communities to greater discrimination from algorithmic biases, as seen in recruitment.

The UK Data Use and Access Bill will expand the use of decisions made solely by AI without human review, so unfair practices could go unchallenged.

#DUABill #dataprotection #codedbias #gdpr #AI #ukpolitics #databill

https://www.independent.co.uk/news/world/americas/robots-racism-algorithms-job-hiring-b1860835.html

How racist robots are being used in recruitment

Some systems have been shown to associate white names with being more qualified and to weed out applicants who went to women’s colleges

The Independent
Both Gemini and ChatGPT are so woke that they can't answer a simple evolution question, even though the answer is a plainly observable fact.

#google #gemini #openai #chatgpt #gpt #codedbias #bias #ai
Facial recognition has many problematic aspects - see #CodedBias - but until now it had at least spared a vulnerable and rightly protected part of the population: minors. Now it emerges that, in order to better train its algorithms, the US DHS (Department of Homeland Security) has for some time been collecting facial images of foreign minors, including infants, some of them unaccompanied, at the border with Mexico
https://www.technologyreview.com/2024/08/14/1096534/homeland-security-facial-recognition-immigration-border/
#usa #messico
The US wants to use facial recognition to identify migrant children as they age 

A previously unreported project is intended to improve how facial recognition algorithms track children over time.

MIT Technology Review

"AI raises the stakes... data is not only used to make decisions about you, but rather to make deeply powerful inferences about people and communities."

Beware greater automated decision-making with fewer safeguards over our data.

The fight for algorithmic justice is imperiled by the #DataGrabBill.

#HandsOffOurData #DataGrab #GDPR #DPDI #DPDIBill #dataprotection #privacy #ukpolitics #humanrights #datarights #digitalrights #AI #codedbias #facialrecognition

https://themarkup.org/hello-world/2023/11/18/unmasking-ai-and-the-fight-for-algorithmic-justice

‘Unmasking AI’ and the Fight for Algorithmic Justice – The Markup

A conversation with Dr. Joy Buolamwini

Without oversight or strong data rights, facial recognition will further embed discrimination if the #DataGrabBill becomes law.

Accuracy diminishes when the subject is a person of colour, and falls further the younger the person is, disproportionately misidentifying young Black men.

#HandsOffOurData #DataGrabBill #GDPR #DPDIBill #dataprotection #privacy #ukpolitics #facialrecognition #codedbias #ai #surveillance

https://www.amnesty.ca/surveillance/racial-bias-in-facial-recognition-algorithms/

Racial bias in facial recognition algorithms

Facial recognition makes racial discrimination worse. Learn more about how it threatens human rights and take action to ban it.

Amnesty International Canada

"The UK government has bungled what could have been an opportunity for real global AI leadership due to the Summit’s limited scope and invitees.

The agenda’s focus on future, apocalyptic risks belies the fact that government bodies and institutions in the UK are already deploying AI and automated decision-making in ways that are exposing citizens to error and bias on a massive scale."

🗣️ ORG's @abigail

#AISafetySummit #AISummitOpenLetter #AI #codedbias