"Rarely has a new industry burned through so much good will in such a short time. This is both a chance and a danger."
New in Misaligned: "The AI Industry, Unloved" #AIEthics #AI #AIRegulation
https://read.misalignedmag.com/the-ai-industry-unloved-248ecd2d4304
Due to an error in a facial recognition system created by the startup Clearview AI, an innocent woman in the US spent 5 months in jail.
Judges tend to place complete trust in #AI-generated results, while developers avoid responsibility because there is no malicious intent in their actions.
Despite the real threat of unlawful arrests, law enforcement agencies around the world are unlikely to abandon such algorithms.
https://edition.cnn.com/2026/03/29/us/angela-lipps-ai-facial-recognition

A Tennessee grandmother spent more than five months in jail after police used an AI facial recognition tool to link her to crimes committed in North Dakota – a state she says she’d never been to before.
Now on #Zenodo: “Second Physics and the Accountability Framework for Automated Driving Systems (ADS): Technical Limits, Human Oversight, and Responsibility for Courts and Regulators.”
Automated Driving Systems (ADS) are moving from closed, geofenced environments into open public roads. On golf courses, industrial sites, and campus shuttles, current technology already delivers what can reasonably be called automated driving. The remaining questions for courts and regulators are not whether ADS are technically possible, but under what conditions they can be deployed at scale while preserving safety and assignable responsibility.

This technical note introduces a conceptual framework based on Second Physics, in which legal responsibility ρ is treated as a conserved quantity within a socio-technical system where humans, institutions, and technical artefacts interact. In this view, AI systems cannot be the final bearers of responsibility: only human or institutional actors endowed with an underlying source f₀ can ultimately carry ρ. Applied to ADS, this implies that talk of ‘AI fault’ cannot eliminate responsibility; it can only obscure the flows of ρ between manufacturers, software providers, operators, insurers, regulators, and users.

The note identifies three structural constraints that any acceptable legal regime for ADS should respect: (i) an AI Non-Terminality Principle — AI cannot be the ultimate responsibility bearer; (ii) a Control–Benefit Alignment Principle — responsibility should follow effective control over ADS behaviour and long-term benefits from its operation; and (iii) an Evidence Preservation and Disclosure Principle — logs, sensor data, and software states must be recorded, retained, and made available to the competent authorities after serious accidents. Without such a framework, victims cannot realistically prove ‘AI fault’, and prudential solvency constraints will deter manufacturers and insurers from large-scale deployment, regardless of technical feasibility.
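The abstract's conservation claim can be sketched in notation. This is a hypothetical reading, using only the abstract's own symbols (ρ for responsibility, f₀ for the underlying source), not a formula taken from the note itself:

```latex
% Responsibility rho as a conserved quantity over the actors i of a
% socio-technical system (manufacturers, operators, insurers, regulators, users):
\sum_{i \in \mathcal{A}} \rho_i(t) = \rho_{\mathrm{total}} \qquad \text{for all } t
% AI Non-Terminality Principle: an actor can terminally bear responsibility
% only if it is endowed with an underlying source f_0 (hypothetical rendering):
\rho_i > 0 \;\Longrightarrow\; i \text{ is a human or institutional actor with } f_0
```

On this reading, attributing an accident to ‘AI fault’ cannot destroy ρ; it can only redistribute it among the human and institutional actors in the sum.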
#Zoomposium with #ThomasMetzinger Part 1: 🧠 #Phenomenology, #Modeling & the #ethicalBoundaries of #artificialConsciousness 🤖
🎥 Watch Part 1: https://youtu.be/bn9m2CRot-Y
📎More information: https://philosophies.de/index.php/2026/03/15/selbstmodelle-metzinger/
#Consciousness #ComputationalPhenomenology #MinimalPhenomenalExperience #MPE #ArtificialConsciousness #SyntheticPhenomenology #AIethics #ArtificialSuffering #Qualia #PhilosophyOfMind #CognitiveNeuroscience #Subjectivity #ConsciousnessResearch #ArtificialIntelligence

With the market launch of humanoid helpers such as the "Neo" from 1X Technologies, robotics is moving into private living rooms. But behind the vision of a mechanical butler lies a technological shadow realm of permanent camera surveillance and human remote control via... | 29.03.2026