A new Quinnipiac poll shows AI adoption rising in the US but trust falling, with most Americans concerned about transparency, regulation and broader societal impact. As more people use AI tools, fewer believe the results are trustworthy. https://techcrunch.com/2026/03/30/ai-trust-adoption-poll-more-americans-adopt-tools-fewer-say-they-can-trust-the-results/ #AIagent #AI #GenAI #AIEthics #AIGovernance
As more Americans adopt AI tools, fewer say they can trust the results | TechCrunch

AI adoption is rising in the U.S., but trust remains low, with most Americans concerned about transparency, regulation, and the technology’s broader societal impact, according to a new Quinnipiac poll.

TechCrunch

"Rarely has a new industry burned through so much good will in such a short time. This is both a chance and a danger."

New in Misaligned: "The AI Industry, Unloved" #AIEthics #AI #AIRegulation

https://read.misalignedmag.com/the-ai-industry-unloved-248ecd2d4304

Due to an error in a facial recognition system created by the startup Clearview AI, an innocent woman in the US spent five months in jail.

Judges tend to place complete trust in #AI-generated results, while developers avoid responsibility because there is no malicious intent in their actions.

Despite the real threat of unlawful arrests, law enforcement systems around the world are unlikely to abandon algorithms.

#algorithmicbias #aiethics

https://edition.cnn.com/2026/03/29/us/angela-lipps-ai-facial-recognition

Police used AI facial recognition to arrest a Tennessee woman for crimes committed in a state she says she’s never visited

A Tennessee grandmother spent more than five months in jail after police used an AI facial recognition tool to link her to crimes committed in North Dakota – a state she says she’d never been to before.

CNN
Google said in a box above the results, "You would enjoy this film, which is showing within 2 km of your present location." lichtblick-kino.org/special/audr... I was horrified, but went. I strongly recommend the film & the (tiny neighbourhood art) cinema too. I stopped using Google. #AI #AIEthics /fin

Lichtblick-Kino – Audre Lorde – The Berlin Years 1984 to 1992

Lichtblick-Kino Berlin – the arthouse cinema in Prenzlauer Berg. Kastanienallee 77, Tel. 030 - 44 05 81 79

Now on #Zenodo: “Second Physics and the Accountability Framework for Automated Driving Systems (ADS): Technical Limits, Human Oversight, and Responsibility for Courts and Regulators.”

🔗https://doi.org/10.5281/zenodo.19334238
#ADS #AIEthics

Second Physics and the Accountability Framework for Automated Driving Systems (ADS): Technical Limits, Human Oversight, and Responsibility for Courts and Regulators

Automated Driving Systems (ADS) are moving from closed, geofenced environments into open public roads. On golf courses, industrial sites, and campus shuttles, current technology already delivers what can reasonably be called automated driving. The remaining questions for courts and regulators are not whether ADS are technically possible, but under what conditions they can be deployed at scale while preserving safety and assignable responsibility.

This technical note introduces a conceptual framework based on Second Physics, in which legal responsibility ρ is treated as a conserved quantity within a socio-technical system where humans, institutions, and technical artefacts interact. In this view, AI systems cannot be the final bearers of responsibility: only human or institutional actors endowed with an underlying source f₀ can ultimately carry ρ. Applied to ADS, this implies that talk of 'AI fault' cannot eliminate responsibility; it can only obscure the flows of ρ between manufacturers, software providers, operators, insurers, regulators, and users.

The note identifies three structural constraints that any acceptable legal regime for ADS should respect: (i) an AI Non-Terminality Principle – AI cannot be the ultimate responsibility bearer; (ii) a Control–Benefit Alignment Principle – responsibility should follow effective control over ADS behaviour and long-term benefits from its operation; and (iii) an Evidence Preservation and Disclosure Principle – logs, sensor data, and software states must be recorded, retained, and made available to the competent authorities after serious accidents. Without such a framework, victims cannot realistically prove 'AI fault', and prudential solvency constraints will deter manufacturers and insurers from large-scale deployment, regardless of technical feasibility.
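The conservation claim in the abstract can be written out as a minimal formal sketch. This is my own illustrative notation under the note's stated assumptions, not a formula taken from the note itself:

```latex
% Responsibility \rho is conserved across the set A of human or
% institutional actors in the socio-technical system: it can flow
% between actors but never terminate in an AI artefact.
\sum_{a \in A} \rho_a(t) = \rho_{\text{total}} \quad \text{for all } t,
\qquad
\rho_a > 0 \;\Rightarrow\; a \text{ is endowed with an underlying source } f_0 .
```

On this reading, an accident can redistribute ρ among manufacturers, operators, insurers, and regulators, but the total never shrinks by being assigned to the AI system.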

Zenodo
Well this is escalating (at least in Germany) – "The spy that folds the laundry" www.leadersnet.de/news/98612,d... -- quoting t3 quoting @[email protected] 's New Scientist article quoting me. #AIEthics illustrated with #AIKitsch

The Spy That Folds the Laundry: The humanoid household robots are coming

With the market launch of humanoid helpers such as the "Neo" from 1X Technologies, robotics is moving into private living rooms. But behind the vision of a mechanical butler lies a technological shadow realm of permanent camera surveillance and human remote control via... | 29.03.2026

Research shows large language models can unmask pseudonymous social media users with up to 90% precision. Experiments correlating accounts across platforms achieved 68% recall, challenging the assumption that pseudonymity protects online privacy. AI makes targeted deanonymisation cheap and scalable rather than requiring extensive manual effort. https://arstechnica.com/security/2026/03/llms-can-unmask-pseudonymous-users-at-scale-with-surprising-accuracy/ #AIagent #AI #GenAI #AIEthics
LLMs can unmask pseudonymous users at scale with surprising accuracy

Pseudonymity has never been perfect for preserving privacy. Soon it may be pointless.

Ars Technica
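The headline figures (90% precision, 68% recall) are standard retrieval metrics. A minimal sketch of how they would be computed for cross-platform account matching; all account names and pairs below are illustrative, not from the study:

```python
# Illustrative precision/recall computation for account matching.
# predicted: account pairs the model claims belong to the same user;
# truth: the actual same-user pairs (ground truth).
predicted = {("alice_A", "alice_B"), ("bob_A", "carol_B"), ("dana_A", "dana_B")}
truth = {("alice_A", "alice_B"), ("dana_A", "dana_B"), ("erin_A", "erin_B")}

true_positives = len(predicted & truth)       # correctly matched pairs
precision = true_positives / len(predicted)   # fraction of claims that were right
recall = true_positives / len(truth)          # fraction of real pairs that were found

print(f"precision={precision:.2f}, recall={recall:.2f}")
```

High precision with lower recall, as reported, would mean the model's matches are usually correct even though it misses some linkable accounts.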
Trust in AI is a political and institutional issue, not just a technical one. Corporate explainability often serves market interests rather than protecting citizens' rights, enabling harm and the evasion of accountability. Genuine accountability and democratic transparency are essential.
Discover more at https://smarterarticles.co.uk/trust-is-not-a-feature-the-corporate-capture-of-ai-transparency?pk_campaign=rss-feed
#HumanInTheLoop #AIethics #AlgorithmicAccountability #ResponsibleAI
Trust Is Not a Feature: The Corporate Capture of AI Transparency

Somewhere between the press releases and the product demos, something went quietly wrong with explainable AI. What began as a serious a...

SmarterArticles
Stanford researchers are warning that AI chatbots frequently provide unreliable advice when asked about personal matters, from financial decisions to relationship problems. A new study tested how models respond to sensitive queries and found consistent failures in accuracy and safety. The findings highlight growing concerns about the ethical implications of AI systems being used as de facto personal advisors without adequate safeguards. https://techcrunch.com/2026/03/28/stanford-study-outlines-dangers-of-asking-ai-chatbots-for-personal-advice/ #AIagent #AI #GenAI #AIEthics #Stanford
Stanford study outlines dangers of asking AI chatbots for personal advice | TechCrunch

While there’s been plenty of debate about AI sycophancy, a new study by Stanford computer scientists attempts to measure how harmful that tendency might be.

TechCrunch