The role of a "human in the loop" isn't to prevent errors. That human is there to be blamed for errors:
https://pluralistic.net/2024/10/30/a-neck-in-a-noose/#is-also-a-human-in-the-loop

#organizations #humanInTheLoop #AIRisks

Pluralistic: AI’s “human in the loop” isn’t (30 Oct 2024) – Pluralistic: Daily links from Cory Doctorow

Autonomous AI Exposes Gaps in Enterprise Resilience Plans

As organizations deploy autonomous AI, they're exposing gaps in their resilience plans, putting business continuity at risk and creating new operational and infrastructure challenges for IT teams to navigate. Traditional security and recovery models are ill-equipped to handle the machine-speed, dynamic environments that…

https://osintsights.com/autonomous-ai-exposes-gaps-in-enterprise-resilience-plans?utm_source=mastodon&utm_medium=social

#AutonomousAi #EnterpriseResilience #OperationalRisk #InfrastructureSecurity #AiRisks

Autonomous AI Exposes Gaps in Enterprise Resilience Plans

Discover how autonomous AI exposes gaps in enterprise resilience plans and learn steps to mitigate operational risks - read now and build a stronger AI strategy.

OSINTSights

LLMs Corrupt Your Documents (and the Theory Dies Twice)

Microsoft Research put numbers on it. 25% degradation over 20 interactions. No plateau.

cekrem.github.io

The prEN 18228 Problem: Why Your AI Risk Assessment Will Fail the First Real Test

Most AI risk assessments look solid on paper and collapse the moment a regulator, client, or auditor asks a simple question: what exactly can go wrong, how likely is it, and what does it cost when it does? That gap is about to matter more. A new European standard, prEN 18228, sets out a formal process for managing risks in AI systems across their full life cycle. It is designed to support regulatory expectations by requiring organizations to identify hazards, estimate and evaluate risks, […]

https://hernanhuwyler.wordpress.com/2026/05/08/the-pren-18228-problem-why-your-ai-risk-assessment-will-fail-the-first-real-test/

Washington insiders admit AI is slipping beyond government control

https://fed.brid.gy/r/https://nerds.xyz/2026/05/ai-regulation-policy-insiders-warning/

The window of opportunity is still open.
'The fact that agentic AI systems can currently undertake only comparatively simple tasks does not mean the policy community can sit and wait. The early stages of development of a technology provide critical windows of opportunity – that can close very quickly – for implementing effective safety and security measures.'

Excerpt from 'Before it's too late: Why a world of interacting AI agents demands new safeguards' by Dr Vincent Boulanin, Dr Alexander Blanchard and Dr Diego Lopes da Silva for #SIPRI: https://bit.ly/46LQpnS

#agenticAI #lobbying #lobbies #Microsoft #AIEthics #GAFAM #AI #civilLiberties #EU #AIRisks #tech #AIAct

Before it’s too late: Why a world of interacting AI agents demands new safeguards

Increasingly capable and autonomous AI systems cooperating at scale could have unpredictable results for international peace and security.

SIPRI

Iran war shows how AI speeds up military ‘kill chains’

The speed and scale of war are being enhanced by AI systems – but they also bring new risks for civilians and military combatants.

IwPost

AI has crossed a threshold – what Claude Mythos means for the future of cybersecurity | The-14

AI crosses a new frontier as Claude Mythos shows autonomous cyberattack capability, raising urgent questions on security, governance, and global risk management

The-14 Pictures

Half of AI health answers are wrong even though they sound convincing – new study | The-14

Study finds AI health answers often sound convincing but are wrong, exposing risks of misinformation, weak references, and dangers of relying on chatbots.

The-14 Pictures