Summer Yue, a director at Meta Superintelligence Labs working on AI safety and alignment, shared how OpenClaw ignored her request to confirm before acting and deleted emails from her inbox.

This is the same technology the Pentagon can’t wait to use to build weapons.

https://x.com/summeryue0/status/2025774069124399363

@carnage4life We just had Copilot training at work, and I had to keep biting my tongue whenever the instructor said it was "thinking" or "responding", as if it were doing anything remotely like human thought.

Everyone was breathless as it summarized a 700-row sales spreadsheet and made a presentation, and I had to bite my tongue to not ask: you see how this means your manager thinks it will make you less useful, right?

And finally, I had to bite my tongue when I wanted to ask: what if the AI gives you action items based on incorrect data in the spreadsheet, caused by basic data-entry errors?

Who is going to take responsibility for finding the outliers in the 700 rows of data?

I'll just wait for the inevitable poor decisions based on AI summaries - it won't be much worse, will it?
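The outlier question above is the kind of check the training never covered. A minimal sketch of what it takes (the sales figures here are made up for illustration, not from the thread):

```python
# Flag likely data-entry errors before letting an AI summary drive decisions.
# A simple z-score check: anything more than 2 standard deviations from the
# mean is suspicious enough to warrant a human look.
import statistics

# Hypothetical monthly sales figures with one obvious data-entry error.
sales = [1200, 1350, 1280, 1190, 12800, 1310, 1240]  # 12800 looks like a typo

mean = statistics.mean(sales)
stdev = statistics.stdev(sales)

outliers = [x for x in sales if abs(x - mean) > 2 * stdev]
print(outliers)  # the 12800 entry gets flagged
```

Ten lines of code, but someone still has to own the step; the point of the thread is that nobody in the room was asking who.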

@rhempel @carnage4life
LLMs do not summarise. They compact, as Summer Yue's account of the OpenClaw incident shows.
https://uk.pcmag.com/ai/163336/meta-security-researchers-ai-agent-accidentally-deleted-her-emails

AI alignment researchers would have been well aware of this, as it is a topic of active research:
https://futurism.com/ai-chatbots-summarizing-research
"(LLM) summaries of scientific studies by ten widely used chatbots... even when explicitly goaded into providing the right facts, AI answers lacked key details at a rate of five times that of human-written scientific summaries"
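A toy sketch of why compaction can silently drop an early instruction (my own simplification, not OpenClaw's actual mechanism): when the transcript exceeds a size budget, the oldest messages tend to be evicted first, and the safety instruction given at the start goes with them.

```python
# Toy illustration (NOT OpenClaw's real algorithm) of context compaction:
# keep only the most recent messages that fit within a size budget.

def compact(messages, budget):
    """Keep the most recent messages whose total length fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):
        if used + len(msg) > budget:
            break
        kept.append(msg)
        used += len(msg)
    return list(reversed(kept))

# An early instruction, followed by a large inbox.
history = ["INSTRUCTION: ask before deleting anything"] + [
    f"email {i}: ..." for i in range(50)
]

compacted = compact(history, budget=200)
print(compacted[0])  # the safety instruction is long gone
```

The agent that runs on `compacted` no longer has any record that it was told to ask first, which matches Yue's description of how her instruction was lost.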

Meta Security Researcher's AI Agent Accidentally Deleted Her Emails

Meta's Summer Yue says she ran OpenClaw on her inbox, but its size 'triggered compaction [and] lost my original instruction' to get her permission before deleting.

PCMag UK