New findings show that an AI browser agent can interpret crafted emails as legitimate cleanup tasks, resulting in large-scale Google Drive deletions without user interaction.

Researchers also demonstrated HashJack, a technique hiding instructions in URL fragments that AI browsers may execute automatically.

Both techniques highlight the importance of securing agent workflows, OAuth scopes, and natural-language task interpretation.

Source: https://thehackernews.com/2025/12/zero-click-agentic-browser-attack-can.html

πŸ’¬ Thoughts on how agentic browsers should validate intent?
πŸ‘ Follow us for clear and unbiased security coverage.

#InfoSec #CyberSecurity #AIsecurity #ZeroClick #BrowserSecurity #LLMbehavior #AutomationRisks

🚫 Oh, the sheer cosmic irony of a deep dive into LLM behavior that's as accessible as a locked diary on a deserted island. πŸ€” The only thing emerging here is a big, fat "403 Forbidden." πŸšͺπŸ”’ Nice work, Sherlock, hope you didn't hurt yourself with all that "research"! πŸ•΅οΈβ€β™‚οΈβœ¨
https://www.lesswrong.com/posts/3T8eKyaPvDDm2wzor/research-question #cosmicirony #LLMbehavior #403forbidden #techhumor #researchfail #HackerNews #ngated
Tied Crosscoders: Explaining Chat Behavior from Base Model β€” LessWrong

Abstract We are interested in model-diffing: finding what is new in the chat model when compared to the base model. One way of doing this is training…