Suspect Arrested For Allegedly Throwing Molotov Cocktail at Sam Altman's Home
Meta paused work with Mercor, a $10B AI data vendor, after hackers poisoned an open-source Python library called LiteLLM and walked out with four terabytes of data. So, that's bad. And the worst part? The stolen data might include the actual training methodologies that Meta, OpenAI, Anthropic, and Google paid billions to develop. Think about what that means. You can't protect your crown jewels if they're sitting inside a vendor that's connected to your three biggest competitors, all sharing the same open-source tools, all exposed by the same 40-minute window on PyPI before anyone noticed.
🎯 The attack chain here is worth understanding: hackers compromised a security scanner called Trivy, used that access to get credentials for a LiteLLM maintainer, then published two malicious package versions that lasted less than an hour before removal. Forty minutes. That's all it took.
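One defense against exactly this kind of short-lived poisoned release is hash pinning: refuse to install any artifact whose digest doesn't match one recorded before the incident (pip's --require-hashes mode does this at install time). A minimal sketch, with a placeholder digest and a hypothetical file name, not anything from the actual incident:

```python
# Minimal sketch, not the tooling any of these companies actually use:
# verify a downloaded package artifact against a hash pinned before the
# compromise window, so a briefly-poisoned release fails closed.
import hashlib
from pathlib import Path

# Placeholder value, not a real LiteLLM digest; in practice this comes from
# a hash-pinned requirements file or an internal artifact registry.
PINNED_SHA256 = "replace-with-your-pinned-digest"

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected: str) -> None:
    actual = sha256_of(path)
    if actual != expected:
        raise RuntimeError(f"hash mismatch for {path.name}: got {actual}")

# Hypothetical usage:
# verify_artifact(Path("litellm-1.2.3-py3-none-any.whl"), PINNED_SHA256)
```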
💼 Mercor is not some sloppy startup. It's 22-year-old founders, $500M annualized revenue, and clients at the very top of the AI industry. Sophistication doesn't protect you from a poisoned dependency you never thought to audit.
🔍 The question I'd be asking right now if I were a CISO at any of these labs isn't "Were we breached?" It's "How many vendors in our training pipeline are running LiteLLM, and did we even know?"
Most companies audit their own software. Almost nobody audits the software their vendors use to build the data they're buying.
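The inventory step isn't exotic, and it's the same question you'd ask every vendor to answer for their own environments. A rough sketch, assuming a Python environment and an illustrative watchlist of package names (my example, not anyone's actual tooling):

```python
# Inventory the packages installed in the current environment and flag the
# ones on a watchlist ("are we running LiteLLM, and did we even know?").
from importlib import metadata

WATCHLIST = {"litellm", "langchain", "langgraph"}  # illustrative names

def inventory(watchlist: set[str]) -> dict[str, str]:
    found = {}
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if name in watchlist:
            found[name] = dist.version
    return found

if __name__ == "__main__":
    for name, version in sorted(inventory(WATCHLIST).items()):
        print(f"{name}=={version}")
```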
https://thenextweb.com/news/meta-mercor-breach-ai-training-secrets-risk
#Cybersecurity #AIRisk #SupplyChainSecurity #security #privacy #cloud #infosec #ThirdPartyRisk
We keep worrying about AI doing something evil. Which it might, but right now the risk is in the plumbing supporting it. Three vulnerabilities in LangChain and LangGraph: path traversal, unsafe deserialization, SQL injection. Not AI-specific attacks. They're neither novel nor sophisticated; they're the kinds of bugs we've been patching since the late '90s. One of them scored a severity of 9.3 out of 10. "The biggest threat to your enterprise AI data might not be as complex as you think." Remember that you're building AI on top of frameworks you didn't write, can't fully audit, and only update when it's convenient. That's the actual problem.
🔐 Path traversal lets attackers read arbitrary files from the host system, including credentials
🔑 Unsafe deserialization exposes API keys and environment variables at runtime
🗄️ SQL injection in the checkpointing layer leaks conversation history from your AI agents
All three are fixed now. But "fixed" only matters if you've actually applied the patches across every integration. Most organizations haven't.
The lesson isn't about AI security. It's that AI doesn't change what good security engineering looks like. Input validation, parameterized queries, strict path sandboxing. This is stuff your dev team learned before ChatGPT existed.
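For concreteness, here's what two of those basics look like in plain Python. An illustrative sketch with made-up names and schema, not the actual LangChain or LangGraph fixes:

```python
# Minimal sketch of pre-ChatGPT basics: strict path sandboxing and a
# parameterized query. The sandbox root and table schema are illustrative.
import sqlite3
from pathlib import Path

ALLOWED_ROOT = Path("/srv/agent-files").resolve()  # illustrative sandbox root

def safe_read(user_supplied: str) -> bytes:
    # Resolve the path and refuse anything that escapes the sandbox,
    # which is the path-traversal class described above.
    target = (ALLOWED_ROOT / user_supplied).resolve()
    if not target.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"path escapes sandbox: {user_supplied}")
    return target.read_bytes()

def load_checkpoint(conn: sqlite3.Connection, thread_id: str) -> list[tuple]:
    # Parameterized query: the driver binds thread_id, so attacker-controlled
    # input can't rewrite the SQL the way string formatting would allow.
    cur = conn.execute(
        "SELECT step, payload FROM checkpoints WHERE thread_id = ?",
        (thread_id,),
    )
    return cur.fetchall()
```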
If you're deploying AI pipelines and you haven't done a security review of the frameworks underneath them, you're not running an AI strategy. You're running a trust exercise.
https://www.csoonline.com/article/4151814/langchain-path-traversal-bug-adds-to-input-validation-woes-in-ai-pipelines.html
#CyberSecurity #AIRisk #AppSec #security #privacy #cloud #infosec
Two leading AI researchers wrote a book arguing that building superhuman AI will lead to human extinction. Their case: once AI surpasses us, there's no reliable way to control what it pursues.
Not everyone agrees. But the debate is worth following.
Here's the full story: https://www.pasadenastarnews.com/2026/03/28/everyone-dies-why-two-top-scientists-are-ai-doomers/

As firms increasingly incentivize employees to build and oversee complex teams of agents (for example, by measuring and rewarding token consumption as a proxy for performance), people are finding themselves pushed to their cognitive limits. Participants in a recent study described a “buzzing” feeling or a mental fog: difficulty focusing, slower decision-making, and headaches. The authors call this phenomenon “AI brain fry,” defined as mental fatigue from excessive use or oversight of AI tools beyond one’s cognitive capacity. This AI-associated mental strain carries significant costs in the form of increased employee errors, decision fatigue, and intention to quit. The findings also show how AI-driven workflows can be designed to reduce burnout, and they point toward specific manager, team, and organizational practices that can prevent mental fatigue even as AI work intensifies.