#ai #codegeneration #copilot #hallucination #Java #LLM #maven #slopsquatting #softwaresecurity #supplychainsecurity
https://foojay.io/today/why-java-developers-over-trust-ai-dependency-suggestions/
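The slopsquatting risk described in that article is easiest to counter with a hard gate between "an assistant suggested a dependency" and "the build uses it". A minimal sketch of such a gate, with hypothetical coordinates and an allowlist I'm inventing for illustration (not anything from the article): only Maven coordinates that a human has already reviewed may enter the build.

```python
# Hypothetical guard against slopsquatting: only dependencies whose exact
# groupId:artifactId appear in a reviewed allowlist may enter the build.
# The coordinates below are illustrative placeholders, not real policy.

ALLOWLIST = {
    "com.fasterxml.jackson.core:jackson-databind",
    "org.apache.commons:commons-lang3",
}

def is_approved(coordinate: str) -> bool:
    """Return True only for a 'groupId:artifactId[:version]' whose
    groupId:artifactId is in the allowlist.

    The version is intentionally stripped so that version bumps still go
    through the same review path as the artifact itself.
    """
    parts = coordinate.split(":")
    if len(parts) < 2:
        return False  # malformed coordinate: reject outright
    return ":".join(parts[:2]) in ALLOWLIST

# An assistant suggests a plausible-sounding artifact that was never
# reviewed (the classic slopsquatting pattern) -> rejected:
assert is_approved("org.apache.commons:commons-lang3:3.14.0")
assert not is_approved("org.apache.commons:commons-lang3-utils:1.0.0")
```

The point is not the data structure but the workflow: an AI suggestion is treated as untrusted input until it clears the same review every other dependency gets.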
I had to deal a bit with the "Supply-chain Levels for Software Artifacts" (SLSA) "standard":
https://slsa.dev/
IMO it's a joke, since they do not properly deal with threats from "Includ[ing] a vulnerable dependency (library, base image, bundled file, etc.)". They essentially say "A future version of this standard might deal with that":
https://slsa.dev/spec/v1.2/threats
Vulnerable or compromised dependencies have been the main entry point in recent supply-chain attacks (the XZ backdoor, litellm, Shai-Hulud, ...). A supply-chain security standard that doesn't properly deal with vulnerabilities in dependencies completely misses the point. It's like installing alarms on your windows (to catch burglars trying to enter your home through the windows) when your front door doesn't have a lock.
#SLSA #supplychain #supplychainsecurity #xzbackdoor #ShaiHulud #litellm

SLSA is a security framework. It is a check-list of standards and controls to prevent tampering, improve integrity, and secure packages and infrastructure in your projects, businesses or enterprises. It’s how you get from “safe enough” to being as resilient as possible, at any link in the chain.
This blog explores how cyber threats to chip #manufacturing OT can disrupt global supply chains and how a programmatic CPS approach helps reduce risk, improve visibility, and keep production running.
📖 Read here: https://claroty.com/blog/safeguarding-the-operational-infrastructure-behind-the-worlds-semiconductors
Axios supply chain hit.
Fake Teams error → RAT → npm compromise.
Maintainer targeted via social engineering.
UNC1069 linked.
Human layer = attack surface.
Follow TechNadu.
Meta paused work with a $10B AI data vendor after hackers poisoned an open-source Python library called LiteLLM and walked out with four terabytes of data. So, that's bad. And the worst part? The stolen data might include the actual training methodologies that Meta, OpenAI, Anthropic, and Google paid billions to develop. Think about what that means. You can't protect your crown jewels if they're sitting inside a vendor who's connected to your three biggest competitors, all sharing the same open-source tools, all exposed by the same 40-minute window on PyPI before anyone noticed.
🎯 The attack chain here is worth understanding: hackers compromised a security scanner called Trivy, used that access to get credentials for a LiteLLM maintainer, then published two malicious package versions that lasted less than an hour before removal. Forty minutes. That's all it took.
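A short-lived poisoned release like that is exactly what digest pinning defends against: if the installer refuses any artifact whose hash differs from the one recorded at review time, a republished package fails to install instead of executing. A minimal sketch with made-up file names and contents (pip's `--require-hashes` mode implements this idea for real):

```python
import hashlib

# Pinned digests, recorded when the dependency was originally reviewed.
# The name and bytes below are illustrative placeholders, not real hashes.
PINNED = {
    "example_pkg-1.2.0.tar.gz": hashlib.sha256(b"reviewed artifact bytes").hexdigest(),
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Reject any artifact whose sha256 does not match the pinned digest."""
    expected = PINNED.get(name)
    if expected is None:
        return False  # never seen, never reviewed: refuse to install
    return hashlib.sha256(data).hexdigest() == expected

assert verify_artifact("example_pkg-1.2.0.tar.gz", b"reviewed artifact bytes")
# A maliciously republished version with the same name but different
# bytes fails verification, even inside a 40-minute window:
assert not verify_artifact("example_pkg-1.2.0.tar.gz", b"trojaned artifact bytes")
```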
💼 Mercor is not some sloppy startup. It's 22-year-old founders, $500M annualized revenue, and clients at the very top of the AI industry. Sophistication doesn't protect you from a poisoned dependency you never thought to audit.
🔍 The question I'd be asking right now if I were a CISO at any of these labs isn't "were we breached." It's "how many vendors in our training pipeline are running LiteLLM, and did we even know?"
Most companies audit their own software. Almost nobody audits the software their vendors use to build the data they're buying.
https://thenextweb.com/news/meta-mercor-breach-ai-training-secrets-risk
#Cybersecurity #AIRisk #SupplyChainSecurity #security #privacy #cloud #infosec #ThirdPartyRisk
🧠 The real risk is your supply chain.
7ASecurity wins OSTIF Bug of the Year 2025.
👉https://7asecurity.com/blog/2026/03/7asecurity-ostif-bug-of-the-year-award-2025/
We often talk about supply chain risk like it only means foreign hardware, malware, or compromised vendors.
But it also includes ordinary dependencies.
SDKs. Hosted scripts. Embedded web content. Push vendors. Analytics platforms. Remote code paths.
When government ships an app, those choices carry more weight because public trust is attached to them.
#CyberSecurity #SupplyChainSecurity #AppSec #SecurityArchitecture
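For the hosted-script case in that list, Subresource Integrity lets a page refuse a third-party script whose bytes have changed since review. A sketch of computing the `integrity` attribute value (base64 of the raw sha384 digest), assuming you have the reviewed script bytes locally; the script body here is just an example:

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Compute a Subresource Integrity value for a <script integrity=...> tag.

    SRI uses the base64-encoded raw digest, not the hex form; sha384 is
    the commonly recommended strength.
    """
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# If the CDN-served file is later tampered with, its hash no longer
# matches and the browser refuses to execute it.
value = sri_hash(b"console.log('hello');\n")
assert value.startswith("sha384-")
assert len(base64.b64decode(value[len("sha384-"):])) == 48  # sha384 = 48 bytes
```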
What the Claude Code Leak Teaches Us About AI Supply-Chain Security
This article discusses insecure AI model distribution, as demonstrated by the Claude Code leak. The root cause was a lack of encryption and protection for sensitive AI code during development and distribution. The leak exposed critical components of a proprietary AI system, putting its functionality and intellectual property at risk. The researcher used social engineering to convince a developer to share the source code under the guise of collaborating on an unrelated project; because the code was not adequately safeguarded during distribution, malicious actors could access it easily. The impact included potential reverse engineering and misuse of the proprietary system. The article mentions no specific payout or outcome. To remediate this class of issue, implement end-to-end encryption for sensitive data, especially during the development and distribution phases. Key lesson: AI supply chains must be secured to protect intellectual property and preserve system integrity. #AI #Cybersecurity #SupplyChainSecurity #DataEncryption #Infosec