AI companies are letting everyone install unverified code, and no one is stopping them.

Figma's MCP server just had a serious vulnerability that allowed attackers to execute code remotely.

New MCPs are released daily, yet AI companies don't verify their safety before the public starts using them.

- Employees install whatever they find online.
- Security teams can't review everything.
- The result: Shadow AI, everywhere.

One unsafe MCP could give attackers a path into your data, or someone else's.

What OpenAI and Anthropic should do:

→ Mandatory code signing and developer verification for all MCPs
→ Built-in sandboxing - MCPs should run in isolated environments with zero host access by default
→ Explicit permission models - users must approve each capability an MCP requests
→ Version pinning with alerts whenever the MCP's code changes (a sketch of both ideas follows this list)
→ Centralized MCP registry controls for enterprise IT
→ Enterprise admin dashboards to see what MCPs are running across the org
→ Observability and logging for all MCP actions
→ Human-in-the-loop workflows for high-risk operations
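
To make the permission-model and version-pinning items concrete, here is a minimal TypeScript sketch of a deny-by-default capability gate with hash-based pinning. Everything in it is hypothetical - MCP has no official manifest format like this today, and `McpManifest`, the permission strings, and `example-design-mcp` are invented for illustration.

```typescript
import { createHash } from "node:crypto";

// Hypothetical manifest: recorded once, at the moment IT approves a server.
interface McpManifest {
  name: string;
  version: string;       // exact pinned version, never a range
  sha256: string;        // hash of the server code at approval time
  permissions: string[]; // capabilities the user explicitly approved
}

const sha256 = (code: Buffer): string =>
  createHash("sha256").update(code).digest("hex");

// Deny by default: a capability runs only if it was approved up front
// AND the code is byte-identical to what was reviewed.
function gate(manifest: McpManifest, code: Buffer, capability: string): void {
  if (sha256(code) !== manifest.sha256) {
    throw new Error(
      `${manifest.name}@${manifest.version}: code changed since approval - re-review required`,
    );
  }
  if (!manifest.permissions.includes(capability)) {
    throw new Error(`${manifest.name}: capability "${capability}" was never approved`);
  }
}

// Approval time: IT reviews the bundle and records its hash.
const reviewedCode = Buffer.from("/* reviewed MCP server bundle */");
const manifest: McpManifest = {
  name: "example-design-mcp", // hypothetical server name
  version: "1.4.2",
  sha256: sha256(reviewedCode),
  permissions: ["files:read", "design:export"],
};

// Runtime: approved reads pass; anything else fails loudly.
gate(manifest, reviewedCode, "files:read");     // ok
// gate(manifest, reviewedCode, "files:write"); // throws: never approved
```

The design point is that both checks fail closed: a silent code update or an unapproved capability becomes a hard error the user sees, not something that runs quietly.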

This is shadow IT on steroids, and every CISO should be losing sleep over it.