AI agents belong in prison

Maybe a not-so-hot take: the AI agents we have today cannot be trusted, and must be put in a cage.
Last Friday, Opus, to which I had granted `terraform plan` permissions to help troubleshoot an integration, suddenly asked to run `terraform apply`, even though the plan showed that a production database would be deleted and recreated (😱), and even though I had explicitly instructed it to investigate only and change nothing. Because it at least asked, catastrophe was averted, but it did get my pulse up.

The problem, of course, is not really the model; any model can go off the rails. The problem was that I had given the agent (part of) my own access for a bit of convenience. If you run your LLM agent with access to ~/.ssh, ~/.config/gcloud, ~/.aws, and your kubeconfig, it may hallucinate your production environment away.
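One way to build the cage, sketched below under assumptions of my own: instead of handing the agent a real `terraform` binary with my credentials behind it, point it at a wrapper that allowlists read-only subcommands and refuses anything that can mutate state. The function name `tf_guard` and the exact allowlist are hypothetical, but the subcommand names are real Terraform CLI commands.

```shell
# Hypothetical sketch: a read-only allowlist wrapper for terraform.
# The agent's PATH gets this instead of the real binary, so mutating
# subcommands (apply, destroy, import, taint, state rm, ...) are
# rejected before terraform ever runs.
tf_guard() {
  case "$1" in
    plan|show|validate|output|fmt|providers|graph)
      # Read-only subcommands: pass straight through.
      terraform "$@"
      ;;
    *)
      # Everything else can mutate infrastructure: refuse loudly.
      echo "refused: 'terraform $1' can mutate infrastructure" >&2
      return 1
      ;;
  esac
}
```

This stops the "it asked to apply" moment from ever becoming an "it applied" moment; the agent can still plan and inspect, but anything destructive fails closed.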





