My AI agents only helped me halfway with making this image, but halfway is better than no way. Who else is optimizing for time saved wherever they can get it?

Always trying to find ways to cram more life into my life.

How I'm doing things:

🔓️ One sensitive machine logged into my main account where I'm very careful.
👩‍💻 One dev machine with another set of accounts entirely where I'm pretty careful.
🔥 One burner machine logged into a separate new email account with its own separate Claude, ChatGPT, and OpenClaw. Full auto on Codex and Claude Code, with broad permissions granted.
Everything was set up fresh for this, including a burner phone and debit account. It never touches my home Wi-Fi. The only things I pass back to myself from it are images (screenshotted) and text (plain text, never code) via email (a secondary account).
(Why no Ollama Mac Mini? Because right now I'm more interested in saving time than money, and I prefer the most capable models over cheaper local-only setups. I'd consider it when I need to run pipelines where the benefits outweigh the costs.)
Security experts, am I the right amount of paranoid? Too much? Too little?
Part of the game here is to see how much damage I can do on an isolated, disposable 🔥 burner machine, to get a visceral sense of what a 100% non-engineer would experience if they did their whole setup by asking a chatbot and blindly following its instructions.
‼️ For my non-engineer friends skim reading this, the key message is that this is NOT SAFE if you do it with machines/accounts you care about. Only consider it on a segregated setup that's disconnected from whatever has access to your inbox, documents, and credit cards.
The minute your yes-to-everything playground shares accounts, recovery paths, outputs, or habits with your real life, risk leaks across.
For my leader friends: if your staff start doing this on work machines, they may unintentionally tunnel past the safety barriers IT has put in place.
Shadow agents (agents unsanctioned and unmanaged by IT) could expose your organization to all kinds of risks, and given how magical it feels to get work done without having to think about what you're doing, there will be a real temptation for staff to set them up as agents become more capable.
Security is part of the solution, but incentives are the bigger part. Make sure you're thinking about what your organization incentivizes with respect to shadow agents, trust, and how work gets done.
Would love to hear what my community is using agents for. Drop a comment!