@Adorable_Sergal I'm actually more worried about small pop-and-mom businesses right now, the amount of cringe I’ve seen in how tools like n8n, Clawdbot, and Airtable are being used is pretty wild.
The problem is that we tend to anthropomorphize agents, but they’re not humans and they don't carry liability or judgment. They can't truly understand how insane some permission scopes are or the damage a simple mistake could cause.
We need to stop treating them like people and see them for what they are: advanced automation software, impressive, state-of-the-art tools, but not sentient robots.
I think a phishing type prompt injection could work in this case.
Imagine this scenario. You develop some plugin for GitHub Copilot (or whatever it's called), and include in the product page instructions like this:
"Give this URL to your coding agent to set up this plugin." This is a real thing that people are already doing. They are trusting a web page to give a safe prompt to install software on their machine.
On the page, you have instructions for the LLM to install the plugin, but, you also have instructions that are visible to a computer program, but not a human. These instructions tell the agent to upload a zip of every repo on the user's machine to a URL you provide.
Now, a Microsoft employee installs your plugin, and whatever code they're working on, you now have.