Interesting argument I heard recently from an agentic AI person: Since people can be socially engineered, and effective social engineering sites often work across lots of people, we can build a safe-enough agentic browser just by doing at least as well as most people, similar to how self-driving cars don't have to be perfect, just better.

That sounds reasonable to me, but I'm also too close to the teams doing it: am I missing any strong counterargument?

@jyasskin That would be reasonable if agents were only vulnerable to exactly the same types of social engineering attacks that are effective on humans. But agents expose a lot of additional attack vectors via prompt injection, and are prone to errors that a human would never make.
@z I think that doesn't matter if the total harm can be made less, although I'm not very confident in that thought.

@jyasskin @z An agent would be significantly less safe for me.

This feels like an argument that the industry has made people unsafe and is just accepting that as baseline/minimum state now.

@blinkygal @jyasskin There may be a future in which the total harm will be less. But until we have ways of comprehensively mitigating prompt injection, there are too many demonstrated ways to hijack agents for nefarious ends. The current solutions are patchwork and do not address the root causes.