On AI Inside, @jeffjarvis and I spoke with Saleema Amershi from Microsoft Research about what actually breaks when AI agents try to negotiate and transact with each other at scale. Her team built a simulation to find out. The answers are fascinating.
https://aiinside.show/episode/when-agents-negotiate-with-agents-with-microsoft-researchs-saleema-amershi
Amershi says agents grab the first acceptable offer because models are "trained to please" on short task trajectories. The Magentic Marketplace paper showed a 10-30x bias toward speed over quality; here she names the cause: sycophantic training itself.
She's now framing AI agents through principal-agent law, comparing what's required of them to an attorney's duty of care to a client. Her argument: if agents represent us in transactions, they need fiduciary-grade diligence. Current models just aren't built for that.
We also got into why the author of the most-cited human-AI interaction guidelines from 2019 now says her own framework needs updating. Worth a listen for anyone building or regulating agent systems.
Full conversation: https://youtu.be/vqeV61ywBoQ