The real wildcard in the AI agent transformation is what happens when AI companies start charging the "real price" for these services.

There's a chance that they provide so much value that companies absorb the price increases, similar to what happened when Uber stopped subsidizing rides. But there's also a chance that AI agents become unaffordable outside of specific niches. Anthropic charging $25 per code review is a harbinger of the future.

https://the-decoder.com/anthropics-claude-code-subscription-may-consume-up-to-5000-in-compute-per-month-while-charging-the-user-just-200/

@carnage4life

I think this part of the article points to where we're headed (based on personal experience working with smaller models and seeing their progress too):

> To hedge against that risk, Cursor is betting on its own models. A team of about 20 AI researchers is developing the Composer model family, built on open-source foundations like DeepSeek, Kimi, and Qwen, then fine-tuned with proprietary data and reinforcement learning. Version 2.0 marked the release of Cursor's first homegrown coding model. Composer 1.5 is fast, the second-most popular model on the platform, and significantly cheaper for Cursor to run than paying for Anthropic's large models.

I saturated my dev brain a while ago, so I think local/open-source models, plus a return to sound engineering practices instead of everybody losing their mind burning tokens at random, is where we're going.

#llm #llms