I just used GPT-5.3-Codex-Spark for a simple implementation and it ran out of usage in under 5 minutes.

*This* is what will ultimately cause the current business model to fail outside of big tech and enterprise.

You can't claim the correct usage of an LLM is to pop it in a verification loop, while also charging a per-token access/usage fee that means it's unusable for that purpose.

Perhaps in a few years we'll see local LLMs that work well on consumer hardware with the same capabilities?

@tonyarnold I experienced the same with Claude's extra-usage charges. Adding pay-as-you-go credit on top of your monthly fee doesn't actually let you get extra work done once your limit is reached — it just burns through those extra tokens faster than you'd expect.