I just used GPT-5.3-Codex-Spark for a simple implementation and it ran out of usage in under 5 minutes.
*This* is what will ultimately cause the current business model to fail outside of big tech and enterprise.
You can't claim that the correct way to use an LLM is to put it in a verification loop, while also charging per-token usage fees that make it unusable for exactly that purpose.
Perhaps in a few years we'll see local LLMs with comparable capabilities that run well on consumer hardware?