I called it a year ago - it's an S-curve.

If I were an investor in a proprietary LLM, I'd be looking to offload those shares soon.

https://garymarcus.substack.com/p/evidence-that-llms-are-reaching-a

Evidence that LLMs are reaching a point of diminishing returns - and what that might mean

The conventional wisdom, well captured recently by Ethan Mollick, is that LLMs are advancing exponentially. A few days ago, in a very popular blog post, Mollick claimed that β€œthe current best estimates of the rate of improvement in Large Language models show capabilities doubling every 5 to 14 months”:

@jasongorman @hazelweakly It's still GPT4...
@erispoe @hazelweakly It's GPT-4 Turbo, which many suspect was a failed attempt at launching GPT-5.
@jasongorman @hazelweakly I haven't seen evidence that Turbo is a different model, which we would expect if it were a failed GPT-5. That's the first time I've heard that theory, though.