Sure. We can have different definitions of thinking, and we can disagree about whether AI in its current form thinks at all. It's all very debatable: it certainly gives the illusion of thinking and even has moments that are very convincing, but then you later realize it doesn't really understand what it comes up with. It knows the patterns and predictions, not the content itself, which makes you wonder whether it's thinking at all, which I suspect is your point. If so, I can see it, but thinking seems like a spectrum of traits, and in my experience LLMs showcase some thinking, just not nearly enough. Still very impressive all the same, despite the hallucinations and odd errors.
But if I missed your point, could you please rephrase it?