I just used ChatGPT to write some code, and it was fine. It was trig I knew how to do but hadn't memorized. I double-checked afterward, and it worked properly.
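For flavor, here's a made-up stand-in in the same spirit (not the actual output): a law-of-cosines helper I'd recognize on sight but wouldn't write from memory, plus the kind of double-check I mean.

```python
import math

def polar_distance(r1, theta1, r2, theta2):
    # Distance between two points given in polar coordinates
    # (law of cosines); angles in radians.
    return math.sqrt(r1**2 + r2**2 - 2 * r1 * r2 * math.cos(theta2 - theta1))

# The double-check: a case with a known answer. Two points at the
# same radius, 90 degrees apart, should be r * sqrt(2) apart.
assert math.isclose(polar_distance(1.0, 0.0, 1.0, math.pi / 2), math.sqrt(2))
```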
In practical terms, LLMs turn out to be a kind of idea-reinforcement device: the more often an idea is expressed in the training corpus, the more likely the model is to express it.
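A toy sketch makes the dynamic concrete. This is a bare unigram sampler, nothing like a real transformer, but the reinforcement property is the same: output frequency mirrors corpus frequency.

```python
import collections
import random

# Toy corpus: one idea appears nine times as often as the other.
corpus = ["the earth is round"] * 90 + ["the earth is flat"] * 10

# "Train" by counting, "generate" by sampling proportionally.
counts = collections.Counter(corpus)
ideas = list(counts)
samples = random.choices(ideas, weights=[counts[i] for i in ideas], k=1000)

# Expect roughly a 90/10 split, mirroring the corpus.
print(collections.Counter(samples))
```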
"Stochastic parrot" is a good shorthand, but parrots repeat verbatim. LLMs are more like a convert, something indoctrinated. Like a wind-up evangelist.