This is one of the things that frustrates me about these LLM-based coding tools: too much wrong-headed certainty. I've been using these classes and their ancestors for going on 30 years now, and I sure as hell don't know off the top of my head. Saying "I don't know" is tons better than hallucinating an incorrect answer.
@paul that’s a problem with how the model is trained: it breeds out the concepts of uncertainty and not knowing. As a model, you might or might not get a cookie if you hallucinate, but you’re guaranteed to get nothing if you refuse to answer.
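Spelled out as a toy expected-reward calculation (made-up numbers, not any actual training objective): if a judged-correct answer earns 1, a wrong answer earns 0, and "I don't know" also earns 0, then guessing always wins in expectation.

# Toy sketch of the "cookie" incentive described above; the reward values
# are assumptions for illustration, not a real RLHF setup.
def expected_reward(p_correct, abstain):
    # correct answer = 1, wrong answer = 0, abstaining = 0
    return 0.0 if abstain else p_correct

for p in (0.05, 0.3, 0.9):
    print(f"p(correct)={p:.2f}  guess={expected_reward(p, False):.2f}  idk={expected_reward(p, True):.2f}")
# Even a 5% chance of being right beats abstaining, so unless wrong answers
# are penalized or abstaining is rewarded, confident guessing dominates.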