This is probably role play, per the persona selection model, but there’s a lot of interesting research into the hidden “thoughts” of LLMs. Check out Neuronpedia and the Opus model cards for some great examples.

Tracing the thoughts of an LLM

Signs of introspection in LLMs

The persona selection model

A theory of why AI models act like humans

LLMs do not think. The Plagiarism Machines read a million sentences humans wrote about AI thinking and regurgitated them.
Yeah, but saying all that every time is annoying, so I think we should stick with saying “thinking,” with everyone understanding that what we mean isn’t literally identical to thought. Do you have a better solution?
Yeah, not conflating intelligent, creative problem-solving with a glorified search engine that makes up the answers if it can’t lift them wholesale from another source. That would be a good start, right?
Give me a better solution?