Genuine question: why do we expect LLMs like GPT-x to be both truth/knowledge machines and fabulists (i.e., creative generators)? Aren’t the two tasks inherently contradictory? If a model needs to be accurate AND needs to be able to lie (make up arbitrary facts, say, for a piece of fiction), how does the same model do both reliably?