@[email protected] Let me explain. You talk with ChatGPT about the French Revolution. You say something, it replies. That response is drawn from its training (the facts it learned). The dialogue so far then becomes context for further responses. Now the LLM has to draw on both its training and the context, and reply with something that is accurate (per its training) and fits the conversation. Current LLMs are bad at this kind of conversation. Day-to-day chat doesn't involve many facts, so it's easier to satisfy both.