@jonny i spent a long time building an LLM tool for my wife that uses RAG to provide answers to certain specific legal questions. it was fun, it seemed to work! then i was reading /r/LLMDevs and it turns out models frequently just ignore the context you give them with RAG and answer from their training data, so you have to add elaborate checks to verify that they are actually using the context in their answers. 🙃 like, what are we even doing here.
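fwiw the "checks" don't have to be elaborate to catch the worst cases — a cheap first pass is just measuring lexical overlap between each answer sentence and the retrieved context, and flagging sentences with little support. this is only a rough sketch of the idea (function names and the 0.6 threshold are made up, and a real groundedness check would use entailment or an LLM judge, not word overlap):

```python
import re

def sentence_support(answer: str, context: str) -> list[tuple[str, float]]:
    """For each sentence in the answer, compute the fraction of its content
    words (longer than 3 chars) that also appear in the retrieved context.
    Low scores suggest the model may be answering from training data
    instead of the context it was given."""
    context_words = set(re.findall(r"[a-z']+", context.lower()))
    results = []
    for sent in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = [w for w in re.findall(r"[a-z']+", sent.lower()) if len(w) > 3]
        if not words:
            continue
        overlap = sum(w in context_words for w in words) / len(words)
        results.append((sent, overlap))
    return results

def flag_ungrounded(answer: str, context: str, threshold: float = 0.6) -> list[str]:
    """Return answer sentences whose context overlap falls below threshold."""
    return [s for s, score in sentence_support(answer, context) if score < threshold]

ctx = "The statute of limitations for breach of contract claims is four years."
ans = ("The limitations period for a breach of contract claim is four years. "
       "However, fraud claims can sometimes be brought much later under "
       "equitable tolling doctrines.")
print(flag_ungrounded(ans, ctx))  # flags the second sentence only
```

word overlap obviously has false negatives (a paraphrased hallucination can still overlap), so it's a tripwire for re-checking, not a guarantee the model actually used the context.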