What do I even do? Someone asked me some questions about our security process, but they fed my responses into an LLM, and are now claiming that the LLM's output represents things I said, when it objectively doesn't.
@silverwizard this may be in the wrong direction entirely, it: can you ask that same llm to compare your original document and the updated document and have it analyze the accuracy of the summary?
@robdrimmie so the person asked me a series of questions, which was not the same as the list of questions they were asked. They recorded my answers, then had the LLM fill in the questions using the transcript as part of the prompt. It's just a wild choice. It feels like intentionally putting words in my mouth.

@silverwizard @robdrimmie I thought that they were perhaps being lazy and summarizing what you said incorrectly, but that's so much worse.

This does feel like a way of intentionally putting words in someone's mouth without any accountability (because the computer did it, not me!)

@silverwizard @robdrimmie It might be worth reminding them that an LLM is never allowed to say "I don't know", and thus just makes things up when there is insufficient information.
@me @robdrimmie yeah, it's a pretty weird situation. In theory these are actually words I said, but rearranged, in a new context, and with a couple of extra words added, which is weird as hell.
@silverwizard @robdrimmie It's almost like they were put into a machine whose sole purpose is to just assemble words into a plausible-seeming arrangement or something...
@silverwizard yeah, that is bonkers!
@robdrimmie yeah, the poor dude is just excited about LLMs and trying to figure out use cases. It just feels bad