I assume LLM output is de facto untrustworthy and must be double-checked against source material.

I have no idea wtf to do when the source material/documentation was itself written by AI.