Summary: MDN's new "AI Explain" button on code blocks generates human-like text that may be correct by happenstance, or may contain convincing falsehoods. This is a strange decision for a technical ...
@ashteranic @histoftech I just saw they “fixed” it: https://github.com/mdn/yari/pull/9215
That’s like when a lion escapes its cage at the zoo and they just put up a sign at the entrance saying “Caution, loose lion” instead of closing the zoo until the lion is back in its cage.
Summary: #9214 #9208 — Problem: We don't explain that outputs from AI Help are not guaranteed to be accurate, and that the links to the docs used to generate the prompt exist to validate the response...
OMG this is amazing I wish I could boost it once for each skipped step