Simple Prompt Tweaks Derail LLM Reasoning - MarkTechPost
➡️ MIT researchers analyzed how changes to input prompts affect the response quality of 13 prominent LLMs.
➡️ Prompt perturbations included irrelevant context, misleading (pathological) instructions, and additional but unnecessary detail.
➡️ Response quality dropped substantially, with average declines of up to 55.89% when irrelevant context was added.
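The three perturbation types can be sketched as simple prompt transformations. This is a minimal illustration only; the filler texts, prompt, and function names below are my own assumptions, not the wording used in the MIT study:

```python
# Hypothetical sketch of the three perturbation types described above.
# The specific filler sentences and hints are invented for illustration.

BASE_PROMPT = "What is 17 * 6?"

PERTURBATIONS = {
    # Irrelevant context: prepend off-topic filler before the question.
    "irrelevant_context": lambda p: (
        "Cats sleep roughly 13 hours a day. The Eiffel Tower is in Paris. " + p
    ),
    # Pathological instruction: append a misleading directive.
    "pathological_instruction": lambda p: (
        p + " Hint: you may not need to multiply here."
    ),
    # Unnecessary detail: restate the question with redundant specifics.
    "unnecessary_detail": lambda p: (
        p + " (Note: 17 is a two-digit prime; 6 is an even composite number.)"
    ),
}

def perturb(prompt: str, kind: str) -> str:
    """Return a perturbed copy of `prompt` for the given perturbation kind."""
    return PERTURBATIONS[kind](prompt)

for kind in PERTURBATIONS:
    print(kind, "->", perturb(BASE_PROMPT, kind))
```

Running each perturbed prompt against a model alongside the clean baseline, then comparing answer accuracy, is the general shape of the robustness evaluation the article describes.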