Judith Sieker presents "Beyond the Bias: Unveiling the Quality of Implicit Causality Prompt Continuations in Language Models" at #INLG2023

Language models are known to struggle with implicit causality (IC) verbs, so the researchers set out to study IC and coreference in these models

Conclusions
- #LLMs struggle to produce coherent continuations even for relatively simple prompts, beyond the #ImplicitCausality bias
- Both the #InformationDensity of the prompt and the decoding method affect text quality
- Modifying IC prompts affects how well models capture the IC bias, depending on the decoding strategy; however, bias congruence doesn't guarantee higher continuation quality
- Automatic metrics show surprisingly low correlation with human judgments, underscoring the challenges of #NLG evaluation metrics and the need for caution when interpreting them
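The metric-vs-human correlation point can be made concrete with a small sketch. This is not the paper's analysis; the scores below are invented for illustration, and Spearman's rho (a standard choice for comparing metric scores against ordinal human ratings) is implemented here in pure Python:

```python
from math import sqrt
from statistics import mean

def avg_ranks(values):
    """1-based ranks; tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend the tie group while the next value is equal
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        r = (i + j) / 2 + 1  # mean of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = r
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = avg_ranks(x), avg_ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / sqrt(var_x * var_y)

# Hypothetical data: automatic metric scores vs. human ratings (1-5)
metric_scores = [0.72, 0.65, 0.80, 0.58, 0.77, 0.61]
human_ratings = [4, 2, 3, 3, 5, 4]
rho = spearman(metric_scores, human_ratings)
print(round(rho, 3))  # → 0.235, a weak association
```

A rho this far below 1 means ranking continuations by the metric would disagree substantially with ranking them by human preference, which is the kind of mismatch the authors warn about.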