A quotation from Eric Hoffer

When cowardice is made respectable, its followers are without number both from among the weak and the strong; it easily becomes a fashion.

Eric Hoffer (1902-1983) American writer, philosopher, longshoreman
Passionate State of Mind, Aphorism 203 (1955)

More about this quote: wist.info/hoffer-eric/81117/

#quote #quotes #quotation #qotd #erichoffer #acceptability #acquiescence #cowardice #cowardliness #fashion #fear #respectability #socialcontract #socialill #timidity #weakness


Being a #leftist should not be about following your #society's #norms for the bounds of #acceptability of #radical beliefs. It should be about pushing #change in those norms.

We don't change the #Left to be within the #OvertonWindow. We change the Overton window to include the Left.

A quotation from Judith Martin

There are plenty of people who say, “We don’t care about etiquette, but we can’t stand the way so-and-so behaves, and we don’t want him around!” Etiquette doesn’t have the great sanctions that the law has. But the main sanction we do have is in not dealing with these people and isolating them because their behavior is unbearable.

Judith Martin (b. 1938) American author, journalist, etiquette expert [a.k.a. Miss Manners]
Interview (1995-03-06) by Virginia Shea, “Miss Manners’ Guide to Excruciatingly Correct Internet Behavior,” Computerworld, Vol. 29, No. 10

Sourcing, notes: wist.info/martin-judith/78045/

#quote #quotes #quotation #qotd #judithmartin #missmanners #acceptability #behavior #etiquette #isolation #manners #ostracization #sanction #punishment

Happy to share our new paper “Language model acceptability judgements are not always robust to context” https://arxiv.org/abs/2212.08979! We prepend several kinds of context to minimal linguistic #acceptability test pairs and find #LMs (#OPT, #GPT2) can still achieve strong performance on #BLiMP & #SyntaxGym, except in some interesting cases. 🧵 [1/7]

Joint work with @jon , @kanishka, @amuuueller, @keren fuentes, @roger_p_levy, @Adinawilliams

Language model acceptability judgements are not always robust to context

Targeted syntactic evaluations of language models ask whether models show stable preferences for syntactically acceptable content over minimal-pair unacceptable inputs. Most targeted syntactic evaluation datasets ask models to make these judgements with just a single context-free sentence as input. This does not match language models' training regime, in which input sentences are always highly contextualized by the surrounding corpus. This mismatch raises an important question: how robust are models' syntactic judgements in different contexts? In this paper, we investigate the stability of language models' performance on targeted syntactic evaluations as we vary properties of the input context: the length of the context, the types of syntactic phenomena it contains, and whether or not there are violations of grammaticality. We find that model judgements are generally robust when placed in randomly sampled linguistic contexts. However, they are substantially unstable for contexts containing syntactic structures matching those in the critical test content. Among all tested models (GPT-2 and five variants of OPT), we significantly improve models' judgements by providing contexts with matching syntactic structures, and conversely significantly worsen them using unacceptable contexts with matching but violated syntactic structures. This effect is amplified by the length of the context, except for unrelated inputs. We show that these changes in model performance are not explainable by simple features matching the context and the test inputs, such as lexical overlap and dependency overlap. This sensitivity to highly specific syntactic features of the context can only be explained by the models' implicit in-context learning abilities.
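The evaluation protocol the abstract describes can be sketched in miniature: prepend a context to both members of a minimal pair, score each full string under a language model, and count the model as "correct" if the acceptable member gets the higher log-probability. The paper uses real pretrained models (GPT-2, OPT) on BLiMP and SyntaxGym; the toy bigram LM below is purely a hypothetical stand-in so the protocol itself is runnable and self-contained.

```python
import math
from collections import Counter

class BigramLM:
    """Toy add-one-smoothed bigram LM; a hypothetical stand-in for the
    pretrained models (GPT-2, OPT) actually used in the paper."""

    def __init__(self, corpus):
        tokens = []
        for sent in corpus:
            tokens += ["<s>"] + sent.split() + ["</s>"]
        self.unigrams = Counter(tokens)
        self.bigrams = Counter(zip(tokens, tokens[1:]))
        self.vocab = len(self.unigrams)

    def logprob(self, text):
        # Total log-probability of the sentence under the bigram model.
        words = ["<s>"] + text.split() + ["</s>"]
        lp = 0.0
        for a, b in zip(words, words[1:]):
            lp += math.log((self.bigrams[(a, b)] + 1) /
                           (self.unigrams[a] + self.vocab))
        return lp

def judge(lm, context, good, bad):
    """Targeted-evaluation check: with the same context prepended to
    both members of the minimal pair, does the model assign higher
    log-probability to the acceptable sentence?"""
    return (lm.logprob(f"{context} {good}".strip()) >
            lm.logprob(f"{context} {bad}".strip()))

corpus = ["the dogs bark", "the dog barks", "dogs bark loudly"]
lm = BigramLM(corpus)
# Minimal pair differing only in agreement; empty context baseline.
print(judge(lm, "", "the dogs bark", "the dogs barks"))  # True
```

With a real LM, `logprob` would be replaced by the model's summed token log-likelihoods; the paper's manipulation is then just varying what string is passed as `context` (length, syntactic structure, grammaticality) while holding the minimal pair fixed.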

#Acceptability - The quality of being acceptable; acceptableness.