This blog post's sources are seriously challenging my pre-existing assumptions about the methodological argument for using reverse-coded items on a survey. I might have to let the evidence totally change one of my scientific practices despite my intuitive feeling of what's "right." NEAT!

https://yannicmeier.de/2026/03/03/why-reversed-items-can-be-problematic-in-survey-research/

Why reversed items can be problematic in survey research


Yannic Meier

but how to weigh this against the recommendation to counter acquiescence bias by balancing scales with reversed items? idk 💀

https://link.springer.com/article/10.1007/s11109-026-10124-z#Sec23

Acquiescence Bias and Criterion Validity: Problems and Potential Solutions for Agree-Disagree Scales - Political Behavior


SpringerLink
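For anyone unfamiliar with the mechanics under discussion: reverse-coded items have to be recoded back onto the scale's direction before you average a balanced scale. A minimal sketch (the 1–5 range, item names, and `_reversed` suffix are all illustrative assumptions, not from either linked paper):

```python
# Sketch: reverse-scoring Likert items before averaging a balanced scale.
# Assumes a 1..5 response scale; item names and suffix convention are hypothetical.
def reverse_score(response: int, low: int = 1, high: int = 5) -> int:
    """Map a reversed item back onto the scale's direction (5 -> 1, 4 -> 2, ...)."""
    return low + high - response

responses = {"item1": 4, "item2_reversed": 2, "item3": 5}
scored = {
    name: reverse_score(value) if name.endswith("_reversed") else value
    for name, value in responses.items()
}
scale_mean = sum(scored.values()) / len(scored)  # 2 becomes 4, so mean is 13/3
```

The `low + high - response` identity is what makes the recoding symmetric at the scale midpoint, whatever the endpoints are.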
@grimalkina I’ve seen surveys that have a “select answer X” question just to make sure respondents are actually paying attention. I’m guessing the idea is that you can toss any surveys that didn’t get that question “correct” (not sure that addresses your specific question here, not my jargon, but I’ve found it interesting to see in the wild)
@TindrasGrove yes, that is a classic attention check. There are also many techniques that look for unusual response patterns (e.g., straightlining: giving the same answer to too many items) to flag poor response quality
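The two quality flags mentioned in this exchange can be sketched in a few lines. This is an illustrative assumption about how one might implement them (the item name, expected answer, and threshold are hypothetical, not from any specific survey platform):

```python
# Sketch of two common response-quality flags: an instructed-response
# attention check and a straightlining detector. Names/thresholds are assumptions.
def failed_attention_check(row: dict, check_item: str = "attn1", expected: int = 3) -> bool:
    """Flag respondents who missed a 'select answer X' item."""
    return row.get(check_item) != expected

def is_straightlining(likert_responses: list[int], max_share: float = 0.9) -> bool:
    """Flag rows where nearly all answers are identical (suspiciously uniform)."""
    if not likert_responses:
        return False
    most_common = max(likert_responses.count(v) for v in set(likert_responses))
    return most_common / len(likert_responses) >= max_share
```

Note the interaction with the thread's main topic: straightlining is much easier to catch when a scale contains reversed items, since a sincere respondent can't answer a balanced scale uniformly.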