This blogpost's sources are completely challenging my pre-existing assumptions about the methodological argument for using reverse-coded items on a survey. I might have to let the evidence totally change one of my scientific practices despite my intuitive feeling of what's "right." NEAT!

https://yannicmeier.de/2026/03/03/why-reversed-items-can-be-problematic-in-survey-research/

Why reversed items can be problematic in survey research

In quantitative psychological research, questionnaires with Likert-style items are mostly used to assess variables like emotions, cognitions, and dispositions. Sometimes, it is possible to fall bac…

Yannic Meier

but how to weigh this against the recommendation to balance scales against acquiescence bias? idk 💀

https://link.springer.com/article/10.1007/s11109-026-10124-z#Sec23

Acquiescence Bias and Criterion Validity: Problems and Potential Solutions for Agree-Disagree Scales - Political Behavior

Scholars frequently measure dispositions like populism, conspiracism, racism, and sexism by asking survey respondents whether they agree or disagree with s

SpringerLink
@grimalkina I’ve seen surveys that have a “select answer X” question just to make sure respondents are actually paying attention. I’m guessing the idea is that you can toss any surveys that didn’t get that question “correct” (not sure that addresses your specific question here, not my jargon, but I’ve found it interesting to see in the wild)
@TindrasGrove yes, that is a classic attention check. There are also many techniques to look for unusual coherence (e.g., too many of the same responses) to flag poor response quality
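The flagging idea above can be sketched in a few lines. This is a hypothetical illustration, not any standard library's API: the column names (`attention_check`, `q1`…`q4`), the expected answer, and the straight-lining threshold are all made up for the example.

```python
# Hypothetical sketch: flag low-quality survey responses using
# (1) an attention check and (2) a straight-lining heuristic.
# All names and thresholds here are illustrative assumptions.

def flag_low_quality(row, item_cols, expected_check="agree", max_identical_frac=0.9):
    """Return True if a response row looks low-quality."""
    # 1. Failed the attention check ("please select 'agree' for this item").
    if row.get("attention_check") != expected_check:
        return True
    # 2. Straight-lining: nearly all item responses are identical.
    answers = [row[c] for c in item_cols]
    most_common = max(answers.count(a) for a in set(answers))
    if most_common / len(answers) >= max_identical_frac:
        return True
    return False

# A respondent who answered "4" to everything gets flagged:
resp = {"attention_check": "agree", "q1": 4, "q2": 4, "q3": 4, "q4": 4}
print(flag_low_quality(resp, ["q1", "q2", "q3", "q4"]))  # True
```

In practice the threshold matters: some scales legitimately produce uniform answers from sincere respondents, so a flag like this is a reason to inspect a response, not to discard it automatically.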
@grimalkina I saw this in the wild just last week. Looked for existing literature on a scale some colleagues had used and the key publication (not by the survey developer) basically said, the one reverse coded item here sucks and wrecks the scale’s psychometric properties. Sure enough, that was true in the dataset I’d been handed, too. I do suspect some failure on the part of the scale’s authors to interrogate whether their construct was actually unidimensional. I think if they’d stopped to ask themselves that, they’d have written a scale with more than one such item.
@emjonaitis good thoughts about the places where reliability and construct coherence slide 😬 I find the problem of items loading on additional constructs very big and believable
@grimalkina oh, huh, that’s interesting. As a survey taker I dislike surveys that mix in negated questions because half the time I miss the negation (yay, reading comprehension I guess) but I never considered whether I weighed positive and negative adjectives describing the same thing the same. Seems unlikely I do, from this write-up, and I suspect it’s dead on.
@wordshaper it DEFINITELY makes it a more burdensome cognitive task for readers! But so much of psychology assumes that's worth it and now I don't think so
@grimalkina I am now thinking we need a "Things People Think About Surveys That Are Wrong" list covering the various things people don't think about but that still distort survey information.
@grimalkina @wordshaper in all honesty, if a survey switches back and forth (positive/negative) I run out of cognitive steam and either don’t complete the survey OR stop reading the questions and circle things randomly. But I don’t realize at the time that I’m doing it…my brain just switches to automatic behaviour mode and it’s only later when someone asks a follow up question that I realize I have no idea what I answered or what the questions were.

@grimalkina @wordshaper Thanks for linking the article. Early in my career (late ‘70s) I worked on survey research and the importance of the positive/negative question method was drilled in, along with ordering effects and alternate wordings. The latter was designed to specifically address the fact that words are slippery and what you think might be a polar opposite may not be to someone else. The practical problem was/is that the various remedies end up growing the number of survey questions beyond reasonable attention of the subject. It is interesting to see these issues still coming up 45 years later! (P.s. My dad was a psychologist and Cronbach was a family friend. I was only 11 at that time (late ‘60’s) nevertheless, small world.)

My worry has always been about the quality of the statistical methodology and analysis in published studies. In my own area, 99% of the journal submissions I reviewed were essentially junk in this respect. So I applaud your attention to good science and statistical methodology.

@meltedcheese @wordshaper what a delightful reply to get on this topic, thank you for sharing!! Really made me smile to think of Cronbach as a real person and not just a familiar statistical rule! His and collaborators' work on validity made a deep impression on me especially when I started working in education. As never-solved as these issues are, I feel as a social scientist it's my obligation to try to stay current on behalf of the people sharing their experiences with me :)

@grimalkina Not sure if this is relevant, but I have adopted the practice of avoiding negative cases in conditionals in my code, no matter which language I am using.

I almost always state the condition positively and test it directly. For example:

if (system_is_inactive == false)

is a red flag for me, and I rewrite it as:

if (system_is_active)

Keeping conditions always with the same "direction" in my code reduces my cognitive load when reasoning about it later :-)
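The same idea carries over to any language. Here is a minimal Python sketch (the original snippet reads as C-style; the function and variable names below are invented for illustration):

```python
# Illustrative sketch: keep boolean conditions stated positively so every
# check reads in the same "direction". Names here are made up.

def can_start(system_is_active: bool, queue_has_work: bool) -> bool:
    # Positive conditions: no mental double-negation needed when reading.
    return system_is_active and queue_has_work

# Compare the negated style, which forces the reader to flip each term:
#   if system_is_inactive == False and queue_is_empty != True: ...

print(can_start(True, True))   # True
print(can_start(False, True))  # False
```

Incidentally, Python's style guide (PEP 8) makes a related recommendation: never compare a boolean to `True` or `False` with `==`; just test the value itself.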

@grimalkina
Interesting, but I wish they had included some examples of the alternative, positively worded questions.