This blogpost's sources are completely challenging my pre-existing assumptions about the methodological argument for using reverse-coded items on a survey. I might have to let the evidence totally change one of my scientific practices despite my intuitive feeling of what's "right." NEAT!

https://yannicmeier.de/2026/03/03/why-reversed-items-can-be-problematic-in-survey-research/

Why reversed items can be problematic in survey research

In quantitative psychological research, questionnaires with Likert-style items are mostly used to assess variables like emotions, cognitions, and dispositions. Sometimes, it is possible to fall bac…

Yannic Meier
@grimalkina I saw this in the wild just last week. I looked for existing literature on a scale some colleagues had used, and the key publication (not by the survey developer) basically said: the one reverse-coded item here sucks and wrecks the scale’s psychometric properties. Sure enough, that was true in the dataset I’d been handed, too. I do suspect some failure on the part of the scale’s authors to interrogate whether their construct was actually unidimensional. I think if they’d stopped to ask themselves that, they’d have written a scale with more than one such item.
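To make the "one reverse-coded item wrecks the scale" failure concrete: here's a minimal simulation sketch (the data and numbers are made up for illustration, not from the dataset in the thread). It generates Likert responses from one latent trait, keys one item in reverse, and compares Cronbach's alpha when the analyst forgets to reverse-score that item versus when they remember.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
n = 500
trait = rng.normal(size=n)  # latent construct each item is supposed to measure

# Four positively keyed items and one reverse-keyed item, on a 1-5 scale
pos = np.clip(np.round(3 + trait[:, None] + rng.normal(scale=0.8, size=(n, 4))), 1, 5)
rev = np.clip(np.round(3 - trait + rng.normal(scale=0.8, size=n)), 1, 5)

raw = np.column_stack([pos, rev])        # analyst forgot to reverse-score item 5
fixed = np.column_stack([pos, 6 - rev])  # reverse-scored: x -> 6 - x on a 1-5 scale

alpha_raw = cronbach_alpha(raw)
alpha_fixed = cronbach_alpha(fixed)
print(f"alpha without reverse-scoring: {alpha_raw:.2f}")
print(f"alpha with reverse-scoring:    {alpha_fixed:.2f}")
```

The negatively keyed item pulls the total-score variance down, so alpha collapses until the item is recoded. And this is the *benign* case: the simulation assumes one clean latent trait, whereas the thread's point is that in real data a reverse-coded item can also drag in a second construct (acquiescence, carelessness, wording effects) that no amount of recoding fixes.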
@emjonaitis good thoughts about the places where reliability and construct coherence slide 😬 I find the "loading additional constructs" problem very big and believable