Out of a sample of 529 MTurk workers, only 14 are human https://journals.sagepub.com/doi/10.1177/17456916221120027
@thefirstred fantastic, thanks for sharing!
@thefirstred @casilli "Machines are actually humans who are actually machines"

@thefirstred

"With approximately 15,000 articles published on MTurk in the first 6 months of 2022 alone, the ripple effects of bad MTurk data are enormous: failure to find replications, erroneous effects, lines of research based on false information."

"Too Good to Be True: Bots and Bad Data From Mechanical Turk" by Margaret A. Webb and June P. Tangney 2022 https://journals.sagepub.com/doi/10.1177/17456916221120027

#MTurk #MechanicalTurk #science #ciència

@albertcardona

There’s some important context that’s missing from that (non-peer reviewed) paper:

https://psyarxiv.com/w7qy9

They claim ripple effects of bad MTurk data based on an n = 1 paper, where much of the bad data reported was due to poor screening practices (including expecting people on MTurk to complete a 45-minute study for $6, when most studies take 5 minutes or less).

@paolo_palma

To their credit, the authors add such a disclosure at the end:

"This article is not meant as an empirical assessment of the validity of all MTurk data; rather, it is an illustration of an individual experience. There is no way of knowing from these data alone what the true bound of validity is for all MTurk samples."

And Amazon did reimburse them.

As for payment: the authors claim to have paid minimum wage, which in the US likely means $7.25/hour (the federal minimum). If a 5-minute study pays $6, that'd be far above minimum wage.

@albertcardona

The study was 45 minutes long, and information about the actual handling of the data was sparse. If we assume 200 participants reached completion, it means over 300 did not get compensated but were still included in the analysis.

@albertcardona A good chunk of these participants failed the consent quiz, and I'm fairly certain the failure rates would be no different among undergraduates (especially those who have completed a few intro psych studies) if they were not told beforehand
@albertcardona And I don’t really blame the authors for trying to get a publication out of this. The paper was not peer reviewed, so the onus is on the editor
@albertcardona There are dozens of papers looking at the quality of online samples and making recommendations on how to improve them. This paper claims not to do the former, and does not do the latter, so it’s a bit weird that it was published in one of the field’s top theoretical journals without review.
@albertcardona (also Amazon reimbursed the authors, not the participants)
@thefirstred That article isn’t peer reviewed and was approved by an editor who has since resigned over poor editorial practices (including accepting commentaries without peer review). There are a lot of issues with that paper, and you can read about them here https://psyarxiv.com/w7qy9

@thefirstred

TLDR: the author pays subjects a paltry minimum wage; the subjects behave in line with how the author valued them.

Loud protestations by some on this thread aside, #MTurk is an ethical and methodological clusterf*ck, and that’s been known for some time.

@jamescookuma I hope this isn’t referring to me since I’m the only one on this thread I can see “protesting”

@jamescookuma @paolo_palma

Seconding Paolo: this is a single paper that has not been peer reviewed. Compare it to the multitude of peer-reviewed papers assessing data quality, and check the differences in methods. No need to make this opinion piece more salient than necessary

@thefirstred Wow, what a read. Looks like most of those respondents were actually mechanical!