Larry Lessig should really stop speaking to the press. He routinely says things as bafflingly ridiculous as this. https://www.newyorker.com/magazine/2023/10/09/they-studied-dishonesty-was-their-work-a-lie “Trust me” is one of the biggest red flags you can ever utter.
All these Ariely stories remind me of the Innovator’s Dilemma guy, Clayton Christensen. He has his whole story about milkshakes doing a job for truckers that has been told and retold in varying ways, and doesn’t seem to be based on reality. Nobody’s ever called him out on it. All of these behavioralists who consult seem to me to be myth makers.
Completely unrelated, I used to work alongside several pathological liars (either 2 or 4, depending on definitions; all 4 were liars), and you get to know the tells really really well. No story is ever told the same way twice even to the same person. When questioned, the teller comes up with an entirely different explanation instead of ever admitting the thing didn’t happen.
Listen, I was once friends with Mike Daisey. When shit came out, I was not “No, despite all evidence, Mike is absolutely an honest person,” because, you know, he was not. (In fact, when he started talking about Chinese manufacturing, I had a distinct whiff of ordure.)
One of the most remarkable things to me about all these studies that turn out to have misused data, etc., is that most researchers apparently keep incredibly poor records; there's no audit history for changes; Excel is somehow the gold standard. I guess that's true in business, but I somehow thought there would be university, journal, and professional standards that required very specific one-way archiving of data, etc.
@glennf There aren’t and it’s distressing.
@glennf 😬 🤐
@tankgrrl I am surely naive but after this many years, it's incredible
@glennf I literally cannot comment beyond "you're not wrong"
@glennf From my experience in industry, keeping intermediate copies of data is prohibitively expensive, even if you only keep one copy per day.
@JustinDerrick I don't think most research data in social sciences is bulky or changes day to day!

@glennf @JustinDerrick Social media research data? That can change by the hour.

It's certainly going to change faster than you can manually document it, depending on what specifically you're researching.

@AT1ST @glennf And you typically don’t back up just one or two files - it’s usually the whole PC.
@JustinDerrick @glennf To be fair, that's usually an industry thing - at least in universities, backing up a whole PC would be prohibitive overkill for undergraduates working course by course, and I presume even for graduate classes; at most you'd want to limit the backup to an entire project's folder.
@glennf It doesn't help that a lot of early research was pretty loose with records. More recently, 'publish or perish' means record keeping gets put in the too-hard basket. How often in IT do we skip things we haven't made a core part of our workflow? Like source control. And even then we find ways to blast through it.
@glennf Oh that would involve money. Establishing and maintaining the standards is expensive
@glennf
Every PI receiving a grant is their own little small business and it is a free-for-all. Depending on the nature of the grant, there may be some rudimentary training in ethics (e.g., on a training grant), but most training on data retention, figure production, and writing is left to the PI. Some journals do have articles explaining acceptable image adjustments, but they are few and far between.
@glennf very little of what is published in IO psychology, organizational behavior, or HRM passes muster for real science. The questionable research practices abound. I came from the hard sciences into this domain, it has been 20 years, and I still can't get my head around it.
@glennf many people will freely admit that every paper has flaws. What they're not admitting is that nearly every paper is complete crap and made up
@glennf “university standards” is an oxymoron
@glennf The university standard for specific, one-way archiving of data last I was there was called "Final_Final_Final_V2".
@glennf I try to keep meticulous notes about what I do in my home lab. Nevertheless, all too often I start down a path and before long I wonder: Did I do "a" before "b" or was it the other way around? Or I go back to my notes and they're insufficient to replicate what I did. Of course I'm not planning to publish my notes, but I can see that it takes an incredible level of discipline to fully document things.
@glennf FYI, check out the RSE movement, Research Software Engineer(s|ing), an attempt to make research software and data more controlled. Too much software used in research is used once, then lost when it comes time to review. Or researchers leave an organisation and the data is orphaned.

@mattw @glennf

Even in industry, there are a lot of "researchers" who build prototype systems in unsustainable, irreproducible ways and then move on to the next new shiny without investing in reproducible builds, testing, or instrumentation,

leaving UX, QA, reliability and maintenance engineers to wonder "how did this ever work?"

@trochee @glennf True enough. IT is a bit ripe for abuse. Too expensive to do properly when starting up, then 5 years later your company accounts are still done from an Excel spreadsheet that’s “too hard” to replace with something else.
@glennf we're trying to improve things but there's a lot of work to do
https://osf.io/4pd9n/
@mdziemann @glennf Bioinformatics as a field has made big strides on code and data reproducibility.

@glennf you think researchers have time to do anything except writing grants?!

(i kid)

(but not really)

(okay i exaggerate)

(but not much)

@glennf Why would there be standards? There are absolutely no consequences for poor practice, and so people who spend time doing things properly are going to lose out. Only when funders insist will anything change.
@glennf
There are a few journals that actually check code for reproducibility - this is one of the pioneers: https://odum.unc.edu/archive/journal-verification/
@glennf As long as reviews are done by a couple of overworked people in their unpaid spare time, on the basis of which one unpaid editor has to make a decision in their spare time, without any support from people actually trained in data handling and such, this will not change. Some people will do what they can, others will find ways to put in less effort.
@glennf a lot of science is built on trust and repetition. We're often wrong about what we record, and the key is whether another person can reproduce it using our methods and theory... so the dismal record keeping is not as surprising as it looks, I guess.
@glennf late to this party, but it also comes down to the personalities of the researchers. I work with several former post-docs and their normal way of working just doesn’t include thinking about documenting things.
- No comments in code (none at all, and it’s not particularly self-documenting either).
- No notes anywhere useful.
- Merge requests and commits that read “check-in”.
They’re wicked smart people, but we are not going to pass audits of our work if it ever comes to it.
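[Editor's note] Several posters above lament the missing audit history for research data and the "check-in"-grade commit habits. As a concrete illustration, here is a minimal sketch of the kind of append-only trail plain git gives you for a small tabular dataset; the file names, paths, and commit messages are invented for the example:

```shell
#!/bin/sh
# Minimal append-only audit trail for a small dataset using plain git.
# All names and messages are illustrative, not from any real study.
set -e
rm -rf /tmp/study-data && mkdir -p /tmp/study-data && cd /tmp/study-data
git init -q .
git config user.name "Example PI"
git config user.email "pi@example.edu"

# First commit: the raw data, exactly as exported.
printf 'id,score\n1,42\n' > results.csv
git add results.csv
git commit -q -m "Raw export from survey platform"

# A later correction becomes a new commit, never a silent overwrite.
printf '2,37\n' >> results.csv
git commit -q -am "Append late submission from participant 2"

# Every change is now attributable, dated, and diffable.
git log --oneline
```

Even this much beats undocumented spreadsheet edits: `git log` and `git diff` answer "who changed what, and when" for free, and pushing to a remote that nobody force-pushes to makes the history effectively one-way.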