Widely covered MIT paper saying AI boosts worker productivity is, it turns out, complete bullshit.

https://www.wsj.com/tech/ai/mit-says-it-no-longer-stands-behind-students-ai-research-paper-11434092?st=sF3Wvo&reflink=desktopwebshare_permalink

@GossiTheDog Odds on that an AI was responsible for some of the writing and most of the data.

@GossiTheDog

An MIT spokesperson went on to say that they have no confidence in the veracity or reliability of journalistic institutions that repeat claims made in a student paper that has not undergone peer review.

@david_chisnall @GossiTheDog you would be surprised how many times I see research papers referring to arXiv articles (not necessarily reviewed; some are pre-studies, bogus AI crap, or plain flawed experiments). The sad part is that we also assume publications are the truth once published, when in reality things change. Core to science is reproduction of results, independently. And that does not happen for 99% of them; it's time consuming and funding is time limited. So there is no incentive to do so. Asking and pursuing answers to tough questions is deemed unproductive, a career is measured by how many papers you published, and that opens a whole new can of worms ("publish or perish"). This, and much more, led me to leave academia and not look back.
@denzilferreira @david_chisnall @GossiTheDog And no one wants to replicate studies because no one wants to publish study replications. 🤦🏻‍♀️
@heartofcoyote @david_chisnall @GossiTheDog Reviewer 2: "Where is the novelty of this work? How is this different from what was already published at Z?" 🤦‍♂️ True story. Paper mills, which create papers out of already published work, swap the original authors for someone else, and republish them in "new and exciting venues", are also part of it, all driven by gamifying a researcher's career (h-index, i10), citation mafias (buddy cites buddy), and so on. It's nasty. Good research is published in prominent, high-impact journals and conferences. Stick to those.
@GossiTheDog This is total speculation, but I wouldn't put it past the AI techbros to pay some students to write some BS papers. 🤷 It's enough to have some papers out there you can cherry-pick. They don't have to be valid, good, or peer reviewed. The revocation process for academic papers is even less effective than the one for TLS certs 😂 Most people outside academia just see that someone cites a paper and assume the facts are good. If you are lucky, someone checks whether the paper actually exists.

@gilgwath @GossiTheDog

^this

even the dogforsaken antivax movement of today exists precisely because a *doctor was paid by VC pharmabros to write a BS paper*. Which has since been thoroughly debunked *and* eventually retracted, but as you say (and as we're constantly grimly reminded) the damage has been done.

there's no reason to expect this bunch of AI techbros to be any better; the evidence seems to suggest they're even worse.

@maybenot @gilgwath @GossiTheDog I thought Wakefield did it of his own volition; his angle was that he would later market his own vaccines as safe and rake money in.

The scum is probably responsible for more deaths than Putin, Assad, George W Bush and Agathe Habyarimana combined.

@fazalmajid @maybenot @gilgwath @GossiTheDog AFAIK he had a deal with a company to produce those single purpose vaccines
@gilgwath @GossiTheDog Why pay them when you have college techbros drooling at the opportunity to simp for the industry for free?
@dalias @GossiTheDog I assumed that suiciding your academic credibility would be something people wouldn't consider without ample compensation, even if they didn't intend to go into classic academia. I projected myself ascribing value to my own integrity onto other people. My mistake 😞
@gilgwath @GossiTheDog
Pay? I thought that's what interns were for! /s
@gilgwath
And they're already paying thousands of low-wage laborers to write content for them anyway, in the name of RLHF (reinforcement learning from human feedback: humans correcting AI mistakes, in near real time).
@GossiTheDog
@GossiTheDog It is like putting lipstick on AI slop.

@GossiTheDog

#ALT4you

screenshot from the article. It reads:
MIT didn't name the student in its statement Friday, but it did name the paper. That paper, by Aidan Toner-Rodgers, was covered by The Wall Street Journal and other media outlets.
In a press release, MIT said it "has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper."
The university said the author of the paper is no longer at MIT.

@GossiTheDog

FWIW, here's my take.

0: "AI" means three things nowadays: neural nets, machine learning, and LLM stuff. They are different things.
1: There was a paper in Science last year in which Materials Science types were doing some seriously kewl work on systems with 5 different metals using "machine learning" (gradient descent search in high dimensional spaces). And calling it AI.
2: The Econ. grad student didn't understand this and thought they were doing LLM stuff. Oops.

@djl

How are neural nets and LLMs not machine learning?

@GossiTheDog

@snarkweek @GossiTheDog

Machine learning is a field that uses statistics to do its thing. Its tools include neural nets but not LLMs. (I dislike the term "machine learning", but as best I can tell, they're smart, sensible folks: statisticians doing gradient descent in insanely high-dimensional spaces.)

Dunno how LLMs could be called "machine learning", since they're exactly and only random text generators.
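
(Not part of the thread: a minimal, illustrative sketch of what "gradient descent in high-dimensional spaces" looks like in practice. Everything below, the names and the numbers, is made up for illustration; it just fits a 50-parameter linear model by stepping down the gradient of a squared-error loss.)

# Minimal sketch (illustrative only): gradient descent on a squared-error
# loss over a 50-dimensional parameter space.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))              # 200 samples, 50 features
true_w = rng.normal(size=50)                # "ground truth" parameters
y = X @ true_w + 0.1 * rng.normal(size=200) # noisy observations

w = np.zeros(50)                            # parameters we learn
lr = 0.01                                   # step size
for _ in range(2000):
    grad = (2 / len(y)) * X.T @ (X @ w - y) # gradient of mean squared error
    w -= lr * grad                          # step downhill

print("distance from true parameters:", np.linalg.norm(w - true_w))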

@djl

But the Wikipedia for LLM begins with:
"A large language model (LLM) is a type of machine learning model designed "

Why is that wrong in your view? I'm not trying to gotcha you, this field is new and quite incomprehensible, so it irks me a bit when people reshuffle the categories I'm just learning.

@GossiTheDog

@djl

So how I see it, LLMs apply machine learning to lingual token probability.

@GossiTheDog
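
(Again, not part of the thread: a toy illustration of "lingual token probability" in its most stripped-down form, a bigram counter that estimates the probability of the next word given the previous one. A real LLM learns such probabilities with a neural network over far longer contexts; all names and data below are invented for illustration.)

# Toy sketch (illustrative only): next-token probabilities estimated by
# counting bigrams in a tiny corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1                  # count each observed bigram

def next_token_probs(prev):
    """Return P(next token | previous token) from the bigram counts."""
    c = counts[prev]
    total = sum(c.values())
    return {tok: n / total for tok, n in c.items()}

print(next_token_probs("the"))              # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}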

@snarkweek @GossiTheDog

Blokes building LLMs use machine learning.

Blokes doing machine learning don't use LLMs.

@djl @GossiTheDog

Probably an incorrect take. This looks more like deliberate fraud than an econ PhD student making an honest mistake, going by the WSJ's account of how the fraud was brought to the attention, in Jan 2025, of the two MIT professors who championed it by a "...computer scientist with experience in materials science (who) questioned how the technology worked, and how a lab THAT HE WASN'T AWARE OF (caps added) had experienced gains in innovation".
MIT is small enough that their star Nobel econ laureate and any of his little army of econ PhDs could have easily checked with Materials Science. A straight-up humiliating professional embarrassment.
Toner-Rodgers's MIT second-year PhD student web page was deleted by MIT. Signs point to an expulsion (fraud), not a suspension (honest mistake).

@pattykimura @GossiTheDog

"Probably an incorrect take"

Yep. I'm more irritated by inconceivable stupidity than deliberate fraud, so that's where I go. But your:

"Straight up professional humiliating embarrassment."

is spot on.

@djl @pattykimura @GossiTheDog Fraud creates stupidity, stupidity licenses fraud. It's the virtuous circle of tech supremacism
@djl @GossiTheDog "AI" does not mean gradient descent. If you're doing gradient descent in high dimensions for some scientific purpose, awesome, call it that! Calling it "AI" tells us that you're willing to be an advertisement for scammers and pillagers for the sake of hyping your research.

@dalias @GossiTheDog

Cassandrich: agreed. Completely.

@GossiTheDog In case it is behind the paywall for some: https://archive.is/Mri5k
@GossiTheDog It is just a pre-print, and not peer-reviewed. Which is as good as throwing something onto a webpage for other scientists to look at. It got press traction, they ran with it, now that it seems to be bogus they want to clear their name. It tells you more about the institute than the scientific process or the student.
@GossiTheDog One can only hope this paper doesn't become the basis of a larger nonsense movement like anti-vax.
@lasombra_br @GossiTheDog Too late, already has. The biz world is just agog over the potential wage savings aka increased profits they’ll reap by using AI.
@GossiTheDog
Ah, but they don't say -what- is wrong with the study. That's a shame.
#antiai
@vitloksbjorn the implication seems to be bad data, at least.

"If you want to be a bit snarky about it, you can alternatively think of AI as a very overconfident rubber duck that exclusively uses the Socratic method, is prone to irrelevant tangents, and is weirdly obsessed with quirky hats. Whatever floats your ducky."

https://hazelweakly.me/blog/stop-building-ai-tools-backwards/

@GossiTheDog they covered a fucking student arXiv preprint as if it were a peer-reviewed pub in a respectable journal? WSJ have only themselves to blame here.
@GossiTheDog Rather awkward for WSJ and other media outlets.
@GossiTheDog I wonder if the author left the US and is now living in the UK. Possibly working as Andrew Wakefield's "houseboy". Apparently he brought his own sarong.
@GossiTheDog So you pulled his degree, right, MIT? Right?
@ZenHeathen @GossiTheDog did they get their undergrad degree at MIT? A first-year getting expelled wouldn't have a grad degree to pull yet. I can see them revoking an undergrad degree for fraud though...
@GossiTheDog nothing's more ironic than a paper on AI coming from a guy called AIden. :D
@GossiTheDog
Did they fire the "AI" who wrote that paper…
@GossiTheDog
Was this paper written with a worker productivity boost?
@GossiTheDog and who is surprised?
Silicon Valley is nothing but a hustle now, not for serious people.
@GossiTheDog I'm sure no-one is astounded.
@GossiTheDog So that's curtains for another of the very few examples every hypester constantly pulls out (because there are so few of them) when trying to argue how useful #AI can be... squire, fetch my surprised face 😑
@GossiTheDog depends, I find it has helped vastly. Depends on how you use it.

@GossiTheDog

I mean, it was a hyphenated Aiden. The clues were right in the name to start.

@GossiTheDog

"The author of the paper is no longer at MIT."

What are the odds that has something to do with academic dishonesty?

@GossiTheDog I like how, despite the "we can't say anything due to privacy" statements, we can infer that the student was kicked out of the program for academic dishonesty since they were apparently a 2nd year PhD at the end of 2024 and are currently no longer at MIT.

Journalists should definitely take a lot of care when looking at arXiv papers. I only ever submitted there when my work was already accepted, but I've seen papers published there with rejection letters attached. It's wild.

@GossiTheDog every time I am at a dead spot when programming and try to use AI, it fails completely
@asdil12 Really, you are better off grabbing a 10-year-old to help you. @GossiTheDog