Facing an...unusual...review for my PhD dissertation, I re-created their strangest argument nearly word for word and in the same order with my first one-sentence prompt to Copilot.
Head exploding.
@deevybee Definitely a conversation to be had (especially since dissertation reviewers are paid)!
There was very clearly something off about the review in question, however, beyond the strange, over-explain-y writing style: the many assertions the reviewer makes about what I get wrong are themselves wrong, miss the point, or ignore that I explicitly addressed them...which...doesn't that mean peer review could turn into time spent combatting AI slop and hallucinations? How much wasted time is that going to add to researchers' plates?
(In this case, their weird arguments are thankfully easy to refute without doing a lot of digging, but still....)
So the student has used Copilot to write his / her dissertation. I agree with the other comments here that that is not a good thing. I do understand students are under time and probably financial pressure to get things written on schedule, but at this level one should surely be able to write properly.
That person's credibility HAS to take a major hit. I am not in academia, but this looks like someone just being lazy. I can understand people reading and doing their own work, then getting an LLM to do it for comparison. But to just use an LLM is clearly not very academically professional.
People pay a lot of money to universities and expect / deserve a better service.
@dymaxion
LLMs are next token predictors. If the produced result looks incoherent then it's a good sign that your thesis brings something new. So thumbs up!
I wish you the best dealing with "academia" AI-slop.
Well, the first step is to be aware it is there, and how to recognise it.
Who is the "them" who is so lazy in this story?
@tootstorm
You may find it useful to refer to the ICMJE guidelines, which instruct reviewers to keep manuscripts confidential and forbid referees from using AI without permission. Why should dissertation reviewers have more latitude to use AI than journal referees?
@tootstorm An academic who got paid to skive off real dissertation analysis onto an LLM deserves to be keelhauled.
At the very least, the uni should claw back their pay.
Yikes!
Hope you have a chance to bring that back to the door of whoever shirked their job - without too much bother for you.
@tootstorm @vaurora I'm curious to hear how you deal with this and how well it works.
If it's procedurally possible, I sort of wonder what would happen if you just ignored that review.
@fivetonsflax @vaurora At this point, I am just responding to it. I will learn who it is later, and quietly shake my fist and be incredibly skeptical of anything they do and probably complain until I finally move on. Their review took much longer than is usual to get back to me, which has led to me just waffling in the system for too long without progressing towards graduation. By reporting that this review was conducted in bad faith using an LLM, it might mean the reviewer needs to be replaced and I then have to wait another 3-6 months to graduate.
I'm finishing my replies to it now, after spending nearly two days addressing it, and I can say the entire review had, like, one or two OK-but-unnecessary suggestions for clarity. Everything else (~15 pages of replies at this point) was just slop-fueled nonsense: bizarre authoritative claims with no basis in reality. When my response is anything more complex than a copy-paste of "This text has been peer-reviewed and cannot be changed," it usually involves copy-pasting the sentences immediately before and after whatever bombshell gotcha this person is throwing at me...because the answers are right there; they just didn't actually read it.
I can't imagine the toll this is taking on actual peer review right now -- especially in cases where the editor doesn't have the expertise to gauge whether a seemingly authoritative review is just LLM vomit.