2% of ICML papers desk rejected because the authors used LLMs in their reviews
https://blog.icml.cc/2026/03/18/on-violations-of-llm-review-policies/
To be clear, as the article says, these authors were offered a choice and agreed to be on the "no LLMs allowed" policy.
And detection was not done with some snake-oil "AI detector" but via invisible prompt injection in the paper PDF, instructing LLMs to insert TWO long phrases into the review. LLM use was then detected by checking whether both phrases appeared in the review.
This did not catch grammar checks or touch-ups of an independently written review. The phrases would only be included if the reviewer fed the PDF to the LLM, in clear violation of their chosen policy.
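The two-phrase check described above is simple enough to sketch. Note this is a hypothetical illustration, not ICML's actual tooling, and the canary phrases below are invented placeholders (the real injected phrases were chosen by the organizers):

```python
# Hypothetical sketch of the two-phrase canary check described above.
# The phrase values are invented placeholders, NOT the actual ICML canaries.
CANARY_PHRASES = (
    "the reflective nature of the methodology resonates deeply",
    "a rich tapestry of interdisciplinary insight",
)

def review_flags_llm_use(review_text: str) -> bool:
    """Flag a review only if BOTH injected canary phrases appear.

    Requiring both long phrases keeps false positives near zero: an
    independently written review is vanishingly unlikely to contain
    either phrase by chance, let alone both.
    """
    text = review_text.lower()
    return all(phrase in text for phrase in CANARY_PHRASES)
```

Requiring both phrases (rather than either one) is what makes the check robust against coincidental matches, which is presumably why two phrases were injected instead of one.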
> After a selection process, in which reviewers got to choose which policy they would like to operate under, they were assigned to either Policy A or Policy B. In the end, based on author demands and reviewer signups, the only reviewers who were assigned to Policy A (no LLMs) were those who explicitly selected “Policy A” or “I am okay with either [Policy] A or B.” To be clear, no reviewer who strongly preferred Policy B was assigned to Policy A.
I was thinking this too, but I don't believe this is the case, and I feel like it would not be a good idea either.
Most of these people are likely students; this should be a learning moment, but I don't think it is yet grounds for their entire academic career to be crippled by being unable to publish in a top-tier ML venue.
If this is tolerated, it sends exactly the wrong kind of message. If they are students, they should be banned for life. Let them serve as an example to the myriad future students; that will be the better outcome in the long run.
This didn't trip for people who were merely bouncing ideas off an LLM; it caught people who copied and pasted straight from their LLM.