2% of ICML papers desk rejected because the authors used LLMs in their reviews
https://blog.icml.cc/2026/03/18/on-violations-of-llm-review-policies/
I'm amazed that such a simple detection method worked so flawlessly on so many people. It would not catch those who merely used LLMs to help pinpoint strengths and weaknesses in a paper; there are separate techniques for judging that. It only detects those who quite literally copied and pasted the LLM output as their review.
It's incredible that so many people thought it fair for their own paper to be assessed by human reviewers alone, yet would not extend the same courtesy to others.
I'm not surprised at all. The ML research community isn't really a community any more; it has turned into a dog-eat-dog, low-trust, fierce competition. There are so many more people, papers, and so much churn that everyone is fending for themselves. Any moment you charitably spend on community service can feel like a moment taken away from the next project, jeopardizing the next paper, getting you scooped, delaying your graduation, your contract, your funding, your visa, your residence permit, your industry plans, etc. It's a machine. I don't think people outside the PhD system really understand the incentives involved.
To be clear, this is not an excuse, just an explanation of why I am not surprised.