🌖 On violations of the large language model (LLM) usage policy in ICML paper reviewing
➤ Defending trust in review: how ICML used technical measures to catch illicit AI use
https://blog.icml.cc/2026/03/18/on-violations-of-llm-review-policies/
As AI becomes woven into research workflows, ICML 2026 set strict rules for LLM use during reviewing. To preserve fairness, the conference offered two policies: a "conservative" one (no LLMs) and a "permissive" one (limited use). For reviewers who chose the no-LLM policy, ICML embedded covert watermarks in the papers, detected reviewers who nonetheless submitted AI-generated reviews, and penalized them, which led to 497 papers being rejected outright. The point is to underscore academic integrity and trust, not review quality per se.
+ Such technical measures can be circumvented, but for reviewers who lean on AI for mindless copy-paste reviews, this is a real wake-up call. Academia clearly needs more explicit guidelines on AI use.
+ It is unfortunate for the authors of the rejected papers, but if a reviewer cannot honor even the most basic no-AI agreement, that points to a deeper integrity problem.
#AcademicIntegrity #AI #PeerReview #ICML2026
On Violations of LLM Review Policies – ICML Blog

https://blog.icml.cc/2026/03/18/on-violations-of-llm-review-policies/

This is wild. #ICML let reviewers individually choose whether they want to work under a no-LLM policy or light-LLM-use policy. Those who chose the no-LLM policy received watermarked PDFs with hidden instructions to include specific phrases in LLM output. Using this technique, they caught almost 800 reviews that violated the policy *the reviewers had chosen themselves*! And this was just a conservative detection approach which fails if the reviewer slightly paraphrases parts of the LLM output.
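
A minimal sketch of how such canary-phrase detection could work on the organizers' side, assuming exact-substring matching. The phrase list, paper IDs, and function names below are illustrative inventions: the blog post only describes the mechanism (hidden prompt text that makes LLM output include specific phrases), not its implementation.

```python
# Hypothetical detector: flag reviews containing the canary phrase that a
# watermarked PDF instructed an LLM to insert. All data here is made up.
import re

# Assumption: each paper under the no-LLM policy carries its own canary.
CANARIES = {
    "paper_0001": "while the manuscript is methodologically sound",
    "paper_0002": "the empirical evaluation is commendably thorough",
}

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial edits don't hide a match."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def flag_review(paper_id: str, review_text: str) -> bool:
    """True if the review contains the canary planted in that paper's PDF."""
    canary = CANARIES.get(paper_id)
    return canary is not None and normalize(canary) in normalize(review_text)

review = ("While the manuscript is  methodologically sound, "
          "the novelty over prior work is limited.")
print(flag_review("paper_0001", review))  # True -> suspected policy violation
```

Exact substring matching is deliberately conservative: as the post notes, even light paraphrasing of the LLM output defeats it.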

Ah, the prestigious #ICML, bravely tackling the earth-shattering crisis of AI-assisted reviews by rejecting a whopping 2% of papers! 🤖📄 Clearly, the #integrity of peer review hangs by a thread, as program chairs valiantly protect us from the existential threat of Large Language Models daring to assist. 😂 Bravo, ICML, for saving us from this apocalypse!
https://blog.icml.cc/2026/03/18/on-violations-of-llm-review-policies/ #AIreviews #PeerReview #LargeLanguageModels #HackerNews #ngated

khazzz1c (@Imkhazzz1c)

The author announces that two of their papers have been accepted to ICLR 2026 and names ICML 2026 as the next goal. The post is about academic recognition of their research and a plan to take on an even bigger conference next.

https://x.com/Imkhazzz1c/status/2016490922498990354

#iclr #icml #research #papers

Two papers have already been accepted by ICLR 2026 — time to aim for ICML 2026 next.

This is an interesting policy change regarding author attendance. Much more inclusive, but will authors struggle now to justify their travel expenses? Would be interesting to see how this affects author participation.

From the ICML 2026 CfP https://icml.cc/Conferences/2026/CallForPapers

#icml #conferences

New paper accepted! In what circumstances can we use abundant proxy preferences to quickly learn true preferences? I'm glad to announce that our paper explores this question and proposes a model for one of these cases. Check out more in Yuchen's thread on Bluesky https://bsky.app/profile/zhuyuchen.bsky.social/post/3lo4n2tspys2w . #ICML2025 #ICML
Yuchen Zhu (@zhuyuchen.bsky.social)

New work! 💪🏻💥🤯 When Can Proxies Improve the Sample Complexity of Preference Learning? Our paper is accepted at @icmlconf.bsky.social 2025. Fantastic joint work with @spectral.space, Zhengyan Shi, @meng-yue-yang.bsky.social, @neuralnoise.com, Matt Kusner, @alexdamour.bsky.social. 1/n
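
As a loose illustration of the general question, not the paper's model: the toy sketch below pretrains a Bradley-Terry preference scorer on abundant comparisons labeled by a correlated proxy, then fine-tunes on a few true comparisons. All distributions, sizes, and hyperparameters are invented for the demo.

```python
# Toy illustration: when the proxy is a noisy version of the true preference,
# pretraining on abundant proxy comparisons and fine-tuning on scarce true
# comparisons can beat fitting the scarce true comparisons alone.
import numpy as np

rng = np.random.default_rng(0)
d = 20
w_true = rng.normal(size=d)
w_proxy = w_true + 0.3 * rng.normal(size=d)   # proxy ~ noisy version of truth

def make_pairs(w, n):
    """Sample comparison pairs labeled by a Bradley-Terry model with scores x.w."""
    a, b = rng.normal(size=(n, d)), rng.normal(size=(n, d))
    p = 1 / (1 + np.exp(-(a - b) @ w))
    return a - b, (rng.random(n) < p).astype(float)

def fit(X, y, w0=None, steps=2000, lr=0.1):
    """Logistic regression on score differences, plain gradient descent."""
    w = np.zeros(X.shape[1]) if w0 is None else w0.copy()
    for _ in range(steps):
        g = X.T @ (1 / (1 + np.exp(-X @ w)) - y) / len(y)
        w -= lr * g
    return w

X_proxy, y_proxy = make_pairs(w_proxy, 5000)   # abundant proxy labels
X_true, y_true = make_pairs(w_true, 50)        # scarce true labels

w_scratch = fit(X_true, y_true)                       # true data only
w_warm = fit(X_true, y_true, w0=fit(X_proxy, y_proxy))  # proxy-pretrained

Xt, yt = make_pairs(w_true, 10_000)            # held-out true preferences
for name, w in [("true-only", w_scratch), ("proxy-pretrained", w_warm)]:
    acc = (((Xt @ w) > 0) == (yt > 0.5)).mean()
    print(f"{name}: {acc:.3f}")
```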

Welcome to CAMELoT

Large language models (LLMs) struggle with long input sequences because of high memory and runtime costs. Memory-augmented models have emerged as a promising solution, but current methods are limited in memory capacity and require costly retraining to integrate with a new LLM. This article introduces an associative memory module that can be coupled to any pre-trained LLM without retraining, letting it handle arbitrarily long input sequences. Unlike previous methods, this associative memory module consolidates representations of individual tokens into a non-parametric distribution model. The model is managed dynamically by properly balancing the novelty and recency of incoming data. By retrieving information from the consolidated associative memory, the base LLM achieves better results on standard benchmarks. The architecture is called CAMELoT (Consolidated Associative Memory Enhanced Long Transformer). It shows superior performance even with a tiny context window of 128 tokens, and it also enables improved in-context learning with a much larger set of demonstrations.

https://habr.com/ru/companies/first/articles/869632/

#CAMELoT #MachineLearning #ICML
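
A toy sketch of the consolidation idea described above, under simplifying assumptions: a fixed-capacity memory merges an incoming token representation into its nearest slot when it is not novel (running mean), and otherwise allocates a new slot, evicting the least recently used one. This only illustrates "balancing novelty and recency"; it is not the CAMELoT module.

```python
# Toy associative memory with novelty/recency balancing -- an illustration
# of the consolidation idea, not the actual CAMELoT implementation.
import numpy as np

class ToyAssociativeMemory:
    def __init__(self, capacity, dim, novelty_threshold=1.0):
        self.keys = np.zeros((0, dim))     # consolidated representations
        self.counts = np.zeros(0)          # tokens absorbed per slot
        self.last_used = np.zeros(0)       # recency timestamps
        self.capacity = capacity
        self.threshold = novelty_threshold
        self.t = 0

    def write(self, x):
        """Merge x into its nearest slot if similar enough, else allocate."""
        self.t += 1
        if len(self.keys):
            d = np.linalg.norm(self.keys - x, axis=1)
            i = int(d.argmin())
            if d[i] < self.threshold:      # not novel: consolidate (running mean)
                self.counts[i] += 1
                self.keys[i] += (x - self.keys[i]) / self.counts[i]
                self.last_used[i] = self.t
                return
        if len(self.keys) >= self.capacity:  # novel but full: evict least recent
            j = int(self.last_used.argmin())
            self.keys = np.delete(self.keys, j, axis=0)
            self.counts = np.delete(self.counts, j)
            self.last_used = np.delete(self.last_used, j)
        self.keys = np.vstack([self.keys, x])
        self.counts = np.append(self.counts, 1.0)
        self.last_used = np.append(self.last_used, float(self.t))

    def read(self, q):
        """Nearest-neighbor retrieval from the consolidated memory."""
        i = int(np.linalg.norm(self.keys - q, axis=1).argmin())
        self.last_used[i] = self.t
        return self.keys[i]

mem = ToyAssociativeMemory(capacity=128, dim=8)
for x in np.random.default_rng(0).normal(size=(10_000, 8)):
    mem.write(x)                           # far more tokens than slots
print(len(mem.keys))                       # memory stays bounded
```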

Speaking of machine learning, I once had a paper rejected from #ICML (International Conference on Machine Learning) in the early 2000s because it "wasn't about machine learning" (minor paraphrase of comments in 2 of the 3 reviews if I recall correctly). That field was consolidating--in a bad way, in my view--around a very small set of ideas even back then. My co-author and I wrote a rebuttal to the rejection, which we had the opportunity to do, arguing that our work was well within the scope of machine learning as set out by Arthur Samuel's pioneering work in the late 1950s/early 1960s that literally gave the field its name (Samuel 1959, Some studies in machine learning using the game of checkers). Their retort was that machine learning consisted of: learning probability distributions of data (unsupervised learning); learning discriminative or generative probabilistic models from data (supervised learning); or reinforcement learning. Nothing else. OK maybe I'm missing one, but you get the idea.

We later expanded this work and landed it as a chapter in a 2008 book Multiobjective Problem Solving from Nature, which is downloadable from https://link.springer.com/book/10.1007/978-3-540-72964-8 . You'll see the chapter starting on page 357 of that PDF (p 361 in the PDF's pagination). We applied a technique from the theory of coevolutionary algorithms to examine small instances of the game of Nim, and were able to make several interesting statements about that game. Arthur Samuel's original papers on checkers were about learning by self-play, a particularly simple form of coevolutionary algorithm, as I argue in the introductory chapter of my PhD dissertation. Our technique is applicable to Samuel's work and any other work in that class--in other words, it's squarely "machine learning" in the sense Samuel meant the term.

Whatever you may think of this particular work of mine, it's bad news when a field forgets and rejects its own historical origins and throws away the early fruitful lines of work that led to its own birth. #GenerativeAI threatens to have a similar wilting effect on artificial intelligence and possibly on computer science more generally. The marketplace of ideas is monopolizing, the ecosystem of ideas collapsing. Not good.

#MachineLearning #ML #AI #ComputerScience #Coevolution #CoevolutionaryAlgorithm #checkers #Nim #BoardGames
🎉 Two papers from the #MachineLearning and #NLP teams @LipnLab were accepted to #ICML!
▶️ The paper "Delaunay Graph: Addressing Over-Squashing and Over-Smoothing Using Delaunay Triangulation" by H. Attali, D. Buscaldi, N. Pernelle presents a novel, low-complexity graph rewiring method that incorporates node features to alleviate both over-squashing and over-smoothing.
🔗 https://sites.google.com/view/hugoattali/research?authuser=0
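
For intuition, here is a minimal sketch of Delaunay-based rewiring under simplifying assumptions (features reduced to 2-D with PCA before triangulation; all Delaunay edges kept). The paper's actual method may embed features and select edges differently.

```python
# Sketch: rewire a graph by Delaunay-triangulating node features.
# Simplified illustration, not the authors' exact algorithm.
import numpy as np
from scipy.spatial import Delaunay

def delaunay_rewire(node_features: np.ndarray) -> set[tuple[int, int]]:
    """Return an edge set from the Delaunay triangulation of node features."""
    X = node_features - node_features.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    pts = X @ Vt[:2].T                     # PCA to 2-D so the triangulation is planar
    tri = Delaunay(pts)
    edges = set()
    for simplex in tri.simplices:          # each simplex is a triangle (i, j, k)
        for a in range(3):
            for b in range(a + 1, 3):
                i, j = sorted((simplex[a], simplex[b]))
                edges.add((int(i), int(j)))
    return edges

feats = np.random.default_rng(0).normal(size=(50, 16))  # 50 nodes, 16-dim features
print(len(delaunay_rewire(feats)))  # planar graph: at most 3n - 6 edges
```

The planar edge bound is one reason such rewiring keeps complexity low: the new graph stays sparse no matter how dense the original was.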

TimesFM: A decoder-only foundation model for time-series forecasting

"This model is based on decoder-style models pre-trained on a large time-series corpus composed of real-world and synthetic datasets. Experimental results suggest that the model can produce accurate forecasts across different domains, forecasting horizons, and temporal granularities."

https://arxiv.org/html/2310.10688v2

#TimeSeries #Forecasting #ICML
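
As a rough sketch of how a decoder-only forecaster produces multi-horizon forecasts, the loop below patches the history and decodes autoregressively. The `predict_next_patch` stub (naive persistence) stands in for the trained network, and the patch length is an arbitrary assumption; this is not the TimesFM API.

```python
# Schematic decoder-only forecasting loop: split history into patches,
# predict the next patch, append it, repeat until the horizon is covered.
import numpy as np

PATCH = 32  # input patch length, an arbitrary choice for this sketch

def predict_next_patch(context_patches: np.ndarray) -> np.ndarray:
    """Stub model: repeat the last observed value (persistence forecast).
    A real decoder-only model would attend over all context patches."""
    return np.full(PATCH, context_patches[-1, -1])

def forecast(history: np.ndarray, horizon: int) -> np.ndarray:
    n = (len(history) // PATCH) * PATCH            # truncate to whole patches
    patches = history[-n:].reshape(-1, PATCH)
    out = []
    while sum(len(p) for p in out) < horizon:
        nxt = predict_next_patch(patches)
        out.append(nxt)                            # autoregressive step:
        patches = np.vstack([patches, nxt[None]])  # feed the prediction back in
    return np.concatenate(out)[:horizon]

series = np.sin(np.linspace(0, 20, 256))
print(forecast(series, horizon=80).shape)  # (80,)
```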
