A model finds a bug in cryptography, and a cryptographer learns new mathematics from it

This article is a response to the criticism: "stop telling fairy tales about how AI helps science — show us examples!" Indeed, without examples, stories of AI's runaway success read like sectarian ravings. In February 2026, Google posted a 151-page preprint on arXiv. Fifty authors from Carnegie Mellon, Harvard, MIT, EPFL, and a dozen other institutions. The document bears the modest title "Accelerating Scientific Research with Gemini: Case Studies and Common Techniques". A modest title, but genuinely impressive content. Preprints about AI capabilities appear every day. Most are benchmarks: the model scored 94.7% instead of last year's 93.2%, round of applause. Here, by contrast, specific researchers describe how they struggled with an open problem for months, then fed it to Gemini Deep Think and, almost magically, got a solution. Or a counterexample. Or a pointer to a theorem from an entirely different area of mathematics that they had never heard of. Some of the stories there deserve a discussion of their own. Fascinating! Read more

https://habr.com/ru/companies/bar/articles/993300/

#Gemini #LLM #SNARG #zkSNARK #LWE #верификация_доказательств #дерево_Штейнера #reasoning #peer_review #Google_Research


#Peer_review is one of the cornerstones of trust in #ScholComm. This blog post by a researcher about her experience as a reviewer for #MDPI is worth reading for anyone considering publishing in, reading, or reviewing for such journals.

https://deevybee.blogspot.com/2024/08/guest-post-my-experience-as-reviewer.html

My experience as a reviewer for MDPI

  Guest post by René Aquarius, PhD, Department of Neurosurgery, Radboud University Medical Center, Nijmegen, The Ne...

If researchers are willing to use AI to design their experiments, why can’t that same AI be trusted to peer-review others’ results? The processes go hand in hand. Just read the text and sign “I approve it.”

#science
#AI
#peer_review

https://www.nature.com/articles/d41586-025-03506-6

Major AI conference flooded with peer reviews written fully by AI

Controversy has erupted after 21% of manuscript reviews for an international AI conference were found to be generated by artificial intelligence.

op-ed: Academic Publishing Performs a Public Service

The peer-review process is thriving, not broken.

#academia
#peer_review
#NIH

https://www.wsj.com/opinion/academic-publishing-performs-a-public-service-peer-review-a6660a67

Here's a #peer_review question for you all: as a reviewer, upon viewing the other reviewer's comments, if you think there's something factually incorrect about it, what do you do? I suppose if it's significant enough, one might want to privately inform the editor. Anyway, curious to hear people's thoughts on this.
Cite-seeing and reviewing: A study on citation bias in peer review

Citations play an important role in researchers' careers as a key factor in the evaluation of scientific impact. Many anecdotes advise authors to exploit this fact and cite prospective reviewers in an attempt to obtain a more positive evaluation of their submission. In this work, we investigate whether such a citation bias actually exists: does citing a reviewer's own work in a submission cause them to be positively biased towards the submission? In conjunction with the review process of two flagship conferences in machine learning and algorithmic economics, we conduct an observational study to test for citation bias in peer review. In our analysis, we carefully account for various confounding factors such as paper quality and reviewer expertise, and apply different modeling techniques to alleviate concerns about model mismatch. Overall, our analysis covers 1,314 papers and 1,717 reviewers and detects citation bias in both venues we consider. In terms of effect size, by citing a reviewer's work, a submission has a non-trivial chance of receiving a higher score from that reviewer: the expected increase in score is approximately 0.23 on a 5-point Likert item. For reference, a one-point increase in score from a single reviewer improves a submission's position by 11% on average.

More than 10,000 research papers were retracted in 2023 — a new record.

#information #overload #peer_review #fact_checking #retraction #science
https://www.nature.com/articles/d41586-023-03974-8

The number of articles being retracted rose sharply this year. Integrity experts say that this is only the tip of the iceberg.

The best peer review reports are at least 947 words

Based on an analysis of the relationship between peer review reports and subsequent citations, Abdelghani Maddi argues that longer and hence more constructive and engaged peer review reports are cl…

Impact of Social Sciences

When I react with annoyance at claims that #peer_review is completely broken because somebody managed to get a #ChatGPT-generated paper into some for-profit open-access journal, it may help to know that I just spent most of a day revising a manuscript based on 31 queries and suggestions and ca. 134 tracked-change edits from the second round of review. No, serious journals would not have accepted that infamous rat paper. Instead, they say things like "it should be 2.0–5.3 instead of 2-5.3", etc.
@heiseonline reports on a post by @404media@barredo.work about how #chatgpt increasingly seems to be researchers' unfortunate answer to conducting #peer_review. Too many submissions and too many journals have led to the #Peer_review_crisis

RT: https://social.heise.de/users/heiseonline/statuses/112206800507466462
Chair of Management & Digital Markets