If you take a paper and get an LLM to review it, then get the LLM to rewrite the paper and write a reply to the reviewer, and then repeat, what happens? Convergence to something better? Cycles of arbitrary change that never converge? Descent into meaningless drivel?
@neuralreckoning my prediction is meaningless drivel. Using this as evidence:
https://www.reddit.com/r/ChatGPT/comments/1kbj71z/i_tried_the_create_the_exact_replica_of_this/
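The loop in the question can be sketched as a fixed-point iteration. Everything below is a toy stand-in, not a real model call: `toy_review` and `toy_rewrite` are invented placeholders (the "reply to the reviewer" step is omitted for brevity), and the shrink-by-one-word rewrite exists only so the iteration visibly reaches a fixed point.

```python
from typing import Callable, List

def iterate_review_cycle(
    paper: str,
    review_fn: Callable[[str], str],
    rewrite_fn: Callable[[str, str], str],
    max_rounds: int = 10,
) -> List[str]:
    """Run the review -> rewrite loop, stopping if a fixed point is reached.

    Returns every version of the paper, so the caller can inspect whether
    the sequence converged, cycled, or kept drifting.
    """
    versions = [paper]
    for _ in range(max_rounds):
        review = review_fn(paper)
        paper = rewrite_fn(paper, review)
        versions.append(paper)
        if versions[-1] == versions[-2]:  # rewrite no longer changes the text
            break
    return versions

# Toy stand-ins (assumptions, not real API calls): the "rewrite" drops the
# last word each round, so the text shrinks toward a one-word fixed point.
def toy_review(paper: str) -> str:
    return f"Too long: {len(paper.split())} words."

def toy_rewrite(paper: str, review: str) -> str:
    words = paper.split()
    return " ".join(words[:-1]) if len(words) > 1 else paper
```

With a real model in place of the stubs, comparing `versions[-1]` to `versions[-2]` (or to earlier entries, to detect cycles) is one concrete way to distinguish the three outcomes the question asks about.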